May 12, 2025

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Playing with vCluster

After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I’m more interested in the kluctl approach right now).

While reading an entry on the project blog about Cluster API I somehow ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging, or for running CI/CD tests before deploying things on shared clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running at any given time.

In this post I will deploy a vcluster using the k3d_argocd Kubernetes cluster (the one we created in the posts about Argo CD) as the host and will show how to:

  • use its ingress (in our case traefik) to access the API of the virtual one (which removes the need to use the vcluster connect command to access it with kubectl),
  • publish the ingress objects deployed on the virtual cluster on the host ingress, and
  • use the sealed-secrets of the host cluster to manage the virtual cluster secrets.

Creating the virtual cluster

Installing the vcluster application

To create the virtual clusters we need the vcluster command; we can install it with arkade:

❯ arkade get vcluster

The vcluster.yaml file

To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):

controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"

In the controlPlane section we’ve added the proxy.extraSANs entry with an extra host name to make sure it is included in the cluster certificates when we access the API through an ingress.

The exportKubeConfig section creates a kubeconfig secret in the namespace of the vcluster on the host cluster using the provided host name; the secret can be used by GitOps tools, or we can dump it to a file to connect from our machine.

In the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual cluster to the host:

  • We copy the ingress definitions so they use the ingress server that runs on the host, making them reachable from the outside world.
  • The service account synchronization is not really needed, but we enable it because, if we test this configuration on EKS, it would be useful to be able to use IAM roles for the service accounts.

On the opposite direction (from the host to the virtual cluster) we synchronize:

  • The IngressClass objects, to be able to use the host ingress server(s).
  • The Nodes (we are not using the info right now, but it could be interesting if we want to have the real information of the nodes running pods of the virtual cluster).
  • The Secrets from the my-vcluster-default host namespace to the default namespace of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets which are copied automatically to the virtual one. Initially we only copy secrets for one namespace, but if the virtual cluster needs others we can add namespaces on the host and their mappings to the virtual ones in the vcluster.yaml file (see the sketch right after this list).
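
For example, to also make host-managed secrets available in the virtual kube-system namespace (the mapping left commented out in the vcluster.yaml file above), we could create the matching host namespace, uncomment the mapping and re-apply the configuration. A minimal sketch, assuming the same names used in this post:

# Host namespace that will back the virtual "kube-system" namespace
❯ kubectl create namespace my-vcluster-kube-system
# Then uncomment the '"my-vcluster-kube-system/*": "kube-system/*"' mapping in
# vcluster.yaml and re-run the "vcluster create ... --upgrade" command shown in
# the next section to apply the new configuration.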

Creating the virtual cluster

To create the virtual cluster we run the following command:

vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
  --values vcluster.yaml

It creates the virtual cluster in the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we don’t pass --connect=false the command adds an entry to our kubeconfig and launches a proxy to the virtual cluster, which we don’t plan to use).

Adding an ingress TCP route to connect to the vcluster API

As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
      services:
        - name: my-vcluster
          port: 443
  tls:
    passthrough: true

Once we apply those changes the cluster API will be available at the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough), which includes the hostname we use (we adjusted it in the vcluster.yaml file, as explained before).
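
If we want to confirm that the extra SAN from the vcluster.yaml file really made it into the served certificate, we can inspect it with openssl (just a quick check; any TLS inspection tool would do):

❯ openssl s_client -connect my-vcluster-api.lo.mixinet.net:8443 \
    -servername my-vcluster-api.lo.mixinet.net </dev/null 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName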

Getting the kubeconfig for the vcluster

Once the vcluster is running we will have its kubeconfig available on the my-vcluster-kubeconfig secret on its namespace on the host cluster.

To dump it to the ~/.kube/my-vcluster-config file we can do the following:

❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
    --template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config

Once the file is available we can define a vkubectl alias that sets the KUBECONFIG variable to access it:

alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"

Or we can merge the configuration with the one referenced by the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only contains the path to a single file, the merge can be done by running the following:

KUBECONFIG="$KUBECONFIG:~/.kube/my-vcluster-config" kubectl config view \
  --flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"

In the rest of this post we will use the vkubectl alias when connecting to the virtual cluster; for example, to check that it works we can run the cluster-info subcommand:

❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Installing the dummyhttpd application

To test the virtual cluster we are going to install the dummyhttpd application using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
  - name: dummyhttp-configmap
    literals:
      - CM_VAR="Vcluster Test Value"
    behavior: create
    options:
      disableNameSuffixHash: true
patches:
  # Change the ingress host name
  - target:
      kind: Ingress
      name: dummyhttp
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: vcluster-dummyhttp.lo.mixinet.net
  # Add reloader annotations -- it will only work if we install reloader on the
  # virtual cluster, as the one on the host cluster doesn't see the vcluster
  # deployment objects
  - target:
      kind: Deployment
      name: dummyhttp
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          reloader.stakater.com/auto: "true"
          reloader.stakater.com/rollout-strategy: "restart"

It is quite similar to the one we used in the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created

We can check that everything worked using curl:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

The objects available on the vcluster now are:

❯ vkubectl get all,configmap,ingress
NAME                             READY   STATUS    RESTARTS   AGE
pod/dummyhttp-55569589bc-9zl7t   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/dummyhttp    ClusterIP   10.43.51.39    <none>        80/TCP    24s
service/kubernetes   ClusterIP   10.43.153.12   <none>        443/TCP   14m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dummyhttp   1/1     1            1           24s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dummyhttp-55569589bc   1         1         1       24s

NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      24s
configmap/kube-root-ca.crt      1      14m

NAME                                CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    24s

Meanwhile, the my-vcluster namespace of the host cluster contains the following objects:

❯ kubectl get all,configmap,ingress -n my-vcluster
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster   1/1     Running   0          18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster    1/1     Running   0          45s
pod/my-vcluster-0                                         1/1     Running   0          19m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     <none>        80/TCP                   45s
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    <none>        53/UDP,53/TCP,9153/TCP   18m
service/my-vcluster                            ClusterIP   10.43.153.12    <none>        443/TCP,10250/TCP        19m
service/my-vcluster-headless                   ClusterIP   None            <none>        443/TCP                  19m
service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   <none>        10250/TCP                18m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     19m

NAME                                                     DATA   AGE
configmap/coredns-x-kube-system-x-my-vcluster            2      18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster    1      45s
configmap/kube-root-ca.crt                               1      19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster       1      11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster   1      18m
configmap/vc-coredns-my-vcluster                         1      19m

NAME                                                        CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    45s

As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.

Creating a sealed secret for dummyhttpd

To use the host’s sealed-secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add to it the sealed secrets we want to have available as secrets on the default namespace of the virtual cluster:

❯ kubectl create namespace my-vcluster-default
❯ echo -n "Vcluster Boo" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml

After running the previous commands we have the following objects available on the host cluster:

❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     34s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      34s

And we can see that the secret is also available on the virtual cluster with the content we expected:

❯ vkubectl get secrets
NAME               TYPE     DATA   AGE
dummyhttp-secret   Opaque   1      34s
❯ vkubectl get secret/dummyhttp-secret --template="{{.data.SECRET_VAR}}" \
  | base64 -d
Vcluster Boo

But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

Installing the reloader application

To make reloader work on the virtual cluster we just need to install it as we did on the host using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

We deploy it with kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created

As the controller was not available when the secret was created, the pods linked to the Deployment were not updated, but we can force things by removing the secret on the host system; after we do that the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod, and the curl command shows the new output:

❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
❯ sleep 2
❯ vkubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
dummyhttp-78bf5fb885-fmsvs   1/1     Terminating   0          6m33s
dummyhttp-c68684bbf-nx8f9    1/1     Running       0          6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}

If we change the secret on the host system things get updated pretty quickly now:

❯ echo -n "New secret" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}

Pause and restore the vcluster

The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:

❯ kubectl get pods,statefulsets -n my-vcluster
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster              1/1     Running   0          127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster               1/1     Running   0          4m39s
pod/my-vcluster-0                                                    1/1     Running   0          128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster   1/1     Running   0          60m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     128m

Pausing the vcluster

If we don’t need to use the virtual cluster we can pause it; after a short time all Pods are gone because the statefulSet is scaled down to 0. Note that other resources like volumes are not removed, but nothing that has to be scheduled and consumes CPU cycles keeps running, which can translate into a lot of savings when running on clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory that can now be used for other things:

❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME                           READY   AGE
statefulset.apps/my-vcluster   0/0     130m

Now the curl command fails:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found

Although the ingress is still available (it returns a 404 because there is no pod behind the service):

❯ kubectl get ingress -n my-vcluster
NAME                                CLASS     HOSTS                               ADDRESS                            PORTS   AGE
dummyhttp-x-default-x-my-vcluster   traefik   vcluster-dummyhttp.lo.mixinet.net   172.20.0.2,172.20.0.3,172.20.0.4   80      120m

In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate, because the 404 page is served using the wildcard certificate instead of the self-signed one:

❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
*  subject: CN=lo.mixinet.net
*  subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"

Resuming the vcluster

When we want to use the virtual cluster again we just need to use the resume command:

❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster

Once all the pods are running the virtual cluster goes back to its previous state, although, of course, all of them have been restarted.

Cleaning up

The virtual cluster can be removed using the delete command:

❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted

That removes everything we used in this post except the sealed secrets and secrets that we put in the my-vcluster-default namespace, because that namespace was created by us.

If we delete the namespace all the secrets and sealed secrets on it are also removed:

❯ kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deleted

Conclusions

I believe that the use of virtual clusters can be a good option for two use cases that I’ve encountered in real projects in the past:

  • the need for short-lived clusters for developers or teams,
  • the execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand or paused and resumed when needed; see the sketch after this list).
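
As an illustration of the second case, a CI job could create a throwaway virtual cluster, run the tests against it and remove it afterwards. A minimal sketch, assuming a GitLab-style CI_PIPELINE_ID variable, a run-tests.sh script and the vcluster.yaml file used in this post:

# Create an on demand vcluster for this pipeline run
vcluster create "ci-$CI_PIPELINE_ID" --namespace "ci-$CI_PIPELINE_ID" \
  --connect=false --values vcluster.yaml
# Run the integration tests with the connection to the vcluster active
vcluster connect "ci-$CI_PIPELINE_ID" --namespace "ci-$CI_PIPELINE_ID" -- ./run-tests.sh
# Remove everything once the tests are done
vcluster delete "ci-$CI_PIPELINE_ID" --namespace "ci-$CI_PIPELINE_ID"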

For both cases things can be set up using the Apache licensed product, although maybe evaluating the vCluster Platform offering could be interesting.

In any case, when not everything is done inside Kubernetes we will also have to check how to manage the external services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way of creating, deleting, or pausing and resuming those services).

12 May, 2025 11:00AM

Taavi Väänänen

lua entry thread aborted: runtime error: bad request

The Wikimedia Cloud VPS shared web proxy has an interesting architecture: the management API writes an entry for each proxy to a Redis database, and the web server in use (Nginx with Lua support from ngx_http_lua_module) looks up the backend server URL from Redis for each request. This is maybe not how I would design this today, but the basic design dates back to 2013 and has served us well ever since.

However, with a recent operating system upgrade to Debian 12 (we run Nginx from the packages in Debian's repositories), we started seeing mysterious errors that looked like this:

2025/04/30 07:24:25 [error] 82656#82656: *5612 lua entry thread aborted: runtime error: /etc/nginx/lua/domainproxy.lua:32: bad request
stack traceback:
coroutine 0:
[C]: in function 'set_keepalive'
/etc/nginx/lua/domainproxy.lua:32: in function 'redis_shutdown'
/etc/nginx/lua/domainproxy.lua:48: in main chunk, client: [redacted], server: *.wmcloud.org, request: "GET [redacted] HTTP/2.0", host: "codesearch.wmcloud.org", referrer: "https://codesearch.wmcloud.org/search/"

The code in question seems straightforward enough:

function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256) -- line 32
end

When searching for this error online, you'll end up finding advice like "the resty.redis object instance cannot be stored in a Lua variable at the Lua module level". However, our code already stores it as a local variable:

local redis = require 'nginx.redis'
local red = redis:new()
red:set_timeout(1000)
red:connect('127.0.0.1', 6379)

Turns out the issue was with the function definition: functions can also be defined as local. Without that, something somewhere in some situations seems to reference the variables from other requests, instead of using the Redis connection for the current request. (Don't ask me what changed between Debian 11 and 12 making this only break now.) So we needed to change our function definition to this instead:

local function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256)
end

I spent almost an entire workday looking for this, ultimately making a two-line patch to fix the issue. Hopefully by publishing this post I can save that time for everyone else who stumbles upon the same problem after me.

12 May, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: DebConf 25 preparations, PyPA tools updates, Removing libcrypt-dev from build-essential and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-04

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25 Preparations, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25 preparations continue. In April, the bursary team reviewed and ranked bursary applications. Santiago Ruano Rincón examined the current state of the conference’s finances, to see if we could allocate any more money to bursaries. Stefano Rivera supported the bursary team’s work with infrastructure and advice and added some metrics to assist Santiago’s budget review. Santiago was also involved in different parts of the organization, including Content team matters, such as reviewing the first batch of proposals, preparing public information about the new Academic Track, and coordinating different aspects of the Day trip activities and the Conference Dinner.

PyPA tools updates, by Stefano Rivera

Around the beginning of the freeze (in retrospect, definitely too late) Stefano looked at updating setuptools in the archive to 78.1.0. This brings support for more comprehensive license expressions (PEP-639), that people are expected to adopt soon upstream. While the reverse-autopkgtests all passed, it all came with some unexpected complications, and turned into a mini-transition. The new setuptools broke shebangs for scripts (pypa/setuptools#4952).

It also required a bump of wheel to 0.46, and wheel 0.46 now has a dependency outside the standard library (it de-vendored packaging). This meant it was no longer suitable to distribute a standalone wheel.whl file to seed into new virtualenvs, as virtualenv does by default. The good news here is that setuptools doesn’t need wheel any more: it has included its own implementation of the bdist_wheel command since 70.1. But the world hadn’t adapted to take advantage of this yet. Stefano scrambled to get all of these issues resolved upstream and in Debian:

We’re now at the point where python3-wheel-whl is no longer needed in Debian unstable, and it should migrate to trixie.

Removing libcrypt-dev from build-essential, by Helmut Grohne

The crypt function was originally part of glibc, but it got separated to libxcrypt. As a result, libc6-dev now depends on libcrypt-dev. This poses a cycle during architecture cross bootstrap. As the number of packages actually using crypt is relatively small, Helmut proposed removing the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila (not affiliated with Freexian) and estimated the necessary changes. It looks like we may complete this with modifications to less than 300 source packages in the forky cycle. Half of the bugs have been filed at this time. They are tracked with libcrypt-* usertags.

Miscellaneous contributions

  • Carles uploaded a new version of simplemonitor.
  • Carles improved the documentation of salsa-ci-team/pipeline regarding piuparts arguments.
  • Carles closed an FTBFS on gcc-15 on qnetload.
  • Carles worked on Catalan translations using po-debconf-manager: reviewed 57 translations and created their merge requests in salsa, and created 59 bug reports for packages that didn’t merge in more than 30 days. He followed up on merge requests and comments in bug reports, and managed some translations manually for packages that are not in Salsa.
  • Lucas did some work on the DebConf Content and Bursary teams.
  • Lucas fixed multiple CVEs and bugs involving the upgrade from bookworm to trixie in ruby3.3.
  • Lucas fixed a CVE in valkey in unstable.
  • Stefano updated beautifulsoup4, python-authlib, python-html2text, python-packaging, python-pip, python-soupsieve, and unidecode.
  • Stefano packaged python-dependency-groups, a new vendored library in python-pip.
  • During an afternoon Bug Squashing Party in Montevideo, Santiago uploaded a couple of packages fixing RC bugs #1057226 and #1102487. The latter was a sponsored upload.
  • Thorsten uploaded new upstream versions of brlaser, ptouch-driver and sane-airscan to get the latest upstream bug fixes into Trixie.
  • Raphaël filed an upstream bug on zim for a graphical glitch that he has been experiencing.
  • Colin Watson upgraded openssh to 10.0p1 (also known as 10.0p2), and debugged various follow-up bugs. This included adding riscv64 support to vmdb2 in passing, and enabling native wtmpdb support so that wtmpdb last now reports the correct tty for SSH connections.
  • Colin fixed dput-ng’s --override option, which had never previously worked.
  • Colin fixed a security bug in debmirror.
  • Colin did his usual routine work on the Python team: 21 packages upgraded to new upstream versions, 8 CVEs fixed, and about 25 release-critical bugs fixed.
  • Helmut filed patches for 21 cross build failures.
  • Helmut uploaded a new version of debvm featuring a new tool debefivm-create to generate EFI-bootable disk images compatible with other tools such as libvirt or VirtualBox. Much of the work was prototyped in earlier months. This generalizes mmdebstrap-autopkgtest-build-qemu.
  • Helmut continued reporting undeclared file conflicts and suggested package removals from unstable.
  • Helmut proposed build profiles for libftdi1 and gnupg2 to deal with recently added dependencies in the architecture cross bootstrap package set.
  • Helmut managed the /usr-move transition. He worked on ensuring that systemd would comply with Debian’s policy. Dumat continues to locate problems here and there yielding discussion occasionally. He sent a patch for an upgrade problem in zutils.
  • Anupa worked with the Debian publicity team to publish Micronews and Bits posts.
  • Anupa worked with the DebConf 25 content team to review talk and event proposals for DebConf 25.

12 May, 2025 12:00AM by Anupa Ann Joseph

May 11, 2025

Sergio Durigan Junior

Debian Bug Squashing Party Brazil 2025

With the trixie release approaching, I had the idea back in April to organize a bug squashing party with the Debian Brasil community. I believe the outcome was very positive, and we were able to tackle and fix quite a number of release-critical bugs. This is a brief report of what we did.

A remote BSP

It’s not the first time I’ve organized a BSP: back in 2019, I helped throw another similar party in Toronto. The difference this time is that, because Brazil is a big country and (perhaps most importantly) because I’m not currently living there, the BSP had to be done online.

I’m a fan of social interactions (especially with the Brazilian community), and in my experience we usually can achieve much more when we get together in a physical place, but hey, you gotta do what you gotta do…

Most (if not all) of the folks interested in participating had busy weekdays, so it was decided that we would meet during the weekends and try to work on a few bugs over Jitsi. Nothing stopped people from working on bugs during the week as well, of course.

A tag to rule them all

We used the bsp-2025-04-brazil usertag to mark those bugs that were touched by us. You can see the full list of bugs here, although the current list (as of 2025-05-11) is smaller than the one we had by the end of April. I don’t know what happened; maybe it’s some glitch with the BTS, or maybe someone removed the usertag by mistake.

Stats

In total, we had:

  • 7 participants
  • 37 bugs handled. Of those,
  • 35 bugs fixed

The BSP officially started on 04 April 2025, and ended on 30 April 2025. I was able to attend meetings during two weekends; other people participated more sporadically.

Outcome

As I said above, the Debian Brasil community is great and very engaged in the project. Speaking more specifically about the Debian Brasil Devel group, I can say that we have contributors with strong technical skills, and I really love that we have this inclusive, extremely technical culture where debugging and understanding things is really core to pretty much all our discussions.

We already meet weekly on Thursdays to talk shop and help newcomers, so having a remote BSP with this group seemed like a logical thing to do. I’m really glad to see our results and even happier to hear positive feedback from the community during the last MiniDebConf in Maceió.

There’s some interest in organizing another BSP, this time face-to-face and during the next DebConf. I’m all for it, as I love fixing bugs and having a great time with friends. If you’re interested in attending, let me know.

Thanks, and until next time.

11 May, 2025 10:00PM

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for April.

End of 10

I am sure I was speaking in the interest of the whole project when joining the "End of 10" campaign. Here is what I wrote to the initiators:

Hi Joseph and all drivers of the "End of 10" campaign,

On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.

Andreas Tille
Debian Project Leader

I have some goals I would like to share with you for my second term.

Ftpmaster delegation

This splits up into tasks that can be done before and after Trixie release.

Before Trixie:

1. Reducing Barriers to DFSG Compliance Checks

Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts it. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code.

This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.

In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue.

I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me.

The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.

2. Discussing Alternatives

My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.

Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.

While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.

For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles.

My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.

3. Documenting Critical Workflows

Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation -- thanks in particular to Sean Whitton for his work on documenting NEW processing rules -- many other important tasks carried out by the ftpmaster team remain undocumented or only partially so.

Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.

If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.

Once Trixie is released (hopefully before DebConf):

4. Split of the Ftpmaster Team into DFSG and Archive Teams

As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:

  1. DFSG Team
  2. Archive Team (responsible for DAK maintenance and process tooling, including releases)

(Alternative name suggestions are, of course, welcome.) The primary task of the DFSG team would be the processing of the NEW queue and ensuring that packages comply with the DFSG. The Archive team would focus on maintaining DAK and handling the technical aspects of archive management.

I am aware that, in the recent past, the ftpmaster team has decided not to actively seek new members. While I respect the autonomy of each team, the resulting lack of a recruitment pipeline has led to some friction and concern within the wider community, including myself. As Debian Project Leader, it is my responsibility to ensure the long-term sustainability and resilience of our project, which includes fostering an environment where new contributors can join and existing teams remain effective and well-supported. Therefore, even if the current team does not prioritize recruitment, I will actively seek and encourage new contributors for both teams, with the aim of supporting openness and collaboration.

This proposal is not intended as criticism of the current team's dedication or achievements--on the contrary, I am grateful for the hard work and commitment shown, often under challenging circumstances. My intention is to help address the structural issues that have made onboarding and specialization difficult and to ensure that both teams are well-supported for the future.

I also believe that both teams should regularly inform the Debian community about the policies and procedures they apply. I welcome any suggestions for a more detailed description of the tasks involved, as well as feedback on how best to implement this change in a way that supports collaboration and transparency.

My intention with this proposal is to foster a more open and effective working environment, and I am committed to working with all involved to ensure that any changes are made collaboratively and with respect for the important work already being done.

I'm aware that the ideas outlined above touch on core parts of how Debian operates and involve responsibilities across multiple teams. These are not small changes, and implementing them will require thoughtful discussion and collaboration.

To move this forward, I've registered a dedicated BoF for DebConf. To make the most of that opportunity, I'm looking for volunteers who feel committed to improving our workflows and processes. With your help, we can prepare concrete and sensible proposals in advance--so the limited time of the BoF can be used effectively for decision-making and consensus-building.

In short: I need your help to bring these changes to life. From my experience in my last term, I know that when it truly matters, the Debian community comes together--and I trust that spirit will guide us again.

Please also note: we had a "Call for volunteers" five years ago, and much of what was written there still holds true today. I've been told that the response back then was overwhelming--but that training such a large number of volunteers didn't scale well. This time, I hope we can find a more sustainable approach: training a few dedicated people first, and then enabling them to pass on their knowledge. This will also be a topic at the DebCamp sprint.

Dealing with Dormant Packages

Debian was founded on the principle that each piece of software should be maintained by someone with expertise in it--typically a single, responsible maintainer. This model formed the historical foundation of Debian's packaging system and helped establish high standards of quality and accountability. However, as the project has grown and the number of packages has expanded, this model no longer scales well in all areas. Team maintenance has since emerged as a practical complement, allowing multiple contributors to share responsibility and reduce bottlenecks--depending on each team's internal policy.

While working on the Bug of the Day initiative, I observed a significant number of packages that have not been updated in a long time. In the case of team-maintained packages, addressing this is often straightforward: team uploads can be made, or the team can be asked whether the package should be removed. We've also identified many packages that would fit well under the umbrella of active teams, such as language teams like Debian Perl and Debian Python, or blends like Debian Games and Debian Multimedia. Often, no one has taken action--not because of disagreement, but simply due to inattention or a lack of initiative.

In addition, we've found several packages that probably should be removed entirely. In those cases, we've filed bugs with pre-removal warnings, which can later be escalated to removal requests.

When a package is still formally maintained by an individual, but shows signs of neglect (e.g., no uploads for years, unfixed RC bugs, failing autopkgtests), we currently have three main tools:

  1. The MIA process, which handles inactive or unreachable maintainers.
  2. Package Salvaging, which allows contributors to take over maintenance if conditions are met.
  3. Non-Maintainer Uploads (NMUs), which are limited to specific, well-defined fixes (which do not include things like migration to Salsa).

These mechanisms are important and valuable, but they don't always allow us to react swiftly or comprehensively enough. Our tools for identifying packages that are effectively unmaintained are relatively weak, and the thresholds for taking action are often high.

The Package Salvage team is currently trialing a process we've provisionally called "Intend to NMU" (ITN). The name is admittedly questionable--some have suggested alternatives like "Intent to Orphan"--and discussion about this is ongoing on debian-devel. The mechanism is intended for situations where packages appear inactive but aren't yet formally orphaned, introducing a clear 21-day notice period before NMUs, similar in spirit to the existing ITS process. The discussion has sparked suggestions for expanding NMU rules.

While it is crucial not to undermine the autonomy of maintainers who remain actively involved, we also must not allow a strict interpretation of this autonomy to block needed improvements to obviously neglected packages.

To be clear: I do not propose to change the rights of maintainers who are clearly active and invested in their packages. That model has served us well. However, we must also be honest that, in some cases, maintainers stop contributing--quietly and without transition plans. In those situations, we need more agile and scalable procedures to uphold Debian's high standards.

To that end, I've registered a BoF session for DebConf25 to discuss potential improvements in how we handle dormant packages. These discussions will be prepared during a sprint at DebCamp, where I hope to work with others on concrete ideas.

Among the topics I want to revisit is my proposal from last November on debian-devel, titled "Barriers between packages and other people". While the thread prompted substantial discussion, it understandably didn't lead to consensus. I intend to ensure the various viewpoints are fairly summarised--ideally by someone with a more neutral stance than myself--and, if possible, work toward a formal proposal during the DebCamp sprint to present at the DebConf BoF.

My hope is that we can agree on mechanisms that allow us to act more effectively in situations where formerly very active volunteers have, for whatever reason, moved on. That way, we can protect both Debian's quality and its collaborative spirit.

Building Sustainable Funding for Debian

Debian incurs ongoing expenses to support its infrastructure--particularly hardware maintenance and upgrades--as well as to fund in-person meetings like sprints and mini-DebConfs. These investments are essential to our continued success: they enable productive collaboration and ensure the robustness of the operating system we provide to users and derivative distributions around the world.

While DebConf benefits from generous sponsorship, and we regularly receive donated hardware, there is still considerable room to grow our financial base--especially to support less visible but equally critical activities. One key goal is to establish a more constant and predictable stream of income, helping Debian plan ahead and respond more flexibly to emerging needs.

This presents an excellent opportunity for contributors who may not be involved in packaging or technical development. Many of us in Debian are engineers first--and fundraising is not something we've been trained to do. But just like technical work, building sustainable funding requires expertise and long-term engagement.

If you're someone who's passionate about Free Software and has experience with fundraising, donor outreach, sponsorship acquisition, or nonprofit development strategy, we would deeply value your help. Supporting Debian doesn't have to mean writing code. Helping us build a steady and reliable financial foundation is just as important--and could make a lasting impact.

Kind regards Andreas.

PS: In April I also planted my 5000th tree and while this is off-topic here I'm proud to share this information with my fellow Debian friends.

11 May, 2025 10:00PM by Andreas Tille

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSMC 0.2.8 on CRAN: Maintenance

Release 0.2.8 of our RcppSMC package arrived at CRAN yesterday. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is somewhat procedural and contains solely maintenance, either for items now highlighted by the R and CRAN package checks, or to package internals. We had made those changes at the GitHub repo over time since the last release two years ago, and it seemed like a good time to get them to CRAN now.

The release is summarized below.

Changes in RcppSMC version 0.2.8 (2025-05-10)

  • Updated continuous integration script

  • Updated package metadata now using Authors@R

  • Corrected use of itemized list in one manual page

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 May, 2025 04:25PM

May 10, 2025

Taavi Väänänen

Wikimedia Hackathon Istanbul 2025

It's that time of the year again: the Wikimedia Hackathon 2025 happened last weekend in Istanbul. This year was my third time attending what has quickly become one of my favourite events of the year simply due to the concentration of friends and other like-minded nerds in a single location.1

Valerio, Lucas, me and a shark.

Image by Chlod Alejandro is licensed under CC BY-SA 4.0.

This year I did a short presentation about the MediaWiki packages in Debian (slides), which is something I do but I suspect is fairly obscure to most people in the MediaWiki community. I was hoping to do some work on reproducibility of MediaWiki releases, but other interests (plus lack of people involved in the release process at the hackathon) meant that I didn't end up getting any work done on that (assuming this does not count).

Other long-standing projects did end up getting some work done! MusikAnimal and I ended up fixing the Commons deletion notification bot, which had been broken for well over two years at that point (and was at some point in the hackathon plans for last year for both of us). Other projects that I made progress on include supporting multiple types of two-factor devices, and LibraryUpgrader which gained support for rebasing and updating existing patches2.

In addition to hacking, the other highlight of these events is the hallway track. Some of the crowd is people who I've seen at previous events and/or interact very frequently with, but there are also significant parts of the community and the Foundation that I don't usually get to interact with outside of these events. (Although it still feels extremely weird to hear from various mostly-WMF people, with whom I haven't spoken before, that they've heard various (usually positive) stories about me.)

Unfortunately we did not end up having a Cuteness Association meetup this year, but we had an impromptu PGP key signing party which is basically almost as good, right?

However, I did continue a tradition from last year: I ended up nominating Chlod, a friend of mine, to receive +2 access to mediawiki/* during the hackathon. The request is due to be closed sometime tomorrow.

(Usual disclosure: My travel was funded by the Wikimedia Foundation. Thank you! This is my personal blog and these are my own opinions.)

Now that you've read this post, maybe check out posts from others?


  1. Unfortunately you can never have absolutely everyone attending :( ↩︎

  2. Amir, I still have not forgiven you about this. ↩︎

10 May, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

May 09, 2025

Uwe Kleine-König

The Linux kernel's PGP Web of Trust

The Linux kernel's development process makes use of PGP. The most relevant part here is that subsystem maintainers are supposed to use signed tags in their pull requests to Linus Torvalds. As the concept of keyservers is considered broken, Konstantin Ryabitsev maintains a collection of relevant keys in a git repository.

As of today (at commit a0bc65fb27f5033beddf9d1ad97d67c353849be2) there are 602 valid keys tracked in that repository. The requirement for a key to be added there is that there must be at least one trust path from Linus Torvalds' key to this key of length at most 5 within that keyring.

Occasionally it happens that a key loses its trust paths because someone in these paths replaced their key, or keys expired. Currently this affects 2 keys.

However, there is a problem on the horizon: GnuPG 2.4.x started to reject third-party key signatures using the SHA-1 hash algorithm. In general that's good: SHA-1 hasn't been considered secure for more than 20 years. This doesn't directly affect the kernel-pgpkeys repo, because the trust path checking doesn't rely on GnuPG trusting the signatures; there is a dedicated tool that parses the keyring contents and currently accepts signatures using SHA-1. Also, signatures are not usually thrown away, but there are exceptions: recently Theodore Ts'o asked to update his certificate. When Konstantin imported the updated certificate, GnuPG's "cleaning" was applied, which dropped all SHA-1 signatures. So Theodore Ts'o's key lost 168 signatures, among them one by Linus Torvalds on his primary UID.

That made me wonder what would be the effect on the web of trust if all SHA-1 signatures were dropped. Here are the facts:

  • There are 7976 signatures tracked in the korg-pgpkeys repo that are considered valid, 6045 of them use SHA-1.

  • Only considering the primary UID, Linus Torvalds directly signed 40 public keys, 38 of these using SHA-1. One of the two keys that is still "properly" signed doesn't sign any other key, so nearly all trust paths go through a single key.

  • When not considering SHA-1 signatures there are 485 public keys without a trust path from Linus Torvalds of length 5 or less. So today these 485 public keys would not qualify to be added to the pgpkeys git repository. Among the people being dropped are Andrew Morton, Greg Kroah-Hartman, H. Peter Anvin, Ingo Molnar, Junio C Hamano, Konstantin Ryabitsev, Peter Zijlstra, Stephen Rothwell and Thomas Gleixner.

  • The size of the kernel strong set is reduced from 358 to 94.

If you attend Embedded Recipes 2025 next week, there is an opportunity to improve the situation: Together with Ahmad Fatoum I'm organizing a keysigning session. If you want to participate, send your public key to er2025-keysigning@baylibre.com before 2025-05-12 08:00 UTC.

09 May, 2025 07:29PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.22 on CRAN: New Upstream

Version 0.0.22 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documention site.

This release updates the code to the version 1.15.3 of spdlog which was released this morning, and includes version 1.12.0 of fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.22 (2025-05-09)

  • Upgraded to upstream release spdlog 1.15.3 (including fmt 11.2.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documention site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

09 May, 2025 06:55PM

Abhijith PA

Bug squashing party, Kochi

Last weekend, 4 people (3 DDs and 1 soon to be, hopefully in the coming months) sat together for a Bug squashing party in Kochi. We fixed a lot of things, including my broken autopkgtest setup.

BSP-Kochi

It all began with a discussion in #debian-in about never having had any BSPs in India, which then turned into me hosting one. I fixed the dates to 3rd & 4th May so that packages fixed with NMUs could migrate naturally to testing before the hard freeze on 15th May.

Finding a venue was a huge challenge. Unlike other places, we have very limited options for hackerspaces. We also had some company spaces (if we asked), but we would have had to follow their office timings, and finding accommodation nearby was also a challenge.

Later we decided to go with a rental apartment where we could hack all night and sleep. We booked a very bare-minimal apartment for 3 nights and 3 days. I updated the wiki page and sent the announcement.

The apartment didn’t even have Wi-Fi, so we set everything up by ourselves (DebConf style :p ). I shortlisted some newbie bugs, just in case newcomers joined the party, but in the end it was only the 4 of us, plus Kathara who joined remotely.

We started on the night of May 2nd, stocked our cabin with snacks, instant noodles and drinks, arranged beds and tables, and started hacking and having discussions. My autopkgtest-lxc setup was broken; I think it’s related to #1017753, which got fixed magically, and I have now started using autopkgtest-podman.

stack

I learned

  • the reportbug tool can use its own SMTP server by default
  • autoremovals can be extended if we ping the bug report.

On the last day, we went to a nice restaurant and had food. There was a church festival nearby, so we were able to watch a wonderful procession and fireworks at night.

food

All in all we managed to touch 46 bugs, of which 35 are now fixed/done and 11 remain open; some of them will get the done status when the fixes reach testing. It was a fun and productive weekend. More importantly, we had fun.

09 May, 2025 04:46PM

Reproducible Builds (diffoscope)

diffoscope 295 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 295. This version includes the following changes:

[ Chris Lamb ]
* Use --walk over the potentially dangerous --scan argument of zipdetails(1).
  (Closes: reproducible-builds/diffoscope#406)

You can find out more by visiting the project homepage.

09 May, 2025 12:00AM

May 08, 2025

Thorsten Alteholz

My Debian Activities in April 2025

Debian LTS

This was my hundred-and-thirtieth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4145-1] expat security update of one CVE related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [DLA 4146-1] libxml2 security update to fix two CVEs related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Bookworm.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Unstable.

This month I did a week of FD duties. I also started to work on libxmltok. Adrian suggested also checking the CVEs that might affect the embedded version of expat. Unfortunately there are a bunch of CVEs to check and the month ended before the upload; I hope to finish this in May. Last but not least I continued to work on the second batch of fixes for suricata CVEs.

Debian ELTS

This month was the eighty-first ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1411-1] expat security update to fix one CVE in Stretch and Buster related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [ELA-1412-1] libxml2 security update to fix two CVEs in Jessie, Stretch and Buster related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.

This month I did a week of FD duties.
I also started to work on libxmltok. Normally I work on machines running Bullseye or Bookworm. As the Stretch version of libxmltok needs a debhelper version of 5, which is no longer supported on Bullseye, I had to create a separate Buster VM. Yes, Stretch is becoming old. As with LTS, I also need to check the CVEs that might affect the embedded version of expat.
Last but not least I started to work on the second batch of fixes for suricata CVEs.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

misc

This month I uploaded new packages or new upstream or bugfix versions of:

bottlerocket was my first upload via debusine. It is a really cool tool and I can only recommend that everybody give it at least a try.
I finally filed an RM bug for siggen. I don’t think that fixing all the gcc-14 issues is really worth the hassle.

FTP master

This month I accepted 307 and rejected 55 packages. The overall number of packages that got accepted was 308.

08 May, 2025 12:05PM by alteholz

May 07, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

procmail versus exim filters

I’ve been using Procmail to filter mail for a long time. Reading Antoine’s blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico's shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions).

My MTA is Exim, and for my first foray into this, I didn't want to change that [1]. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:

  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories

  2. I file messages into different folders depending on the outcome of the above filters

  3. I drop mail ("killfile") some sender addresses (persistent pests on mailing lists); and mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); and mail encoded in a character set for a language I can't read (Russian, Korean, etc.), and several other simple static rules

  4. I move mailing list mail into folders, semi-automatically (see list filtering)

  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.

  6. I file a copy of some messages, the name of which is partly derived from the current calendar year

Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:

if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif

Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boilerplate, and easy to add more addresses to in the future. This is the best I could come up with:

if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif

I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement.

It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails.

With Exim filters, we can use pipe to invoke an external program:

pipe "$home/mail/mailreaver.crm -u $home/mail/"

However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors.

Here's Exim's documentation on what happens when the external command fails:

Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.

That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message.

The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.
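One partial mitigation (my own sketch, not something from the post or the Exim documentation) is to wrap the external filter in a small shell script that maps any failure onto one of those temporary-error codes, so a broken filter defers the message instead of bouncing it. It still would not let later filter rules inspect the result, though:

#!/bin/sh
# run the real filter; on any failure exit 75 (EX_TEMPFAIL) so that Exim
# spools the message and retries later instead of bouncing it to the sender
"$HOME/mail/mailreaver.crm" -u "$HOME/mail/" || exit 75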

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context, it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.

next steps

Exim's Sieve implementation and its filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do.

However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.


  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome

07 May, 2025 10:16AM

May 06, 2025

Enrico Zini

Python-like abspath for c++

Python's os.path.abspath or Path.absolute are great: you give them a path, which might not exist, and you get a path you can use regardless of the current directory. os.path.abspath will also normalize it, while Path will not by default because with Paths a normal form is less needed.

This is great to normalize input, regardless of if it's an existing file you're needing to open, or a new file you're needing to create.

In C++17, there is a filesystem library with methods with enticingly similar names, but which are almost, but not quite, totally unlike Python's abspath.

Because in my C++ code I need to normalize input, regardless of if it's an existing file I'm needing to open or a new file I'm needing to create, here's an apparently working Python-like abspath for C++ implemented on top of the std::filesystem library:

std::filesystem::path abspath(const std::filesystem::path& path)
{
    // weakly_canonical is defined as "the result of calling canonical() with a
    // path argument composed of the leading elements of p that exist (as
    // determined by status(p) or status(p, ec)), if any, followed by the
    // elements of p that do not exist."
    //
    // This means that if no lead components of the path exist then the
    // resulting path is not made absolute, and we need to work around that.
    if (!path.is_absolute())
        return abspath(std::filesystem::current_path() / path);

    // This is further and needlessly complicated because we need to work
    // around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    unsigned retry = 0;
    while (true)
    {
        std::error_code code;
        auto result = std::filesystem::weakly_canonical(path, code);
        if (!code)
        {
            // fprintf(stderr, "%s: ok in %u tries\n", path.c_str(), retry+1);
            return result;
        }

        if (code == std::errc::no_such_file_or_directory)
        {
            ++retry;
            if (retry > 50)
                throw std::system_error(code);
        }
        else
            throw std::system_error(code);
    }

    // Alternative implementation that however may not work on all platforms
    // since, formally, "[std::filesystem::absolute] Implementations are
    // encouraged to not consider p not existing to be an error", but they do
    // not mandate it, and if they did, they might still be affected by the
    // undefined behaviour outlined in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    //
    // return std::filesystem::absolute(path).lexically_normal();
}

I added it to my wobble code repository, which is the thin repository of components I use to ease my C++ systems programming.

06 May, 2025 09:51AM

May 05, 2025

Ravi Dwivedi

A visit to Paris

After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024.

If you are wondering whether I needed another visa to cross the border into France— I didn’t! Further, they are both also EU members, which means you don’t need to go through customs either. Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders - like going from Rajasthan to Uttar Pradesh.

I took a TGV train from Luxembourg Central Station, which was within walking distance from my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, arriving on time. It gave me an opportunity to see the countryside of France. I booked the train ticket online a couple of days prior through the Omio website.

A train standing on a platform

TGV train I rode from Luxembourg to Paris

I planned the first day with my friend Joenio, whom I met upon arriving in Paris’ Gare de l’Est station, along with his wife Mari. We went to my hostel (which was within walking distance from the station) to store my luggage, but we were informed that we needed to wait for a couple of hours before I could check in. Consequently, we went to an Italian restaurant nearby for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned about it.

Pasta on a plate topped with Ricotta cheese

Pasta I had in Paris

Walking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.

A cycling track in Paris

A cycling track in Paris

After lunch, we returned to my hostel for my check-in at around 3 o’clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French) as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach there, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, as opposed to other metro systems I have used.

We reached the museum at around 4 o’clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions were in English.

A building with trees on the left and right side of it and sky in the background. People can be seen in front of the building.

Luxembourg Museum

Afterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.

A building with a garden in front of it and people sitting closer to us. Sky can be seen in the background.

A shot of the park near to the Luxembourg Museum

A building with a dome shaped structure on top. Closer to camera, roads can be seen. In the background is blue colored cloudy sky.

Pantheon, one of the attractions of Paris.

Then we went to Notre Dame after having evening snacks and coffee at a nearby bakery. The Notre Dame was just over a kilometer from the Pantheon, so we took a walk. We also crossed the beautiful Seine river. On the way, I sampled Crêpe, a popular French dish. The shop was named Crêperie and had many varieties of Crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.

Photo with Joenio and Mari

Photo with Joenio and Mari

Notre Dame, another tourist attraction of Paris.

Notre Dame, another tourist attraction of Paris.

By the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. It is a Catholic cathedral built in French Gothic architecture (I read that on Wikipedia ;)). I also read on Wikipedia that it is located on an island named Île de la Cité, and I didn’t even realize we were on an island.

At night, we visited the most well-known attraction of Paris, The Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o’clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.

The Eiffel Tower lit with bright yellow

My photo with Eiffel Tower in the background

The next day, I roamed around the city mostly on foot. France is known for its bakeries, so I checked out a couple of local bakeries. I had espresso a couple of times and sampled croissant, pain au chocolat and lemon meringue tartlet.

Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Items at a bakery in Paris. Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Here are some random shots:

The Paris subway

The Paris subway

Inside a Paris metro train

Inside a Paris subway

A random building and road in Paris

A random building and road in Paris

A shot near Seine river

A shot near Seine river

A view of Seine river

A view of Seine river

On the third day, I had my flight for India. Thus, I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to reach the airport. It cost 11.8 euros.

I heard some of my friends had bad experiences in France. Thus, I had the impression that I would not feel welcomed. Furthermore, I had encountered language problems on my previous Europe trip, to Albania and Kosovo. So I learned a couple of French words, like how to say thank you and good morning, which went a long way.

However, I didn’t have bad experiences in Paris, except for one instance in which I asked my hostel’s reception about my misplaced watch and the person at the reception asked me to be “polite” by being rude. She said, “Excuse me! You don’t know how to say Good Morning?”

Overall, I enjoyed my time in Paris and would like to thank Joenio and Mari for joining me. I would also like to thank Sophie, who gave me a map of Paris.

Let’s end this post here. I’ll meet you in the next one!

Credits: Thanks to contrapunctus for reviewing this post before publishing

05 May, 2025 08:02PM

hackergotchi for Daniel Lange

Daniel Lange

Make `apt` shut up about "modernize-sources" in Trixie

Apt in Trixie (Debian 13) has the annoying habit of telling you "Notice: Some sources can be modernized. Run 'apt modernize-sources' to do so." ... every single time you run apt update. Not cool for logs and log monitoring.

And - of course - if you had the option to do this, you ... would have run the indicated apt modernize-sources command to convert your sources.list to "deb822 .sources format" files already. So an information message once or twice would have done.

Well, luckily you can help yourself:

apt -o APT::Get::Update::SourceListWarnings=false will keep apt shut up. This could go into an alias or your systems management tool / update script.
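For example, as a shell alias (the alias name is just a suggestion):

alias apt-update='apt -o APT::Get::Update::SourceListWarnings=false update'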

Alternatively add

# Keep apt shut about preferring the "deb822" sources file format
APT::Get::Update::SourceListWarnings "false";

to /etc/apt/apt.conf.d/10quellsourceformatwarnings .

This silences the notices about sources file formats (not only the deb822 one) system-wide. That way you can decide when you can / want to migrate to the new, more verbose, apt sources format yourself.
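If you prefer a one-liner to create that file (same path and contents as above):

printf '%s\n' \
  '# Keep apt shut about preferring the "deb822" sources file format' \
  'APT::Get::Update::SourceListWarnings "false";' \
  | sudo tee /etc/apt/apt.conf.d/10quellsourceformatwarnings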

05 May, 2025 02:14PM by Daniel Lange

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Argo CD Usage Examples

As a followup of my post about the use of argocd-autopilot I’m going to deploy various applications to the cluster using Argo CD from the same repository we used on the previous post.

For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated but the application Pod was not because there was no change on the argocd-server deployment); our original fix was to kill the pod manually, but the manual operation is something we want to avoid.

The solution proposed in the helm documentation for this kind of issue is to add annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets used by them; this way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a rollout of the pods is triggered.

On this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue with updates on ConfigMaps and Secrets, to do it we will execute the following tasks:

  1. Deploy the Reloader controller to our cluster. It is a tool that watches changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work).
  2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
  3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

Creating the test project for argocd-autopilot

As we did our installation using argocd-autopilot we will use its structure to manage the applications.

The first thing to do is to create a project (we will name it test) as follows:

❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'

Now that the test project is available we will use it on our argocd-autopilot invocations when creating applications.

Installing the reloader controller

To add the reloader application to the test project as a kustomize application and deploy it on the tools namespace with argocd-autopilot we do the following:

❯ argocd-autopilot app create reloader \
    --app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader

That command creates four files on the argocd repository:

  1. One to create the tools namespace:

    bootstrap/cluster-resources/in-cluster/tools-ns.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-options: Prune=false
      creationTimestamp: null
      name: tools
    spec: {}
    status: {}
  2. Another to include the reloader base application from the upstream repository:

    apps/reloader/base/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
  3. The kustomization.yaml file for the test project (by default it includes the same configuration used on the base definition, but we could make other changes if needed):

    apps/reloader/overlays/test/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: tools
    resources:
    - ../../base
  4. The config.json file used to define the application on argocd for the test project (it points to the folder that includes the previous kustomization.yaml file):

    apps/reloader/overlays/test/config.json
    {
      "appName": "reloader",
      "userGivenName": "reloader",
      "destNamespace": "tools",
      "destServer": "https://kubernetes.default.svc",
      "srcPath": "apps/reloader/overlays/test",
      "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
      "srcTargetRevision": "",
      "labels": null,
      "annotations": null
    }

We can check that the application is working using the argocd command line application:

❯ argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy

KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
  └─Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy

Adding flags to the reloader server

The runtime configuration flags for the reloader server are described on the project README.md file; in our case we want to adjust three values:

  • We want to enable the option to reload a workload when a ConfigMap or Secret is created,
  • We want to enable the option to reload a workload when a ConfigMap or Secret is deleted,
  • We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.

To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the pod container template; the text added is the following:

patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

After committing and pushing the updated file the system launches the application with the new options.
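If we want to double-check that the new flags reached the cluster we can inspect the container arguments of the Deployment directly (a quick sanity check, not part of the original workflow):

❯ kubectl -n tools get deployment reloader-reloader \
    -o jsonpath='{.spec.template.spec.containers[0].args}'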

The dummyhttp application

To do a quick test we are going to deploy the dummyhttp web server using an image generated using the following Dockerfile:

# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>

# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX

# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3

# Tool versions
ARG DUMMYHTTP_VERS=1.1.1

# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp

# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]
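To build and publish that image something like the following should work (a sketch; the empty OCI_REGISTRY_PREFIX makes the build pull alpine from the default registry, and the tag matches the one referenced by the Deployment shown below):

❯ docker build --build-arg OCI_REGISTRY_PREFIX="" \
    -t forgejo.mixinet.net/oci/dummyhttp:1.0.0 .
❯ docker push forgejo.mixinet.net/oci/dummyhttp:1.0.0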

The kustomize base application is available on a monorepo that contains the following files:

  1. A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand environment variables passed to the pod (the definition includes two optional variables, one taken from a ConfigMap and another one from a Secret):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummyhttp
      labels:
        app: dummyhttp
    spec:
      selector:
        matchLabels:
          app: dummyhttp
      template:
        metadata:
          labels:
            app: dummyhttp
        spec:
          containers:
          - name: dummyhttp
            image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
            command: [ "/bin/sh", "-c" ]
            args:
            - 'eval dummyhttp -b \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
            ports:
            - containerPort: 8080
            env:
            - name: CM_VAR
              valueFrom:
                configMapKeyRef:
                  name: dummyhttp-configmap
                  key: CM_VAR
                  optional: true
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: dummyhttp-secret
                  key: SECRET_VAR
                  optional: true
  2. A Service that publishes the previous Deployment (the only relevant thing to mention is that the web server uses the port 8080 by default):

    apiVersion: v1
    kind: Service
    metadata:
      name: dummyhttp
    spec:
      selector:
        app: dummyhttp
      ports:
      - name: http
        port: 80
        targetPort: 8080
  3. An Ingress definition to allow access to the application from the outside:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dummyhttp
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
    spec:
      rules:
        - host: dummyhttp.localhost.mixinet.net
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: dummyhttp
                    port:
                      number: 80
  4. And the kustomization.yaml file that includes the previous files:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

Deploying the dummyhttp application from argocd

We could create the dummyhttp application using the argocd-autopilot command as we’ve done on the reloader case, but we are going to do it manually to show how simple it is.

First we’ve created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous repository:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0

As a second step we create the apps/dummyhttp/overlays/test/kustomization.yaml file to include the previous file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the ApplicationSet defined by argocd-autopilot expects:

{
  "appName": "dummyhttp",
  "userGivenName": "dummyhttp",
  "destNamespace": "default",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/dummyhttp/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we have the three files we commit and push the changes and argocd deploys the application; we can check that things are working using curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  "c": "",
  "s": ""
}

Patching the application

Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:

  • One to add annotations for reloader (one to enable it and another one to set the roll out strategy to restart to avoid touching the deployments, as that can generate issues with argocd).
  • Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).

The file diff is as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: "true"
+        reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net

After committing and pushing the changes we can use the argocd cli to check the status of the application:

❯ argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy

KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc           Healthy
  └─Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl

As we can see, the Deployment and Ingress were updated, but the Service is unchanged.

To validate that the ingress is using the new hostname we can use curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443/
{"c": "", "s": ""}

Adding a ConfigMap

Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or updated we are ready to add one file and see how the system reacts.

We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the configMapGenerator as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR="Default Test Value"
+  behavior: create
+  options:
+    disableNameSuffixHash: true
 patches:
 # Add reloader annotations
 - target:

After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and started again and the curl output includes the new value:

❯ kubectl get configmaps,pods
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                         READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
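If we want to see the controller reacting to the change we can also peek at the reloader logs (another quick sanity check, output omitted here):

❯ kubectl -n tools logs deployment/reloader-reloader --tail=20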

Using helm with argocd-autopilot

Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue #38 on the project), but we want to use a chart in our next example.

There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to use kustomize applications that call helm as described here.

The only thing needed before being able to use the approach is to add the kustomize.buildOptions flag to the argocd-cm on the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:

bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml

On the following section we will explain how the application is defined to make things work.

Installing the sealed-secrets controller

To manage secrets in our cluster we are going to use the sealed-secrets controller and to install it we are going to use its chart.

As we mentioned on the previous section, the idea is to create a kustomize application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a remote repository.

As there is no clear way to override helm Chart values using overlays, we are going to use a generator to create the helm configuration from an external resource and include it from our overlays (the idea has been taken from this repository, which was referenced from a comment on the argocd-autopilot issue #38 mentioned earlier).

The sealed-secrets application

We have created the following files and folders manually:

apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml

The helm folder contains the generator template that will be included from our overlays.

The kustomization.yaml includes the chart.yaml as a resource:

apps/sealed-secrets/helm/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml

And the chart.yaml file defines the HelmChartInflationGenerator:

apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml

For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride on the valuesInline key because we want to use those settings on all the projects (they are the values expected by the kubeseal command line application, so we adjust them to avoid the need to add additional parameters to it).

We adjust global values as inline to be able to use the valuesFile from our overlays; as we are using a generator the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we will need to have a values.yaml file on each overlay folder (if we don’t want to overwrite any values for a project we can create an empty file, but it has to exist).

Finally, our overlay folder contains three files, a kustomization.yaml file that includes the generator from the helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to install the application.

The kustomization.yaml file contents are:

apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm

The values.yaml file enables the ingress for the application and adjusts its hostname:

apps/sealed-secrets/overlays/test/values.yaml
ingress:
  enabled: true
  hostname: test-sealed-secrets.lo.mixinet.net

And the config.json file is similar to the ones used with the other applications we have installed:

apps/sealed-secrets/overlays/test/config.json
{
  "appName": "sealed-secrets",
  "userGivenName": "sealed-secrets",
  "destNamespace": "kube-system",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/sealed-secrets/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we commit and push the files the sealed-secrets application is installed in our cluster, we can check it using curl to get the public certificate used by it:

❯ curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----

The dummyhttp-secret

To create sealed secrets we need to install the kubeseal tool:

❯ arkade get kubeseal

Now we create a local version of the dummyhttp-secret that contains some value on the SECRET_VAR key (the easiest way for doing it is to use kubectl):

❯ echo -n "Boo" | kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
    >/tmp/dummyhttp-secret.yaml

The secret definition in yaml format is:

apiVersion: v1
data:
  SECRET_VAR: Qm9v
kind: Secret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
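As a quick sanity check (not part of the original workflow), the base64 value decodes back to the string we piped into kubectl:

❯ echo -n "Qm9v" | base64 -d
Boo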

To create a sealed version using the kubeseal tool we can do the following:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml

That invocation needs to have access to the cluster to do its job and in our case it works because we modified the chart to use the kube-system namespace and set the controller name to sealed-secrets-controller as the tool expects.

If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the public key:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem

Or, if we don’t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.
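That variant could look like this (the local file name is just an example):

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert /tmp/sealed-secrets-cert.pem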

The sealed version of the secret looks like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
  namespace: default
spec:
  encryptedData:
    SECRET_VAR: [...]
  template:
    metadata:
      creationTimestamp: null
      name: dummyhttp-secret
      namespace: default

This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application), but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and SealedSecrets in the default namespace:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.

Now we add the SealedSecret to the dummyhttp application by copying the file and adding it to the kustomization.yaml file:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+- dummyhttp-sealed-secret.yaml
 # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap

Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:

❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s

If we check the command output we can see the new value of the secret:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": "Boo"
}

Using sealed-secrets in production clusters

If you plan to use sealed-secrets, look into its documentation to understand how it manages the private keys and how to back things up, and keep in mind that, as the documentation explains, you can rotate the sealed version of your secrets, but that doesn’t change the actual secrets.

If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing both things at the same time).

Final remarks

On this post we have seen how to deploy applications using the argocd-autopilot model, including the use of helm charts inside kustomize applications and how to install and use the sealed-secrets controller.

It has been interesting and I’ve learnt a lot about argocd in the process, but I believe that if I ever want to use it in production I will also review the native helm support in argocd using a separate repository to manage the applications, at least to be able to compare it to the model explained here.

05 May, 2025 05:50AM

May 04, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#47: r2u at its Third Birthday

Welcome to post 47 in the $R^4 series!

r2u provides Ubuntu binaries for all CRAN packages for the R system. It started three years ago, and offers for Linux users on Ubuntu what Windows and macOS users already experience: fast, easy and reliable installation of binary packages. But by integrating with the system package manager (which is something that cannot be done on those other operating systems) we can fully and completely integrate it with the underlying system. External libraries are resolved as shared libraries and handled by the system package manager. This offers fully automatic installation both at the initial installation and all subsequent upgrades. R users just say, e.g., install.packages("sf") and spatial libraries proj, gdal, geotiff (as well as several others) are automatically installed as dependencies in the correct versions. And they remain installed along with sf as the system package manager now knows of the dependency.

Work on r2u began as a quick weekend experiment in March 2022, and by May 4 a first release was marked in the NEWS file after a few brave alpha testers kicked tires quite happily. This makes today the third anniversary of that first release, and marks a good time to review where we are. This short post does this, and stresses three aspects: overall usage, current versions, and new developments.

Steadily Growing Usage at 42 Million Packages Shipped

r2u ships from two sites. Its main repository is at the University of Illinois campus providing ample and heavily redundant bandwidth. We remain very grateful for the sponsorship from Atlas. It also still ships from my own server though that may be discontinued or could be spotty as it is on retail fiber connectivity. As we have access to both sets of server logs, we can tabulate and chart usage. As of yesterday, total downloads were north of 42 million with current weekly averages around 500 thousand. These are quite staggering numbers for what started as a small hobby project, and are quite humbling.

Usage is driven by deployment in continuous integration (as for example the Ubuntu-use at GitHub makes this both an easy and obvious choice), cloud computing (as it is easy to spin up Ubuntu instances, it is as easy to add r2u via four simple commands or one short script), explorative use (for example on Google Colab) or of course in general laptop, desktop, or server settings.

Current Versions

Since r2u began, we added two Ubuntu LTS releases, three annual R releases as well as multiple BioConductor releases. BioConductor support is on a ‘best-efforts’ basis, motivated primarily by the need to support the CRAN packages that depend on it. It has grown to around 500 packages and includes the top 250 by usage.

Right now, current versions R 4.5.0 and BioConductor 3.21, both released last month, are supported.

New Development: arm64

A recent change is the support of the arm64 platform. As discussed in the introductory post, it is a popular and increasingly common CPU choice seen anywhere from the Raspberry Pi 5 and its Cortex CPU to in-house cloud computing platforms (called, respectively, Graviton at AWS and Axion at GCP), general server use via Ampere CPUs, Cortex-based laptops that are starting to appear, and last but not least the popular M1 to M4-based macOS machines. (For macOS, one key appeal is ‘lighter-weight’ Docker use, as these M1 to M4 CPUs can run arm64-based containers without a translation layer, making it an attractive choice.)

This is currently supported only for the ‘noble’ aka 24.04 release. GitHub Actions, where we compile these packages, now also supports ‘jammy’ aka 22.04 but it may not be worth it to expand there as the current ‘latest’ release is available. We have not yet added BioConductor support but may do so. Drop us a line (maybe via an issue) if this is of interest.

With the provision of arm64 binaries, we also started to make heavier use of GitHub Actions. The BioConductor 3.21 release binaries were also created there. This also makes the provision more transparent, as the configuration repo and the two builder repos (arm64, bioc) are public, as is of course the main r2u repo.

Summing Up

This short post summarised the current state of r2u along with some recent news. If you are curious, head over to the r2u site and try it, for example in a rocker/r2u container.
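For example, assuming Docker is available, something like this should be enough to kick the tires (the sf example mirrors the one above):

docker run --rm -it rocker/r2u R -q -e 'install.packages("sf"); library(sf)'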

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

04 May, 2025 09:04PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
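For anyone taking this up, a rough sketch of the upstream bisect loop could look like the following (tag names and build steps are assumptions; bisecting the Debian patch series would instead mean stepping through the patches applied on top):

git clone https://github.com/openssh/openssh-portable.git
cd openssh-portable
git bisect start V_10_0_P2 V_9_9_P2   # bad tag first, then good (tag names assumed)
autoreconf && ./configure && make     # build the candidate revision
# run the resulting sshd, try to reproduce the crash, then mark it:
git bisect good   # or: git bisect bad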

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn’t override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

04 May, 2025 03:38PM by Colin Watson

Russ Allbery

Review: The Book That Held Her Heart

Review: The Book That Held Her Heart, by Mark Lawrence

Series: Library Trilogy #3
Publisher: ACE
Copyright: 2025
ISBN: 0-593-43799-3
Format: Kindle
Pages: 367

The Book That Held Her Heart is the third and final book of the Library fantasy trilogy and a direct sequel to The Book That Broke the World. Lawrence provides a much-needed summary of the previous volumes at the start of this book (thank you to every author who does this!), but I was still struggling a bit with the blizzard of character names. I recommend reading this series entry in relatively close proximity to the other two.

At the end of the previous book, and following some rather horrific violence, the cast split into four groups. Three of those are pursuing different resolutions to the moral problem of the Library's existence. The fourth group opens the book still stuck with the series villains, who were responsible for the over-the-top morality that undermined my enjoyment of The Book That Broke the World. Lawrence follows all four groups in interwoven chapters, maintaining that complex structure through most of this book. I thought this was a questionable structural decision that made this book feel choppy, disconnected, and unnecessarily confusing.

The larger problem, though, is that this is the payoff book, the book where we find out if Lawrence is equal to the tricky ethical questions he's raised and the world-building masterpiece that The Book That Wouldn't Burn kicked off. The answer, unfortunately, is "not really." This is not a total failure; there are some excellent set pieces and world-building twists, and the characters remain likable and enjoyable to read about (although the regrettable sidelining of Livira continues). But the grand finale is weirdly conservative and not particularly grand, and Lawrence's answer to the moral questions he raised is cliched and wholly unsatisfying.

I was really hoping Lawrence was going somewhere more interesting than "Nazis bad." I am entirely sympathetic to this moral position, but so is every other likely reader of this series, and we all know how that story goes. What a waste of a compelling setup.

Sadly, "Nazis bad" isn't even a metaphor for the black-and-white morality that Lawrence first introduced at the end of the previous book. It's a literal description of the main moral thrust of this book. Lawrence introduces yet another new character and timeline so that he can write about thinly-disguised Nazis persecuting even more thinly-disguised Jews, and this conflict is roughly half this book. It's also integral to the ending, which uses obvious, stock secular sainthood as a sort of trump card to resolve ideological conflicts at the heart of the series.

This is one of the things I was worried about after I read the short stories that Lawrence published between the volumes of this series. All of them were thuddingly trite, which did not make me optimistic that Lawrence would find a sufficiently interesting answer to his moral trilemma to satisfy the high expectations created by the build-up. That is, I am sad to report, precisely the failure mode of this book. The resolution of the moral question of the series is arguably radical within the context of the prior world-building, but in a way that effectively reduces it to the boring, small-c conservative bromides of everyday reality. This is precisely the opposite of why I read fantasy, and I did not find Lawrence's arguments for it at all convincing. Neither, I think, did Lawrence, given that the critical debate takes place off camera so that he could avoid having to present the argument.

This is, unfortunately, another series where the author's reach exceeded their grasp. The world-building of The Book That Wouldn't Burn is a masterpiece that created one of the most original and compelling settings that I have read in fantasy for a long time, but unfortunately Lawrence did not have an equally original plan for how to use the setting. This is a common problem and I'm not going to judge it too harshly; it's much harder to end a series than it is to start one. I thought the occasional flashes of brilliance were worth the journey, and they continue into this book with some elaborations on the Library's mythic structure that are going to stick in my mind.

You can sense the story slipping away from the hoped-for conclusion as you read, though. The story shifts more and more away from the setting and the world-building and towards character stories, and while Lawrence's characters are fine, they're not that novel. I am happy to read about Clovis and Arpix, but I can read variations of that story in a lot of places. Livira never recovers her dynamism and drive from the first book, and there is much less beneath Yute's thoughtful calm than I was hoping to find. I think Lawrence knows that the story was not entirely working because the narrative voice becomes more strident as the morality becomes less interesting. I know of only one fantasy author who can make this type of overbearing and freighted narrative style work, and Lawrence is sadly not Guy Gavriel Kay.

This is not a bad book. It is an enjoyable adventure story on its own terms, with some moments of real beauty and awe and a handful of memorable characters, somewhat undermined by a painfully obvious and unoriginal moral frame. It's only a disappointment in the context of what came before it, and it is far from the first series conclusion that doesn't quite live up to the earlier volumes. I'm glad that I read it, and the series as a whole, and I do appreciate that Lawrence brought the whole series to a firm and at least somewhat satisfying conclusion in the promised number of volumes. But I do wish the series as a whole had been as special as the first book.

Rating: 6 out of 10

04 May, 2025 04:48AM

May 03, 2025

Russell Coker

Silly Job Titles

Many years ago I was on a programming project porting code from OS/2 1.x to NT. When I was there they suddenly decided to make a database of all people and get job titles for everyone – apparently the position description used when advertising the jobs wasn’t sufficient. When I got given a clipboard with a form to write my details I looked at what everyone else had done. It was a heap of ridiculous propaganda with everyone trying to put in synonyms for “senior” or “skillful” and listing things that they were allegedly in charge of. There were even some people trying to create impressive titles for their managers to try and suck up.

I chose the job title “coder” as the shortest and most accurate description of what I was doing. I had to confirm that yes I really did want to put a one word title and not a paragraph of frippery. Part of my intent was to mock the ridiculously long job titles used by others but I don’t think anyone realised that.

I was reminded of that company when watching a video of a Trump cabinet meeting where everyone had to tell Trump how great he is. I think that a programmer who wants to be known as a “Principal Solutions Architect of Advanced Algorithmic Systems and Digital Innovation Strategy” (suggested by ChatGPT because I can’t write such ridiculous things) is showing a Trump level of lack of self-esteem.

When job titles are discussed there’s always someone who will say “what if my title isn’t impressive enough and I don’t get a pay rise”. If a company bases salaries on how impressive job titles are and not on whether people actually do good work then it’s a very dysfunctional workplace. But dysfunctional companies aren’t uncommon so it’s something you might reasonably have to do. In the company in question I could have described my work as “lead debugger” as I ended up doing most of the debugging on that project (as on many programming projects). The title “lead debugger” accurately described a significant part of my work and it’s work that is essential to project completion.

What do you think are the worst job titles?

03 May, 2025 07:40AM by etbe

Russ Allbery

Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin

Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250

The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn.

There is going to be some grumbling about the state of journalism in this review.

Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US.

I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews.

This is... not that.

It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.

Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy.

Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree.

For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either.

Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try.

Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom.

It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:

With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.

I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check.

Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, wants to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power.

This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday.

As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie.

I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research.

I failed in this case, but perhaps I can serve as a warning to others.

Rating: 3 out of 10

03 May, 2025 03:56AM

May 02, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.

Korg Minilogue XD, and Zoom R8

I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator.

The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:

Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

02 May, 2025 08:04PM

hackergotchi for Daniel Lange

Daniel Lange

Compiling and installing the Gentoo Linux kernel on emerge without genkernel (part 2)

The first install of a Gentoo kernel needs to be somewhat manual if you want to optimize the kernel for the (virtual) system it boots on.

In part 1 I laid out how to improve the subsequent emerges of sys-kernel/gentoo-sources with a small drop in script to build the kernel as part of the ebuild.

Since end of last year Gentoo also supports a less manual way of emerging a kernel:

The following kernel blends are available:

  • sys-kernel/gentoo-kernel (the Gentoo kernel you can configure and compile locally - typically this is what you want if you run Gentoo)
  • sys-kernel/gentoo-kernel-bin (a pre-compiled Gentoo kernel similar to what genkernel would get you)
  • sys-kernel/vanilla-kernel (the upstream Linux kernel, again configurable and locally compiled)

So a quick walk-through for the gentoo-kernel variant:

1. Set up the correct package USE flags

We do not want an initrd and we want our own config to be re-used so:

echo "sys-kernel/gentoo-kernel -initramfs savedconfig" >> /etc/portage/package.use/gentoo-kernel

2. Preseed the saved config

The current kernel config needs to be saved as the initial savedconfig so it is found and applied for our emerge below:

mkdir -p /etc/portage/savedconfig/sys-kernel
cp -n "/usr/src/linux-$(uname -r)/.config" /etc/portage/savedconfig/sys-kernel/gentoo-kernel

3. Emerge the new kernel

emerge sys-kernel/gentoo-kernel

4. Update grub and reboot

Unfortunately this ebuild does not update grub, so we have to run grub-mkconfig manually. This can again be automated via a post_pkg_postinst() script. See step 7 below.

But for now, let's do it manually:

grub-mkconfig -o /boot/grub/grub.cfg
# All fine? Time to reboot the machine:
reboot

5. (Optional) Prepare for the next kernel build

Run etc-update and merge the new kernel config entries into your savedconfig.

Screenshot of etc-update

The kernel should auto-build once new versions become available via portage.

Again, the etc-update step can be automated if you feel that is sufficiently safe to do in your environment. See step 7 below for details.

6. (Optional) Remove the old kernel sources

If you want to switch from the method based on gentoo-sources to the gentoo-kernel one, you can remove the kernel sources:

emerge -C "=sys-kernel/gentoo-sources-5*"

Be sure to update the /usr/src/linux symlink to the new kernel sources directory from gentoo-kernel, e.g.:

rm /usr/src/linux; ln -s "/usr/src/$(uname -r)" /usr/src/linux

This may be a good time for a bit more house-keeping: Clean up a bit in /usr/src/ to remove old build artefacts, /boot/ to remove old kernels and /lib/modules/ to get rid of old kernel modules.
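
If you want to script that house-keeping, something like the following should do (an untested sketch; it assumes app-admin/eclean-kernel is installed and behaves as its man page describes, and that keeping the two newest kernels is enough for you):

emerge --ask app-admin/eclean-kernel
eclean-kernel -p -n 2    # dry run: show what would be removed from /boot and /lib/modules
eclean-kernel -n 2       # keep the two newest kernels, remove the rest
emerge --ask --depclean  # drop packages (and old sources) nothing depends on anymore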

7. (Optional) Further automate the ebuild

In part 1 we automated the kernel compile, install and a bit more via a helper function for post_pkg_postinst().

We can do the same for what is (currently) missing from the gentoo-kernel ebuilds:

Create /etc/portage/env/sys-kernel/gentoo-kernel with the following:

post_pkg_postinst() {
        etc-update --automode -5 /etc/portage/savedconfig/sys-kernel
        grub-mkconfig -o /boot/grub/grub.cfg
}

The upside of gentoo-kernel over gentoo-sources is that you can put "config override files" in /etc/kernel/config.d/. That way you theoretically profit from config improvements made by the upstream developers. See the Gentoo distribution kernel documentation for a sample snippet. I am fine with savedconfig for now but it is nice that Gentoo provides the flexibility to support both approaches.
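
If you go that route instead, a minimal override fragment could look like this (the file name and the two options are arbitrary examples; per my reading of the linked documentation the snippets live in /etc/kernel/config.d/ and need a .config suffix):

mkdir -p /etc/kernel/config.d
cat > /etc/kernel/config.d/99-local.config <<'EOF'
# merged on top of the distribution config at build time
CONFIG_MAGIC_SYSRQ=y
# CONFIG_WIRELESS is not set
EOF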

02 May, 2025 05:41PM by Daniel Lange

Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)

Netatalk 3.1.9 has been released with two interesting fixes / amendments:

  • FIX: afpd: fix "admin group" option
  • NEW: afpd: new options "force user" and "force group"

Here are the full release notes for 3.1.9 for your reading pleasure.

Due to upstream now differentiating between SysVinit and systemd packages I've followed that for simplicity's sake and built libgcrypt-only builds. If you need the openssl-based tools continue to use the 3.1.8 openssl build until you have finished your migration to a safer password storage.

Warning: Be sure to read the original blog post before installing for the first time, especially if you are new to Netatalk3 on Debian Jessie!
You'll get nowhere if you install the .debs below and don't know about the upgrade path. So RTFA.

Now with that out of the way:

Continue reading "Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)"

02 May, 2025 05:40PM by Daniel Lange

Creating iPhone/iPod/iPad notes from the shell

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks.

For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-).

As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead.

Against hossman's version 32 from 2011-02-22 I changed the following:

  • removed .pl from filename and documentation
  • added --list to list existing notes
  • added --hosteurope for Hosteurope mail account preferences and with it a sample how to add username and password into the script for unattended use
  • made the "Notes" folder the default (so -f Notes becomes obsolete)
  • added some UTF-8 conversions to make Umlauts work better (this is a mess in perl, see Jeremy Zawodny's writeup and Ivan Kurmanov's blog entry for some further solutions). Please try combinations of utf8::encode and ::decode, binmode utf8 for STDIN and/or STDOUT and the other hints from these linked blog entries in your local setup to get Umlauts and other non-7bit ASCII characters working. Be patient. There's more than one way to do it :-).

I /msg'd hossman the URL of this blog entry.

Continue reading "Creating iPhone/iPod/iPad notes from the shell"

02 May, 2025 05:39PM by Daniel Lange

The Stallman wars

So, 2021 isn't bad enough yet, but don't despair, people are working to fix that:

Welcome to the Stallman wars

Team Cancel: https://rms-open-letter.github.io/ (repo)

Team Support: https://rms-support-letter.github.io/ (repo)

Final stats are:

Team Cancel:  3019 signers from 1415 individual commit authors
Team Support: 6853 signers from 5418 individual commit authors

Git shortlog (Top 10):

rms_cancel.git (Last update: 2021-08-16 00:11:15 (UTC))
  1230  Neil McGovern
   251  Joan Touzet
    99  Elana Hashman
    73  Molly de Blanc
    36  Shauna
    19  Juke
    18  Stefano Zacchiroli
    17  Alexey Mirages
    16  Devin Halladay
    14  Nader Jafari

rms_support.git (Last update: 2021-09-29 07:14:39 (UTC))
  1821  shenlebantongying
  1585  nukeop
  1560  Ivanq
  1057  Victor
   880  Job Bautista
   123  nekonee
   101  Victor Gridnevsky
    41  Patrick Spek
    25  Borys Kabakov
    17  KIM Taeyeob

(data as of 2021-10-01)

Technical info:
Signers are counted from their "Signed / Individuals" sections. Commits are counted with git shortlog -s.
Team Cancel also has organizational signatures with Mozilla, Suse and X.Org being among the notable signatories. The 16 original signers of the Cancel petition are added in their count. Neil McGovern, Juke and shenlebantongying need .mailmap support as they have committed with different names.
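
For the record, the commit counts are easy to reproduce (a sketch, assuming local clones named as in the shortlog headers above):

cd rms_cancel                                    # likewise for rms_support
git shortlog -s -n HEAD | head                   # per-author commit counts (Top 10)
git log --format='%an' HEAD | sort -u | wc -l    # number of individual commit authors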

Further reading:

12.04.2021 Statements from the accused

18.04.2021 Debian General Resolution

The Debian General Resolution (GR) vote of the developers has concluded to not issue a public statement at all, see https://www.debian.org/vote/2021/vote_002#outcome for the results.

It is better to keep quiet and seem ignorant than to speak up and remove all doubt.

See Quote Investigator for the many people that rephrased these words over the centuries. They still need to be recalled more often as too many people in the FLOSS community have forgotten about that wisdom...

01.10.2021 Final stats

It seems enough dust has settled on this unfortunate episode of mob activity now. Hence I stopped the cronjob that updated the stats above regularly. Team Support has kept adding signatures all the time while Team Cancel gave up very soon after the FSF decided to stand with Mr. Stallman. So this battle was decided within two months. The stamina of the accused and determined support from some dissenting web devs trumped the orchestrated outrage of well known community figures and their publicity power this time. But history teaches us that does not mean the war is over. There will be a next opportunity to call to arms. And people will call. Unfortunately.

01.11.2024 Team Cancel is opening a new round; Team Support responds with exposing the author of "The Stallman report"

I hate to be right. Three years later than the above:

An anonymous member of team Cancel has published https://stallman-report.org/ [local pdf mirror, 504kB] to "justify our unqualified condemnation of Richard Stallman". It contains a detailed collection of quotes that are used to allege supporting (sexual) misconduct. The demand is again that Mr. Stallman "step[s] down from all positions at the FSF and the GNU project". Addressing him: "the scope and extent of your misconduct disqualifies you from formal positions of power within our community indefinitely".

Team Support has not issued a rebuttal (yet?) but has instead identified the anonymous author as Drew "sircmpwn" DeVault, a gifted software developer, but also a vocal and controversial figure in the Open Source / Free Software space. Ironically quite similar to Richard "rms" Stallman. Their piece is published at https://dmpwn.info/ [local pdf mirror, 929kB]. They also allege a proximity of Mr. DeVault to questionable "Lolita" anime preferences and societal positions to disqualify him.

02 May, 2025 05:38PM by Daniel Lange

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

02 May, 2025 04:06PM by Ben Hutchings

Russ Allbery

Review: Sixteen Ways to Defend a Walled City

Review: Sixteen Ways to Defend a Walled City, by K.J. Parker

Series: Siege #1
Publisher: Orbit
Copyright: April 2019
ISBN: 0-316-27080-6
Format: Kindle
Pages: 349

Sixteen Ways to Defend a Walled City is... hm, honestly, I'm not sure what the genre of this novel is. It is a story about medieval engineering and siege weapons in a Rome-inspired secondary world that so far as I can tell is not meant to match ours. There is not a hint of magic. It's not technically a fantasy, but it's marketed like a fantasy, and it's not historical fiction nor is it attempting to be alternate history. The most common description is a fantasy of logistics, so I guess I'll go with that, as long as you understand that the fantasy here is of the non-magical sort.

K.J. Parker is a pen name for Tom Holt.

Orhan is Colonel-in-Chief of the Engineers for the Robur empire, even though he's a milkface, not a blueskin like a proper Robur. (Both of those racial terms are quite offensive.) He started out as a slave, learned a trade, joined the navy as a shipwright, and worked his way up the ranks through luck and enemy action. He's canny, practical, highly respected by his men, happy to cheat and steal to get material for his projects and wages for his people, and just wants to build literal bridges. Nice, sturdy bridges that let people get from one place to another the short way.

When this book opens, Orhan is in Classis trying to requisition some rope. He is saved from discovery of his forged paperwork by pirates burning down the warehouse that held all of the rope, and then saved from the pirates by the sorts of coincidences that seem to happen to Orhan all the time. A few subsequent discoveries about what the pirates were after, and news of another unexpected attack on the empire, make Orhan nervous enough that he takes his men to do a job as far away from the City at the heart of the empire as possible. It's just his luck to return in time to find slaughtered troops and to have to sneak his men into a City already under siege.

Sixteen Ways to Defend a Walled City is told in the first person by Orhan, with an internal justification that the reader only discovers at the end of the book. That means your enjoyment of this book is going to depend a lot on how much you like Orhan's voice. This mostly worked for me; his voice is an odd combination of chatty, self-deprecating, and brusque, and it took a bit for me to get used to it, but I came around. This book is clearly competence porn — nearly all the fun of this book is seeing what desperate plan Orhan will come up with next — so it helps that Orhan does indeed come across as competent.

The part that did not work for me was the morality. You would think from the title that would be straightforward: The City is under siege, people want to capture it and kill everyone, Orhan is on the inside, and his job is to keep them out. That would have been the morality of simplistic military fiction, but most of the appeal was in watching the problem-solving anyway.

That's how the story starts, but then Parker started dropping hints of more complexity. Orhan is a disfavored minority and the Robur who run the empire are racist assholes, even though Orhan mostly gets along with the ones who work with him closely. Orhan says a few things that make the reader wonder whether the City warrants defending, and it becomes less clear whether Orhan's loyalties were as solid as they appeared to be. Parker then offers a few moral dilemmas and has Orhan not follow them in the expected directions, making me wonder where Parker was going with the morality of this story.

And then we find out that the answer is nowhere. Parker is going nowhere. None of that setup has a payoff, and the ending is deeply unsatisfying and arguably pointless.

I am not sure this is an objective analysis. This is one of those books where I would not be surprised to see someone else praise its realism. Orhan is in some ways a more likely figure than the typical hero of a book. He likes accomplishing things, he's a cheat and a liar when that serves his purposes, he's loyal to the people he considers friends in a way that often doesn't involve consulting them about what they want, and he makes decisions mostly on vibes and stubbornness. Both his cynicism and his idealism are different types of masks; beneath both, he's an incoherent muddle. You could argue that we're all that sort of muddle, deep down, and the consistent idealists are the unrealistic (and frightening) ones, and I think Parker may be attempting exactly that argument. I know some readers like this sort of fallibly human incoherence.

But wow did I ever loathe this ending because I was not reading this book for a realistic psychological profile of an average guy. I was here for the competence porn, for the fantasy of logistics, for the experience of watching someone have a plan and get shit done. Apparently that extends to needing him to be competent at morality as well, or at least think about it as hard as he thinks about siege weapons.

One of the reasons why I am primarily a genre reader is that I don't read books for depressing psychological profiles. There are enough of those in the news. I read books to spend some time in a world better than mine, where things work out the way that they are supposed to, or at least in a way that's satisfying.

The other place where this book interfered with my vibes is that it's about a war, and a lot of Orhan's projects are finding more efficient ways to kill people. Parker takes a "war is hell" perspective, and Orhan gets deeply upset at the graphic sights of mangled human bodies that are the frequent results of his plans. I feel weird complaining about this because yes, it's good to be aware of the horrific things that we do to other people in wars, but man, I just wanted to watch some effective project management. I want to enjoy unexpected lateral thinking, appreciate the friendly psychological manipulation involved in getting a project to deliver on deadline, and watch someone solve logistical problems. Battlefields provide an endless supply of interesting challenges, but then Parker feels compelled to linger on the brutal consequences of Orhan's ideas and now I'm depressed and sickened rather than enjoying myself.

I really wanted to like this book, and for a lot of the book I did, but that ending was a bottomless pit that sucked away all my enjoyment and retroactively made the rest of the book feel worse. I so wanted Parker to be going somewhere clever and surprising, and the disappointment when none of that happened was intense. This is probably an excessively negative reaction, and I will not be surprised when other people get along with this book better than I did, but not only will I not be recommending it, I'm now rather dubious about reading any more Parker.

Followed by How to Rule an Empire and Get Away With It.

Rating: 5 out of 10

02 May, 2025 04:30AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Spending my Golden Week in boredom.

Spending my Golden Week in boredom. That's nice.

02 May, 2025 01:53AM by Junichi Uekawa

May 01, 2025

Ian Jackson

Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities, that we shouldn’t be doing “politics”, and instead should just focus on technology.

But that’s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I’m talking about small-p politics

In this article I’m using “politics” in the very wide sense: us humans managing our disagreements with each other.

I’m not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it’s impossible to be neutral because choosing not to take a stand is itself to take a stand.

Those issues are all are important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.

Today I’m talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.

Humans don’t always agree about everything. This is natural. Indeed, it’s healthy: to write the best code, we need a wide range of knowledge and experience.

When we can’t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.

Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.

This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.

When consensus can’t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.

If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.

This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don’t see eye to eye.

In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system’s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.

That means we need to practice our governance processes. We can’t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we’ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.

First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it’s quite boring.

People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn’t reached, the committee, or elected leader, makes a decision.

Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.

Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.

Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn’t resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It’s not governance if we’re advising or cajoling: in that case, we’re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.

If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted.

Otherwise, when we need to overrule, we’ll find that we can’t, because we lack the collective practice.

To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants’ status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.

Ultimately, it means sometimes overruling someone’s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.

But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer’s bad decisions can cause problems for many of the rest of us. We exasperate, “why won’t they just do the right thing”. This is futile. People have never “just”ed and they’re not going to start “just”ing now. So often the boot is on the other foot.

More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!)

Governance mechanisms are the answer.

(No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions — retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.

If we can convert the question from “how will the software always behave” into merely “what should the default be”, we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.

There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling software — even crusty or buggy software — is a lot more fun than having unpleasant arguments.

But don’t do decisionmaking like a corporation

Many programmers’ experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.

They typically don’t have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations’ goals are often bad.

You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable — typically the effects of their tenure are only properly felt well after they’ve left to mess up somewhere else.

We should select our leaders more wisely, and base decisions on substance.

If you won’t do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics.

You can opt out of learning how to do politics generally, and opt out of understanding your project’s governance structures. You can opt out of making judgements about disputed questions, and tell yourself “there’s merit on both sides”.

You can hate politicians indiscriminately, and criticise anyone you see doing politics.

If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You’re tacitly supporting the existing power bases. You’re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.

If enough people won’t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don’t see the politics, it’s still happening

If your governance systems don’t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.

Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.

So if you have a reasonable sized community, but don’t see your formal governance systems working — people debating things, votes, leadership making explicit decisions — that doesn’t mean everything is fine, and all the decisions are great, and there’s no politics happening.

It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won’t put up with that will leave.

The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.

Conclusions

  • Respect and support the people who are trying to fix things with politics.

  • Be informed, and, where appropriate, involved.

  • If you are in a position of authority, be willing to exercise that authority. Do more than just mediating to try to get consensus.



comment count unavailable comments

01 May, 2025 10:15PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 2: Speech to Text and back

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant, we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in that everything can exist as separate modules that then just communicate over network sockets, and there are a whole bunch of Python implementations of the pieces necessary.

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self contained speech recognition tool called whisper.cpp, which is a low dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space; the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development, and potential improvements (the fork appears to be at v1.5.4, upstream is up to v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously I wanted all of this to be as cleanly installed on Debian stable as possible. Given most of this isn’t packaged, that’s meant I’ve packaged things up as I go. I’m not at the stage where anything is suitable for upload to Debian proper, but equally I’ve tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos. https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you’re building for bookworm, but it doesn’t need to be rebuilt.
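
If you want to build the .deb yourself, the usual Debian workflow applies (a sketch, assuming the build dependencies, including that python3-wyoming backport, are available to the build):

git clone https://salsa.debian.org/noodles/wyoming-whisper-cpp.git
cd wyoming-whisper-cpp
dpkg-buildpackage -us -uc -b
# or, for a clean bookworm build environment:
# sbuild -d bookworm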

You need a Whisper model that’s been converted to ggml format; they can be found on Hugging Face. I’ve ended up using the base.en model. In some random testing I found small.en gave more accurate results but took a little longer; it doesn’t seem to make much of a difference for voice control rather than plain transcribing.
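
For example, to grab the pre-converted base.en model (this assumes the ggml conversions published in the ggerganov/whisper.cpp repository on Hugging Face are what you want; adjust the destination to wherever your wyoming-whisper-cpp invocation expects to find models):

curl -L -O https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin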

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target
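
Enabling it is the usual dance; the last line is just an optional smoke test and assumes I’ve understood the describe/info handshake of the Wyoming protocol correctly (the server should answer with a one-line info event describing the ASR service):

systemctl daemon-reload
systemctl enable --now wyoming-whisper-cpp.service
journalctl -u wyoming-whisper-cpp.service -f     # watch it load the model (Ctrl-C to stop)
echo '{"type": "describe"}' | nc -w 2 localhost 10030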

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to setup given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.

01 May, 2025 06:05PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs; the rest is smaller fixes and QoL improvements.

phosh

  • Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
  • Update adaptive app list (MR)
  • Fix missing icon when editing folders (MR)
  • Use StartupWMClass for better app-id matching (MR)
  • Fix failing CI tests, fix inverted logic, and add tests (MR)
  • Fix a sporadic test failure (MR)
  • Add support for "do not disturb" by adding a status page to feedback quick settings (MR)
  • monitor: Don't track make/model (MR)
  • Wi-Fi status page: Correctly show tick mark with multiple access points (MR)
  • Avoid broken icon in polkit prompts (MR)
  • Lockscreen auth cleanups (MR)
  • Sync mobile data toggle to sim lock too (MR)
  • Don't let the OSD display cover whole output with a transparent window (MR)

phoc

  • Allow to specify listening socket (MR)
  • Continue to catch up with wlroots git (MR)
  • Disconnect input-method signals on destroy (MR)
  • Disconnect gtk-shell and output signals on destroy (MR)
  • Don't init decorations too early (MR)
  • Allow to disable XWayland on the command line (MR)

phosh-mobile-settings

  • Allow to set overview wallpaper (MR)
  • Ask for confirmation before resetting favorites (MR)
  • Add separate volume controls for notifications, multimedia and alerts (MR)
  • Tweak warnings (MR)

pfs

  • Fix build on a single CPU (MR)

feedbackd

  • Move to fdo (MR)
  • Allow to set media-role (MR)
  • Doc updates (MR)
  • Sort LEDs by "usefulness" (MR)
  • Ensure multicolor LEDs have multiple components (MR)
  • Add example wireplumber config (MR)

feedbackd-device-themes

  • Release 0.8.2
  • Move to fdo (MR)
  • Override notification-missed-generic on fajita (MR)
  • Run ci-fairy here too (MR)
  • fajita: Add notification-missed-generic (MR)

gmobile

  • Build Vala support (vapi files) too (MR)
  • Add support for timers that can take the system out of suspend (MR)

Debian

git-buildpackage

  • Don't suppress dch errors (MR)
  • Release 0.9.38

wlroots

  • Get text-input-v3 a bit more in line with other protocols (MR)

ModemManager

  • Cell broadcast support for QMI modems (MR)

Libqmi

  • QMI channel setting (MR)
  • Switch to gi-docgen (MR)
  • loc: Fix since annotations (MR)

gnome-clocks

  • Add wakeup timer to take device out of suspend (MR)

gnome-calls

  • CallBox: Switch between text entry (for SIP) and dialpad (MR)

qmi-parse-kernel-dump

  • Allow to filter on message types and some other small improvements (MR)

xwayland-run

  • Support phoc (MR)

osmo-cbc

  • Small error handling improvements to osmo-cbc (MR)

phosh-nightly

  • Handle feedbackd fdo move (MR)

Blog posts

Bugs

  • Resuming of video streams fails with newer gstreamer (MR)

Reviews

This is not code by me but reviews on other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 May, 2025 12:30PM

Paul Wise

FLOSS Activities April 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Patches: notmuch-mutt patchset

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

01 May, 2025 04:00AM

Russ Allbery

Review: Beyond Pain

Review: Beyond Pain, by Kit Rocha

Series: Beyond #3
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 328

Beyond Pain is a science fiction dystopian erotic romance novel and a direct sequel to Beyond Control. Following the romance series convention, each book features new protagonists who were supporting characters in the previous book. You could probably start here if you wanted, but there are significant spoilers here for earlier books in the series. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Six has had a brutally hard life. She was rescued from an awful situation in a previous book and is now lurking around the edges of the Sector Four gang, oddly fascinated (as are we all) with their constant sexuality and trying to decide if she wants to, and can, be part of their world. Bren is one of the few people she lets get close: a huge bruiser who likes cage fights and pain but treats Six with a protective, careful respect that she finds comforting. This book is the story of Six and Bren getting to the bottom of each other's psychological hangups while the O'Kanes start taking over Six's former sector.

Yes, as threatened, I read another entry in the dystopian erotica series because I keep wondering how these people will fuck their way into a revolution. This is not happening very quickly, but it seems obvious that is the direction the series is going.

It's been a while since I've reviewed one of these, so here's another variation of the massive disclaimer: I think erotica is harder to review than any other genre because what people like is so intensely personal and individual. This is not even an attempt at an erotica review. I'm both wholly unqualified and also less interested in that part of the book, which should lead you to question my reading choices since that's a good half of the book.

Rather, I'm reading these somewhat for the plot and mostly for the vibes. This is not the most competent collection of individuals, and to the extent that they are, it's mostly because the men (who are, as a rule, charismatic but rather dim) are willing to listen to the women. What they are good at is communication, or rather, they're good about banging their heads (and other parts) against communication barriers until they figure out a way around them. Part of this is an obsession with consent that goes quite a bit deeper than the normal simplistic treatment. When you spend this much time trying to understand what other people want, you have to spend a lot of time communicating about sex, and in these books that means spending a lot of time communicating about everything else as well.

They are also obsessively loyal and understand the merits of both collective action and in making space for people to do the things that they are the best at, while still insisting that people contribute when they can. On the surface, the O'Kanes are a dictatorship, but they're run more like a high-functioning collaboration. Dallas leads because Dallas is good at playing the role of leader (and listening to Lex), which is refreshingly contrary to how things work in the real world right now.

I want to be clear that not only is this erotica, this is not the sort of erotica where there's a stand-alone plot that is periodically interrupted by vaguely-motivated sex scenes that you can skim past. These people use sex to communicate, and therefore most of the important exchanges in the book are in the middle of a sex scene. This is going to make this novel, and this series, very much not to the taste of a lot of people, and I cannot be emphatic enough about that warning.

But, also, this is such a fascinating inversion. It's common in media for the surface plot of the story to be full of sexual tension, sometimes to the extent that the story is just a metaphor for the sex that the characters want to have. This is the exact opposite of that: The sex is a metaphor for everything else that's going on in the story. These people quite literally fuck their way out of their communication problems, and not in an obvious or cringy way. It's weirdly fascinating?

It's also possible that my reaction to this series is so unusual as to not be shared by a single other reader.

Anyway, the setup in this story is that Six has major trust issues and Bren is slowly and carefully trying to win her trust. It's a classic hurt/comfort setup, and if that had played out in the way that this story often does, Bren would have taken the role of the gentle hero and Six the role of the person he rescued. That is not at all where this story goes. Six doesn't need comfort; Six needs self-confidence and the ability to demand what she wants, and although the way Beyond Pain gets her there is a little ham-handed, it mostly worked for me. As with Beyond Shame, I felt like the moral of the story is that the O'Kane men are just bright enough to stop doing stupid things at the last possible moment. I think Beyond Pain worked a bit better than the previous book because Bren is not quite as dim as Dallas, so the reader doesn't have to suffer through quite as many stupid decisions.

The erotica continues to mostly (although not entirely) follow traditional gender roles, with dangerous men and women who like attention. Presumably most people are reading these books for the sex, which I am wholly unqualified to review. For whatever it's worth, the physical descriptions are too mechanical for me, too obsessed with the precise structural assemblage of parts in novel configurations. I am not recommending (or disrecommending) these books, for a whole host of reasons. But I think the authors deserve to be rewarded for understanding that sex can be communication and that good communication about difficult topics is inherently interesting in a way that (at least for me) transcends the erotica.

I bet I'm going to pick up another one of these about a year from now because I'm still thinking about these people and am still curious about how they are going to succeed.

Followed by Beyond Temptation, an interstitial novella. The next novel is Beyond Jealousy.

Rating: 6 out of 10

01 May, 2025 03:46AM

April 30, 2025

Russell Coker

Simon Josefsson

Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel; the rebuilds turned out to be 42% bit-by-bit identical. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It “only” had to orchestrate building up to around 500 packages for each distribution and per architecture.

Differential reproducible rebuilds don’t give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower; primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting, making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap of the idempotent rebuilds idea below, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you could disassemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that on an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) with your own trusted toolchain. This is similar to a reproducible build.

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don’t solve the problem of how to trust the toolchain. Let’s assume the reproduce.debian.net effort succeeds and is able to reproduce the official Debian binaries 100% bit-by-bit identically, which appears to be within reach. To have trusted binaries we would “only” have to audit the source code for the latest version of the packages AND audit the toolchain used. There is no escaping from auditing all the source code — that’s what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is auditing the toolchain. With the reproduce.debian.net approach, that is a recursive problem going back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or are no longer applicable because the projects have evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I’m interested in. I want to be able to build the binaries that I use from source, using a toolchain that I can also build from source. And preferably all of this using the latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that reproduce.debian.net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don’t see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that when used as build inputs (potentially iteratively) leads to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today’s Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I have always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the ‘Fork’ button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until the artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we’ve seen with many software supply-chain security incidents over the past years, wherever “magic” is involved is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to set up a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one “build” job definition and one “deploy” job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.

The build job is simple. From within an official Debian container image it builds packages using dpkg-buildpackage, essentially by invoking the following commands.

# Enable deb-src entries and refresh the package lists
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
# Build as an unprivileged user inside a fixed build path
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
mkdir -p "$DDB_BUILDDIR"
chgrp build "$DDB_BUILDDIR"
chmod g+w "$DDB_BUILDDIR"
cd "$DDB_BUILDDIR"
# Fetch and unpack the source package, then build it from inside the source tree
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../${PACKAGE}_${VERSION}.build
su build -c "cd ${PACKAGE}-* && dpkg-buildpackage"
cd ..
# Collect the generated artifacts
mkdir out
mv -v $(find "$DDB_BUILDDIR" -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project, using Git-LFS to handle large objects, essentially something like this:

# Track the package pool with Git-LFS the first time this repository is used
if ! grep -qF 'pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
# Compute the pool sub-directory ("h" for hello, "libf" for libfoo, ...)
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the major difference is log and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
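
For illustration, a minimal sketch of storing one artifact in such a hierarchy could look like the following; the bucket name, the endpoint URL and the use of the aws CLI are assumptions, not the actual debdistbuild implementation:

#!/bin/sh
# Sketch: upload a file to S3 using the aa/bb/oid layout that git-lfs
# uses locally under .git/lfs/objects/ (the first two pairs of hex digits
# of the SHA-256 object id become directory levels).
FILE="$1"
OID=$(sha256sum "$FILE" | cut -d' ' -f1)
DST="$(echo "$OID" | cut -c1-2)/$(echo "$OID" | cut -c3-4)/$OID"
# Hypothetical bucket and S3-compatible endpoint; adjust for your provider.
aws --endpoint-url https://objectstorage.example.com \
    s3 cp "$FILE" "s3://debdistbuild-artifacts/$DST"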

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team have concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven’t bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3-hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as necessary to fill the archive with the required packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/ (see the sketch after this list).
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is some icache-related problem that affects the Java JVM and triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CD and debian-installer ISO’s.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh, for use during bootstrapping when neither packaging tool is available.
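
As an illustration of the trigger loop mentioned above, here is a minimal sketch using the GitLab pipeline trigger API; the project ID, the token variable and the package list file are hypothetical, not the actual orchestrator:

#!/bin/sh
# Sketch: trigger one pipeline per source package via the GitLab trigger API.
# PROJECT_ID and TRIGGER_TOKEN are hypothetical; packages.txt holds one
# source package name per line (e.g. derived from popcon).
PROJECT_ID=12345678
while read -r pkg; do
  curl --silent --request POST \
    --form "token=$TRIGGER_TOKEN" \
    --form "ref=main" \
    --form "variables[PACKAGE]=$pkg" \
    "https://gitlab.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"
done < packages.txt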

What do you think?

30 April, 2025 09:25AM by simon

Utkarsh Gupta

FOSS Activities in April 2025

Here’s my 67th monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 76th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here’s what I did:

  • Updating Matomo to v5.3.1.
  • Lots of bursary stuff for DC25. We rolled out the results for the first batch.
  • Helping Andreas Tille with and around FTP team bits.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 51st month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:

  • Released 25.04 Plucky Puffin! \o/
  • Helped open the 25.10 Questing Quokka archive. Let the development begin!
  • Jon, VP of Engineering, asked me to lead the Canonical Release team - that was definitely not something I saw coming. :)
  • We’re now doing Ubuntu monthly releases for the devel releases - I’ll be the tech lead for the project.
  • Preparing for the May sprints - too many new things and new responsibilities. :)

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my 67th month as a Debian LTS and 54th month as a Debian ELTS paid contributor.
Due to DC25 bursary work, Ubuntu 25.04 release, and other travel bits, I only worked for 2.00 hours for LTS and 4.50 hours for ELTS.

I did the following things:

  • [ELTS] Had already backported patches for adminer for the following CVEs:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/adminer.
    • As the same CVEs also affect LTS, we decided to release for LTS first and then for ELTS, but since I had no hours for LTS, I decided to do a bit more testing for ELTS to make sure things don’t regress in buster.
    • Will prepare LTS (and also s-p-u, sigh) updates this month and get back to ELTS thereafter.
  • [LTS] Started to prepare the LTS update for adminer for the same CVEs as for ELTS:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Haven’t fully backported the patch yet but this is what I intend to do for this month (now that I have hours :D).
  • [LTS] Partially attended the LTS meeting on Jitsi. Summary here.
    • “Partially” because I was fighting SSO auth issues with Jitsi. Looks like there were some upstream issues/activity and it was resulting in gateway crashes but all good now.
    • I was following the running notes and keeping up with things as much as I could. :)

Until next time.
:wq for today.

30 April, 2025 05:41AM

April 29, 2025

Petter Reinholdtsen

OpenSnitch 1.6.8 is now in Trixie

After some days of effort, I am happy to report that the great interactive application firewall OpenSnitch got a new version in Trixie, now with the Linux kernel based eBPF sniffer included for better accuracy. This new version made it possible for me to finally track down the rule required to avoid a deadlock when using it on a machine with the user home directory on NFS. The problematic connection originated from the Linux kernel itself, causing the /proc based version in Debian 12 to fail to properly attribute the connection and causing the OpenSnitch daemon to block while waiting for the Python GUI, which was unable to continue because the home directory was blocked waiting for the OpenSnitch daemon. A classic deadlock, which I reported upstream for a more permanent solution.

I really love the control over all the programs and web pages calling home that OpenSnitch gives me. Just today I discovered a strange connection to sb-ssl.google.com when I pulled up a PDF passed on to me via a Mattermost installation. It is sometimes hard to know which connections to block and which to let through, but after running it for a few months, the default rule set starts to handle most regular network traffic and I only have to have a look at the more unusual connections.

If you would like to know more about what your machine’s programs are doing, install OpenSnitch today. It is only an apt install opensnitch away. :)

I hope to get the 1.6.9 version in experimental into Trixie before the archive enters hard freeze. This new version should have no relevant changes not already in the 1.6.8-11 edition, as it mostly contains Debian patches, but I will give it a few days of testing to see if there are any surprises. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29 April, 2025 02:30PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Freexian partners with Invisible Things Lab to extend security support for Xen hypervisor

Freexian is pleased to announce a partnership with Invisible Things Lab to extend the security support of the Xen type-1 hypervisor version 4.17. Three years after its initial release, Xen 4.17, the version available in Debian 12 “bookworm”, will reach end-of-security-support status upstream in December 2025. The aim of our partnership with Invisible Things is to extend the security support until at least July 2027. We may also explore the possibility of extending the support until June 2028, to coincide with the end of the Debian 12 LTS support period.

The security support of Xen in Debian, from Debian 8 “jessie” through Debian 11 “bullseye”, ended before the end of each release’s life cycle. We therefore aim to significantly improve the situation of Xen in Debian 12. As with similar efforts, we would like to mention that this is an experiment and that we will do our best to make it a success. We also aim to extend the security support for Xen versions included in future Debian releases, including Debian 13 “trixie”.

In the long term, we hope that this effort will ultimately allow the Xen Project to increase the official security support period for Xen releases from the current three years to at least five years, with the extra work being funded by the community of companies benefiting from the longer support period.

If your company relies on Xen and wants to help sustain LTS versions of Xen, please reach out to us. For companies using Debian, the simplest way is to subscribe to Freexian’s Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Xen LTS when you send in your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to help you contribute.

In the mean time, this initiative has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the entire community of Debian and Xen users will benefit from this initiative.

For any queries you might have, please don’t hesitate to contact us at sales@freexian.com.

About Invisible Things Lab

Invisible Things Lab (ITL) offers low-level security consulting and auditing services for x86 virtualization technologies; C, C++, and assembly codebases; Intel SGX; binary exploitation and mitigations; and more. ITL also specializes in Qubes OS and Gramine consulting, including deployment, debugging, and feature development.

29 April, 2025 12:00AM

April 28, 2025

Scarlett Gately Moore

KDE Snaps and life. Spirits are up, but I need a little help please

I was just released from the hospital after a 3-day stay for my (hopefully) last surgery. There was concern about massive blood loss and a low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctor’s appointments, and food would be cool. I would appreciate any help, even just a dollar!

I am already back to work, and have continued work on the crashy KDE snaps in a non-KDE environment (this also affects anyone using kde-neon extensions, such as FreeCAD). I hope to have a fix in the next day or so.

Fixed kate bug https://bugs.kde.org/show_bug.cgi?id=503285

Thanks for stopping by.

28 April, 2025 01:04PM by sgmoore

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

ArgoCD Autopilot

For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects I was working on.

As now I have some spare time I’ve decided I’m going to play a little with Argo CD, Flux and Kluctl to test them and be able to use one of them in a real project in the future if it looks appropriate.

In this post I will use Argo CD Autopilot to install argocd on a local k3d cluster created using OpenTofu, to test the autopilot approach of managing argocd and to try the tool (as it manages argocd using a git repository, it can be used to test argocd as well).

Installing tools locally with arkade

Recently I’ve been using the arkade tool to install kubernetes-related applications on Linux servers and containers; I usually get the applications with it and install them in the /usr/local/bin folder.

For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them in the $HOME/.arkade/bin folder if missing (I’m assuming that docker is already available, as it is not installable with arkade):

#!/bin/sh

# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"

# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Install or update arkade
if command -v arkade >/dev/null; then
  echo "Trying to update the arkade application"
  sudo arkade update
else
  echo "Installing the arkade application"
  curl -sLS https://get.arkade.dev | sudo sh
fi

echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
  app_path="$(command -v $app)" || true
  if [ "$app_path" ]; then
    echo "The application '$app' already available on '$app_path'"
  else
    arkade get "$app"
  fi
done

cat <<EOF

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF

The rest of the scripts will add the binary directory to the PATH if it is missing, to make sure things work if something was installed there.

Creating a k3d cluster with opentofu

Although using k3d directly would be a good choice for creating the cluster, I’m using tofu to do it because that would probably be the tool used if we were working with cloud platforms like AWS or Google.

The main.tf file is as follows:

terraform {
  required_providers {
    k3d = {
      source  = "moio/k3d"
      version = "0.0.12"
    }
    sops = {
      source = "carlpett/sops"
      version = "1.2.0"
    }
  }
}

data "sops_file" "secrets" {
    source_file = "secrets.yaml"
}

resource "k3d_cluster" "argocd_cluster" {
  name    = "argocd"
  servers = 1
  agents  = 2

  image   = "rancher/k3s:v1.31.5-k3s1"
  network = "argocd"
  token   = data.sops_file.secrets.data["token"]

  port {
    host_port      = 8443
    container_port = 443
    node_filters = [
      "loadbalancer",
    ]
  }

  k3d {
    disable_load_balancer     = false
    disable_image_volume      = false
  }

  kubeconfig {
    update_default_kubeconfig = true
    switch_current_context    = true
  }

  runtime {
    gpu_request = "all"
  }
}

The k3d configuration is quite simple: as I plan to use the default traefik ingress controller with TLS, I publish port 443 on the host’s port 8443. I’ll explain how I add a valid certificate in the next step.

I’ve prepared the following script to initialize and apply the changes:

#!/bin/sh

set -e

# VARIABLES
# Default token for the argocd cluster
K3D_CLUSTER_TOKEN="argocdToken"
# Relative PATH to install the k3d cluster using terraform
K3D_TF_RELPATH="k3d-tf"
# Secrets yaml file
SECRETS_YAML="secrets.yaml"
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the k3d-tf dir
cd "$WORK_DIR/$K3D_TF_RELPATH" || exit 1

# Create secrets.yaml file and encode it with sops if missing
if [ ! -f "$SECRETS_YAML" ]; then
  echo "token: $K3D_CLUSTER_TOKEN" >"$SECRETS_YAML"
  sops encrypt -i "$SECRETS_YAML"
fi

# Initialize terraform
tofu init

# Apply the configuration
tofu apply

Adding a wildcard certificate to the k3d ingress

As an optional step, after creating the k3d cluster I’m going to add a default wildcard certificate for the traefik ingress server to be able to use everything with HTTPS without certificate issues.

As I manage my own DNS domain I’ve created the lo.mixinet.net and *.lo.mixinet.net DNS entries on my public and private DNS servers (both return 127.0.0.1 and ::1) and I’ve created a TLS certificate for both entries using Let’s Encrypt with Certbot.

The certificate is updated automatically on one of my servers, and when I need it I copy the contents of the fullchain.pem and privkey.pem files from the /etc/letsencrypt/live/lo.mixinet.net directory on that server to the local files lo.mixinet.net.crt and lo.mixinet.net.key.
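
The copy itself can be done with something as simple as the following commands (the certbot server host name is an assumption, and reading privkey.pem usually requires root access on that server):

# Hypothetical host that runs certbot for lo.mixinet.net
CERT_HOST="certserver.mixinet.net"
scp "$CERT_HOST:/etc/letsencrypt/live/lo.mixinet.net/fullchain.pem" lo.mixinet.net.crt
scp "$CERT_HOST:/etc/letsencrypt/live/lo.mixinet.net/privkey.pem" lo.mixinet.net.key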

After copying the files I run the following script to install or update the certificate and configure it as the default for traefik:

#!/bin/sh
# Script to install or update the traefik default TLS certificate
secret="lo-mixinet-net-ingress-cert"
cert="${1:-lo.mixinet.net.crt}"
key="${2:-lo.mixinet.net.key}"
if [ -f "$cert" ] && [ -f "$key" ]; then
  kubectl -n kube-system create secret tls $secret \
    --key=$key \
    --cert=$cert \
    --dry-run=client --save-config -o yaml  | kubectl apply -f -
  kubectl apply -f - << EOF
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system

spec:
  defaultCertificate:
    secretName: $secret
EOF
else
  cat <<EOF
To add or update the traefik TLS certificate the following files are needed:

- cert: '$cert'
- key: '$key'

Note: you can pass the paths as arguments to this script.
EOF
fi

Once it is installed, if I connect to https://foo.lo.mixinet.net:8443/ I get a 404, but the certificate is valid.
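
A quick way to check that from the command line (assuming the lo.mixinet.net names resolve to 127.0.0.1 as described above) is something like:

# Show the certificate subject and issuer plus the HTTP status returned by traefik
curl -vsI https://foo.lo.mixinet.net:8443/ 2>&1 | grep -E 'subject:|issuer:|HTTP/'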

Installing argocd with argocd-autopilot

Creating a repository and a token for autopilot

I’ll be using a project on my forgejo instance to manage argocd; the repository I’ve created is available at the URL https://forgejo.mixinet.net/blogops/argocd and I’ve created a private user named argocd that only has write access to that repository.

Logging in as the argocd user on forgejo, I’ve created a token with permission to read and write repositories, which I’ve saved in my pass password store under the mixinet.net/argocd@forgejo/repository-write entry.
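
Saving the token is a single pass invocation (it prompts for the value, and the entry name matches the one used by the bootstrap script below):

pass insert mixinet.net/argocd@forgejo/repository-write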

Bootstrapping the installation

To bootstrap the installation I’ve used the following script (it sets the GIT_REPO and GIT_TOKEN values mentioned above if they are not already defined in the environment):

#!/bin/sh

set -e

# VARIABLES
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the working directory
cd "$WORK_DIR" || exit 1

# Set GIT variables
if [ -z "$GIT_REPO" ]; then
  export GIT_REPO="https://forgejo.mixinet.net/blogops/argocd.git"
fi
if [ -z "$GIT_TOKEN" ]; then
  GIT_TOKEN="$(pass mixinet.net/argocd@forgejo/repository-write)"
  export GIT_TOKEN
fi

argocd-autopilot repo bootstrap --provider gitea

The output of the execution is as follows:

❯ bin/argocd-bootstrap.sh
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: "", installation path: ""
INFO using context: "k3d-argocd", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created

INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context 'autopilot' updated

INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:

    kubectl port-forward -n argocd svc/argocd-server 8080:80

Now we have the argocd installed and running, it can be checked using the port-forward and connecting to https://localhost:8080/ (the certificate will be wrong, we are going to fix that in the next step).

Updating the argocd installation in git

Now that we have the application deployed we can clone the argocd repository and edit the deployment to disable TLS for the argocd server (we are going to use TLS termination with traefik and that needs the server running as insecure, see the Argo CD documentation):

❯ git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
❯ cd argocd
❯ edit bootstrap/argo-cd/kustomization.yaml
❯ git commit -a -m 'Disable TLS for the argocd-server'
❯ git push

The changes made to the kustomization.yaml file are the following:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -11,6 +11,11 @@ configMapGenerator:
         key: git_username
         name: autopilot-secret
   name: argocd-cm
+  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
+- behavior: merge
+  literals:
+  - "server.insecure=true"
+  name: argocd-cmd-params-cm
 kind: Kustomization
 namespace: argocd
 resources:

Once the changes are pushed we sync the argo-cd application manually to make sure they are applied:

❯ argocd app sync argo-cd

As a test we can dump the argocd-cmd-params-cm ConfigMap to make sure everything is OK.
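
One way to do it (assuming the current kubectl context points to the k3d cluster) is the following command, whose output is shown below:

kubectl get configmap -n argocd argocd-cmd-params-cm -o yaml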

apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"server.insecure":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"argo-cd","app.kubernetes.io/name":"argocd-cmd-params-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cmd-params-cm","namespace":"argocd"}}
  creationTimestamp: "2025-04-27T17:31:54Z"
  labels:
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
  resourceVersion: "16731"
  uid: a460638f-1d82-47f6-982c-3017699d5f14

As this simply changes the ConfigMap, we have to restart the argocd-server for the new value to be read; to do it we delete the server pods so they are re-created using the updated resource:

❯ kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server

After doing this the port-forward command is killed automatically; if we run it again, the connection to the argocd-server has to be done using HTTP instead of HTTPS.

Instead of testing that, we are going to add an ingress definition to be able to connect to the server using HTTPS and gRPC on the address argocd.lo.mixinet.net, using the wildcard TLS certificate we installed earlier.

To do it we edit the bootstrap/argo-cd/kustomization.yaml file to add the ingress_route.yaml file to the deployment:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -20,3 +20,4 @@ kind: Kustomization
 namespace: argocd
 resources:
 - github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
+- ingress_route.yaml

The ingress_route.yaml file contents are the following:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default

After pushing the changes and waiting a little bit the change is applied and we can access the server using HTTPS and gRPC; the first can be tested from a browser and gRPC using the command line interface:

❯ argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.lo.mixinet.net:8443' updated
❯ argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root

So things are working fine … and that is all on this post, folks!

28 April, 2025 07:50AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, March 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In March, 20 contributors were paid to work on Debian LTS; their reports are available:

  • Adrian Bunk did 51.5h (out of 0.0h assigned and 51.5h from previous period).
  • Andreas Henriksson did 20.0h (out of 20.0h assigned).
  • Andrej Shadura did 6.0h (out of 10.0h assigned), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 12.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 12.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 26.0h (out of 23.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 37.0h (out of 36.5h assigned and 0.75h from previous period), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 8.25h (out of 11.0h assigned and 9.0h from previous period), thus carrying over 11.75h to the next month.
  • Jochen Sprickerhof did 18.0h (out of 24.25h assigned and 3.0h from previous period), thus carrying over 9.25h to the next month.
  • Lee Garrett did 10.25h (out of 0.0h assigned and 42.0h from previous period), thus carrying over 31.75h to the next month.
  • Lucas Kanashiro did 4.0h (out of 0.0h assigned and 56.0h from previous period), thus carrying over 52.0h to the next month.
  • Markus Koschany did 27.25h (out of 27.25h assigned).
  • Roberto C. Sánchez did 8.25h (out of 7.0h assigned and 17.0h from previous period), thus carrying over 15.75h to the next month.
  • Santiago Ruano Rincón did 17.5h (out of 19.75h assigned and 5.25h from previous period), thus carrying over 7.5h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Sylvain Beucler did 32.0h (out of 31.0h assigned and 1.25h from previous period), thus carrying over 0.25h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 7.75h (out of 12.0h assigned), thus carrying over 4.25h to the next month.
  • Utkarsh Gupta did 15.0h (out of 15.0h assigned).

Evolution of the situation

In March, we released 31 DLAs.

  • Notable security updates:
    • linux-6.1 (1, 2) and linux, prepared by Ben Hutchings, fixed an extensive list of vulnerabilities
    • firefox-esr, prepared by Emilio Pozuelo Monfort, fixed a variety of vulnerabilities
    • intel-microcode, prepared by Tobias Frost, fixed several local privilege escalation, denial of service, and information disclosure vulnerabilities
    • vim, prepared by Sean Whitton, fixed a multitude of vulnerabilities, including many application crashes, buffer overflows, and out-of-bounds reads

The recent trend of contributions from contributors external to the formal LTS team has continued. LTS contributor Sylvain Beucler reviewed and facilitated an update to openvpn proposed by Aquila Macedo, resulting in the publication of DLA 4079-1. Thanks a lot to Aquila for preparing the update.

The LTS Team continues to make contributions to the current stable Debian release, Debian 12 (codename “bookworm”). LTS contributor Bastien Roucariès prepared a stable upload of krb5 to ensure that fixes made in the LTS release, Debian 11 (codename “bullseye”) were also made available to stable users. Additional stable updates, for tomcat10 and jetty9, were prepared by LTS contributor Markus Koschany. And, finally, LTS contributor Utkarsh Gupta prepared stable updates for rails and ruby-rack.

LTS contributor Emilio Pozuelo Monfort has continued his ongoing improvements to the Debian security tracker and its associated tooling, making the data contained in the tracker more reliable and easing interaction with it.

The ckeditor3 package, which has been EOL by upstream for some time, is still depended upon by the PHP Horde packages in Debian. Sylvain, along with Bastien, did monumental work in coordinating with maintainers, security team fellows, and other Debian teams, to formally declare the EOL of the ckeditor3 package in Debian 11 and in Debian 12. Additionally, as a result of this work Sylvain has worked towards the removal of ckeditor3 as a dependency by other packages in order to facilitate the complete removal of ckeditor3 from all future Debian releases.

Thanks to our sponsors

Sponsors that joined recently are in bold.

28 April, 2025 12:00AM by Roberto C. Sánchez

Valhalla's Things

POLARVIDE modular jacket

Posted on April 28, 2025
Tags: madeof:atoms, craft:sewing

A woman with early morning hair wearing a knee-length grey polar fleece jacket; it is closed at the waist with a twine belt that makes the bound front edge go in a smooth curve between the one side of the neck to the other side of the waist and then flare back outwards on the hips. The sleeves are long enough to go over the hands, and both them and the hem are cut

Years ago I made myself a quick dressing gown from a white fleece IKEA throw and often wore it in the morning between waking up and changing into day clothes.

One day I want to make myself a fancy victorian wrapper, to use in its place, but that’s still in the early planning stage, and will require quite some work.

a free cat sitting half asleep on an old couch, with a formerly white piece of fabric draped between the armrest and the seat. A piece of cardboard between two seat pillows provides additional protection from the wind.

Then last autumn I discovered that the taxes I owed to the local lord (who provides protection from mice and other small animals) included not just a certain amount of kibbles, but also some warm textiles, and the dressing gown (which at this time was definitely no longer pristine) had to go.

For a while I had to do without a dressing gown, but then in the second half of this winter I had some time for a quick machine sewing project. I could not tackle the big victorian thing, but I still had a second POLARVIDE throw from IKEA (this time in a more sensible dark grey) I had bought with sewing intents.

The fabric in a throw isn’t that much, so I needed something pretty efficient, and rather than winging it as I had done the first time, I decided I wanted to try the Modular Jacket from A Year of Zero Waste Sewing (which I had bought in the zine instalments: the jacket is in the March issue).

After some measuring and decision taking, I found that I could fit most of the pieces and get a decent length, but I had no room for the collar, and probably not for the belt nor the pockets, but I cut all of the main pieces. I had a possible idea for a contrasting collar, but I decided to start sewing the main pieces and decide later, before committing to cutting the other fabric.

As I was assembling the jacket I decided that as a dressing gown I could do without the collar, and noticed that with the fraying-free plastic fleece I didn’t really need the front facings, so I cut those in half lengthwise, pieced them together, and used them as binding to finish the front end.

the back of the worn jacket, other than being cinched in by the belt it is pretty straight.

Since I didn’t have enough fabric for the belt I also skipped the belt loops, but I have been wearing this with random belts and I don’t feel the need for them anyway. I’ve also been thinking about adding a button just above the bust and use that to keep it closed, but I’m still not 100% sure about it.

Another thing I still need to do is to go through the few scraps of fleece that are left and see if I can piece together a serviceable pocket or two.

folding the sleeves back by a good 10 cm to show the hands.

Because of the size of the fabric, I ended up having quite long sleeves: I’m happy with them because they mean that I can cover my hands when it’s cold, or fold them back to make a nice cuff.

If I make a real jacket with this pattern I’ll have to take this into consideration, and either make the sleeves shorter or finish the seam in a way that looks nice when folded back.

Will I make a real jacket? I’m not sure: it’s not really my style of outer garment, but as a dressing gown it has already been used quite a bit (as in, almost every morning since I made it :) ) and it will continue to be used until it is too worn to be useful, and that’s a good thing.

28 April, 2025 12:00AM

April 27, 2025

hackergotchi for Marco d'Itri

Marco d'Itri

On the use of SaaS in systems engineering

“We want to use a hyperscaler cloud because it is cheaper to delegate operating a scalable and redundant database to a hyperscaler” is something that can be debated from business and technical points of view.

“We want to use a hyperscaler cloud because our developers do not want to operate a scalable and redundant database” just means that you need to hire competent developers and/or system administrators.

We must stop normalizing the idea that the people whose only skill is gluing together a few dozen AWS services can continue calling themselves developers. We should also find a sufficiently demeaning name to refer to them...

27 April, 2025 02:55PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Random IS-IS interop notes

Some random stuff about running IS-IS between FRR (on Linux) and IOS-XE (Cisco 3650 in my case):

Cisco uses the newer “key chain” idea, but FRR doesn't for IS-IS yet (it's supported for OSPF, though?), so the right way to interop seems to be:

# Cisco
key chain my-key
 key 100
   key-string password123

interface Vlan101
  ...
  isis authentication key-chain my-key

router isis
  ...
  authentication mode md5 level-2
  authentication key-chain my-key level-2

# FRR
interface vlan101
  ...
  isis password md5 password123

router isis null
  ...
  area-password md5 password123 authenticate snp validate

Simple enough stuff, except for “authenticate snp validate”; without it (or at least “authenticate snp send-only”), you'll get messages on the Cisco saying that PSNP messages failed auth.

Second, you can run “ipv6 unnumbered” (i.e., directly over link-local, no link nets needed), but “ip unnumbered” seems to... crash the Cisco? Couldn't get up the neighbor relation even with a point-to-point setting, at least. Perhaps safest to stay with a /31 link net. :-)

Also, FRR sometimes originates a default route, but I think this is a bug. “default-information originate ipv4 level-2 always” (and similar for ipv6) seems prudent on the upstream router.
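
For reference, in the FRR configuration of the upstream router that would go under the IS-IS router instance, roughly like this (a sketch only, reusing the router isis null instance name from the example above):

# FRR, upstream router
router isis null
  ...
  default-information originate ipv4 level-2 always
  default-information originate ipv6 level-2 always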

27 April, 2025 09:51AM

Valhalla's Things

Stickerses

Posted on April 27, 2025
Tags: madeof:atoms, madeof:bits, craft:graphics, topic:stickers

After just a few years of procrastination, I’ve given a wash of git-filter-repo to the repository where I keep my hexagonal sticker designs, removed a few failed experiments and stuff with dubious licensing, and was able to finally publish it among my public git repositories.

This repo includes the template I’m using, most of the stickers I’ve had printed, some that have been published elsewhere and have been printed by other people, as well as some that have never been printed and I may or may not print in the future.

The licensing details are in the metadata of each file, and mostly depend on the logos or cliparts used. Most, but not all, are under a free culture license.

My server is not set up to correctly serve the SVG files, yet: downloading them (from the “plain” links) should work, but I need to fix the content type that is provided. I will probably procrastinate doing it for quite some time, but eventually it will be done. Of course cloning the repository from the public https URL also works.

BRB, need to add MOAR stickerses.

27 April, 2025 12:00AM

April 26, 2025

John Goerzen

Memoirs of the Early Internet

The Internet is an amazing place, and occasionally you can find things on the web that have somehow lingered online for decades longer than you might expect.

Today I’ll take you on a tour of some parts of the early Internet.

The Internet, of course, is a “network of networks” and part of its early (and continuing) promise was to provide a common protocol that all sorts of networks can use to interoperate with each other. In the early days, UUCP was one of the main ways universities linked with each other, and eventually UUCP and the Internet sort of merged (but that’s a long story).

Let’s start with some Usenet maps, which were an early way to document the UUCP modem links between universities. Start with this PDF. The first page is a Usenet map (which at the time mostly flowed over UUCP) from April of 1981. Notice that ucbvax, a VAX system at Berkeley, was central to the map.

ucbvax continued to be a central node for UUCP for more than a decade; on page 5 of that PDF, you’ll see that it asks for a “Path from a major node (eg, ucbvax, decvax, harpo, duke)”. Pre-Internet email addresses used a path; eg, mark@ucbvax was duke!decvax!ucbvax!mark to someone. You had to specify the route from your system to the recipient on your email To line. If you gave out your email address on a business card, you would start it from a major node like ucbvax, and the assumption was that everyone would know how to get from their system to the major node.

On August 19, 1994, ucbvax was finally turned off. TCP/IP had driven UUCP into more obscurity; by then, it was mostly used by people without a dedicated Internet connection to get on the Internet, rather than as an entire communication network of its own. A few days later, Cliff Frost posted a memoir of ucbvax, an obscure bit of Internet lore that is fun to read.

UUCP was ad-hoc, and by 1984 there was an effort to make a machine-parsable map to help automate routing on UUCP. This was called the pathalias project, and there was a paper about it. The Linux network administration guide even includes a section on pathalias.

Because UUCP mainly flowed over phone lines, long distance fees made it quite expensive. In 1985, the Stargate Project was formed, with the idea of distributing Usenet by satellite. The satellite link was short-lived, but the effort eventually morphed into UUNET. It was initially a non-profit, but eventually became a commercial backbone provider, and later ISP. Over a long series of acquisitions, UUNET is now part of Verizon. An article in ;login: is another description of this history.

IAPS has an Internet in 1990 article, which includes both pathalias data and an interesting map of domain names to UUCP paths.

As I was pondering what interesting things a person could do with NNCPNET Internet email, I stumbled across a page on getting FTP files via e-mail. Yes, that used to be a thing! I remember ftpmail@decwrl.dec.com.

It turns out that page is from a copy of EFF’s (Extended) Guide to the Internet from 1994. Wow, what a treasure! It has entries such as A Slice of Life in my Virtual Community, libraries with telnet access, Gopher, A Statement of Principle by Bruce Sterling, and I could go on. You can also get it as a PDF from Internet Archive.

UUCP is still included with modern Linux and BSD distributions. It was part of how I experienced the PC and Internet revolution in rural America. It lacks modern security, but NNCP is to UUCP what ssh is to telnet.

26 April, 2025 06:22PM by John Goerzen

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.4.2-1 on CRAN: Another Small Upstream Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1245 other packages on CRAN, downloaded 39.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 628 times according to Google Scholar.

A new release arrived at CRAN yesterday with a fix for expmat() and adjustments for clang++-20. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.4.2-1 (2025-04-25)

  • Upgraded to Armadillo release 14.4.2 (Filtered Espresso)

    • Fix for expmat()

    • Workaround for bugs in clang 20 compiler

    • Micro-cleanup in one test file

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

26 April, 2025 01:33PM

John Goerzen

NNCPNET Can Optionally Exchange Internet Email

A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary.

Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc.

It’s a volunteer-run, free service. Give it a try!

26 April, 2025 01:01AM by John Goerzen

April 25, 2025

Simon Josefsson

GitLab Runner with Rootless Privilege-less Capability-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage of the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice, and can be purchased online. You also need a (mini-)ATX chassis, a power supply (~500W is more than sufficient), a PCI-to-M2 converter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything together, connect ATX power, and connect the cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box; the front-panel power button will then start your machine. There is a P550 user manual available.

Below I will guide you through installing the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configuring it to use Podman in rootless mode and without the --privileged flag, without any additional capabilities like SYS_ADMIN. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64, I’m waiting for you! I wouldn’t recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that can run a libre OS; all of them appear to require non-free blobs and usually a non-mainline kernel.

  • Log in on the console using the username ‘ubuntu‘ and password ‘ubuntu‘. You will be asked to change the password, so do that.
  • Start a terminal, gain root with sudo -i and change the hostname:
    echo jas-p550-01 > /etc/hostname
  • Connect ethernet and run: apt-get update && apt-get dist-upgrade -u.
  • If your system doesn’t have valid MAC addresses (they show up as ‘8c:00:00:00:00:00’ if you run ‘ip a’), you should fix this to avoid collisions when you install multiple P550’s on the same network. Connect the Debug USB-C connector on the back to one of the host’s USB-A slots. Use minicom (Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
  • For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
  • Reboot the machine and log in remotely from your laptop. Gain root, set up SSH public-key authentication, and disable SSH password logins:
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
  • With an NVMe device in the PCIe slot, create an LVM logical volume where the GitLab runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr

Now with a reasonable setup ready, let’s install the GitLab Runner. The following is adapted from gitlab-runner’s official installation documentation. The normal installation flow doesn’t work because they don’t publish riscv64 apt repositories, so you will have to perform upgrades manually.

# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb

Remember the NVMe device? Let’s not forget to use it, to avoid wear and tear on the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.

gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner
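
As a quick sanity check (not part of the original recipe, just standard commands) you can confirm that the runner’s home directory now lives on the NVMe-backed logical volume:

findmnt /home/gitlab-runner   # should show /dev/mapper/vg0-glr, type ext4
df -h /home/gitlab-runner     # confirms the size and free space of the 400G volume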

Next, register gitlab-runner and configure it. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project’s Settings -> CI/CD -> Runners -> New project runner. I used the tag ‘riscv64‘ and the hostname as the runner description.

gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable

Next we install Podman and configure gitlab-runner to use it as a non-root user.

apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner

You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?

# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user --now start podman.socket
loginctl enable-linger gitlab-runner gitlab-runner
systemctl status --user podman.socket

We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by systemctl status above. See the feature flags documentation for more details.

...
[[runners]]
  environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
  ...
  [runners.docker]
    host = "unix:///run/user/997/podman/podman.sock"

Note that unlike the documentation I do not add the ‘privileged = true‘ parameter here. I will come back to this later.

Restart the system, then confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.

dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set

Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:

journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service

To stop the graphical environment and disable some unnecessary services, you can use:

systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord

At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!

I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. As a workaround, you can use the Debian ‘aardvark-dns‘ binary instead:

wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian

My setup uses podman in rootless mode without passing the --privileged parameter or any --add-cap parameters to add non-default capabilities. This is sufficient for most builds. However, if you try to create a container using buildah from within a job, you may see errors like this:

Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1

According to the GitLab runner security considerations, you should not enable the ‘privileged = true’ parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, and I suppose running podman as root would too.

[[runners]]
  [runners.docker]
    privileged = true

Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement compared to using --privileged and also compared to running podman as root.

[[runners]]
  [runners.docker]
    privileged = false
    cap_add = ["SYS_ADMIN"]
    devices = ["/dev/fuse"]

Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing rootless Podman without the privileged flag that suggests --isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome.

Happy Riscv64 Building!

Update 2025-05-05: I was able to make it work without the SYS_ADMIN capability too, with a GitLab /etc/gitlab-runner/config.toml like the following:

[[runners]]
  [runners.docker]
    privileged = false
    devices = ["/dev/fuse"]

And passing --isolation chroot to Buildah like this:

buildah build --isolation chroot -t $CI_REGISTRY_IMAGE:name image/
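
For context, a complete job using this recipe might look roughly like the following .gitlab-ci.yml snippet. This is a hypothetical sketch, not taken from the original setup: the job name, the runner tag, the image/ build context directory and the choice of riscv64/debian:testing as the job image are all assumptions, and buildah is installed inside the job for simplicity ($CI_REGISTRY_IMAGE is a standard GitLab CI predefined variable).

build-image-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64-buildah ]   # placeholder tag for the separately configured runner
  script:
    - apt-get update && apt-get install -y buildah ca-certificates
    - buildah build --isolation chroot -t $CI_REGISTRY_IMAGE:test image/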

I’ve updated the blog title to add the word “capability-less” as well. I’ve confirmed that the same recipe works with podman on a ppc64el platform too. The remaining loopholes are escaping from the chroot into the non-root gitlab-runner user, and escalating that privilege to root. The /dev/fuse device and the sub-uid/gid ranges may be privilege escalation vectors here; otherwise, I believe you’ve found a serious software security issue rather than a configuration mistake.

25 April, 2025 06:30PM by simon

Ian Wienand

Avoiding layer shift on Ender V3 KE after pause

With (at least) the V1.1.0.15 firmware on the Ender V3 KE 3D printer, the PAUSE macro will cause the print head to run too far on the Y axis, which causes a small layer shift when the print resumes. I guess the idea is to expose the build plate as much as possible by moving the head as far to the side and back as possible, but the overrun and consequent belt slip unfortunately make it mostly useless; the main use of this is probably to switch filaments for two-colour prints.

Luckily you can fairly easily enable root access on the control pad from the settings menu. After doing this you can ssh to its IP address with the default password Creality2023.

From there you can modify the /usr/data/printer_data/config/gcode_macro.cfg file (vi is available) to change the details of the PAUSE macro. Find the section [gcode_macro PAUSE] and modify {% set y_park = 255 %} to a more reasonable value like 150. Save the file and reboot the pad so the printing daemons restart.
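
For illustration, the relevant part of the edited section ends up looking something like this. This is only a sketch: the surrounding macro content varies between firmware versions, and the y_park value is the only thing that matters here.

[gcode_macro PAUSE]
...
    {% set y_park = 150 %}   # was 255; the lower value avoids the Y-axis overrun and belt slip
...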

On PAUSE this then moves the head to the far left about half-way down, which works fine for filament changes. Hopefully a future firmware version will update this; I will update this post if I find it does.

c.f. Ender 3 V3 KE shifting layers after pause

25 April, 2025 11:30AM by Ian Wienand

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election has just concluded and the winner is Andreas Tille, who has been elected for the second time. Congratulations!

Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2025 page.

Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.

The new term for the project leader started on April 21st and will expire on April 20th 2026.

25 April, 2025 10:05AM by Jean-Pierre Giraud

April 24, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.26 on CRAN: Small Updates

A new minor release 0.4.26 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!), as it was one of the first packages I uploaded to CRAN.

This release of RQuantLib brings updated Windows build support taking advantage of updated Rtools, thanks to a PR by Tomas Kalibera. We also updated expected results for three of the ‘schedule’ tests (in a way that is dependent on the upstream library version) as the just-released QuantLib 1.38 differs slightly.

Changes in RQuantLib version 0.4.26 (2025-04-24)

  • Use system QuantLib (if found by pkg-config) on Windows too (Tomas Kalibera in #192)

  • Accommodate same test changes for schedules in QuantLib 1.38

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

24 April, 2025 10:27PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I set up my home automation I ended up with one piece that used an external service: Amazon Alexa. I’d rather not have done this, but voice control is extremely convenient, both for us and for guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I’ve had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just “Download our prebuilt components”. While I admire the efforts to get Home Assistant fully packaged for Debian, I accept that’s a tricky proposition and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have “voice satellites” in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I’ll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let’s start with some requirements:

  • All local processing; no call-outs to external services
  • Ability to have multiple voice satellites in the house
  • A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
  • As clean an install on a Debian stable based system as possible
  • Binaries built locally
  • No need for a GPU

My house server is an AMD Ryzen 7 5700G, so my expectation was that I’d have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I’m still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I’ll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I’m missing.

Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don’t care about what’s under the hood, there are a bunch of more user friendly details on Home Assistant’s site itself, and they have pre-built images you can just deploy.

My first step was sorting out a “voice satellite”. This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I’d seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn’t got around to setting up.

Here I’m going to gloss over exactly what’s going on under the hood, even though we’re compiling locally. This is a constrained embedded device, and while I’m familiar with the ESP32 IDF build system I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to “Hey Jarvis” rather than the default “Okay Nabu”, and that was a good reason to bother doing a local build. We’ll get into actually building a voice satellite on Debian in later posts.

I started with the default upstream assistant config and tweaked it a little for my setup:

diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: ${name}
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:


(I note that the current upstream config has moved on a bit since I first did this, but I double-checked that the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below because of that.)

It turns out to be fairly easy to set up ESPHome in a venv and get it to build + flash the image for you:

Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.


And then you can watch it boot (this is mine already configured up in Home Assistant):

Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 

[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 

[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set

[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6

[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32

[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6

[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6

[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760

[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760

[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440

[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled

[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled

[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06

[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5

[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4

[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD


That’s enough to get a voice satellite that can be configured in Home Assistant; you’ll need the ESPHome Integration added, and then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously generate your own; I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine).

If you’re like me and a compulsive VLANer and firewaller even within your own network then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it’ll send audio from that port to Home Assistant, then receive back audio to the same port).
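
As an illustration only, on an nftables-based firewall between the VLANs the rules could look something like the following. The table/chain names and the Echo’s address (192.168.39.60) are placeholders for whatever your setup uses, and established/related return traffic is assumed to be allowed by an existing conntrack rule.

# Home Assistant (192.168.39.6) -> ATOM Echo ESPHome API
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.60 tcp dport 6053 accept
# Voice audio streams on UDP port 6055, in both directions
nft add rule inet filter forward ip saddr 192.168.39.6 ip daddr 192.168.39.60 udp dport 6055 accept
nft add rule inet filter forward ip saddr 192.168.39.60 ip daddr 192.168.39.6 udp dport 6055 accept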

At this point you can now shout “Hey Jarvis, what time is it?” at the Echo, and the white light will turn flashing blue (indicating it’s heard the wake word). Which means we’re ready to teach Home Assistant how to do something with the incoming audio.

24 April, 2025 06:34PM

April 23, 2025

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the command ifclass in customization scripts you can now also use these expressions.
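
For example, a customization script could guard a step with such an expression. This is a sketch based on my reading of the release notes; I’m assuming the expression is passed as a single quoted argument, so check the ifclass documentation for the exact syntax.

#! /bin/bash
# Hypothetical sketch: only run this step for Ubuntu installs with XORG but not Mint.
if ifclass 'UBUNTU && XORG && ! MINT'; then
    echo "Applying Xorg tweaks for Ubuntu (non-Mint) installations"
fi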

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across reboots.

For the installation of Rocky Linux and AlmaLinux in a UEFI environment, some configuration files were added.

New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.

23 April, 2025 01:21PM

hackergotchi for Michael Prokop

Michael Prokop

Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly; it’s >20 years since I started the Grml project.

I’m giving a (German) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26, at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!

23 April, 2025 06:11AM by mika

Russell Coker

Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery so that indicates that it might be a power issue causing the crashes. I configured the BIOS to disable the TPM and that avoided the TCG messages and tunes on boot but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract which means that they will damage the screen more when the lid is closed (the screen was already damaged from the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display, but they seem a bit expensive at the moment. Currently there’s only one on eBay Australia, for $1200 ono.

23 April, 2025 05:11AM by etbe

April 22, 2025

Melissa Wen

2025 FOSDEM: Don't let your motivation go, save time with kworkflow

2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, and also many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted on some online forums and we had good and long conversations.

During FOSDEM 2025 I had the opportunity to present about kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks and it is the tool I use for my development tasks. In short, every contribution I make to the Linux kernel is assisted by kworkflow.

The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow into short commands. These commands are easy to configure and memorize for your current work setup, or for your multiple setups.

For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.

Something I didn’t emphasize during the presentation, and would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.

So, the video and slide deck of my FOSDEM presentation are available for download here.

And, as usual, in this blog post you will find the script of the presentation and a more detailed explanation of the demo presented there.


Kworkflow at FOSDEM 2025: Speaker Notes and Demo

Hi, I’m Melissa, a GPU kernel driver developer at Igalia and today I’ll be giving a very inclusive talk to not let your motivation go by saving time with kworkflow.

So, you’re a kernel developer, or you want to be a kernel developer, or you don’t want to be a kernel developer. But you’re all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.

And that’s a given change for a given distribution, or for a given device, or for a given subsystem…

Look at this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.

So, whether you are a kernel developer or not, at some point you may come across this type of situation:

There is a userspace developer who wants to report a kernel issue and says:

  • Oh, there is a problem in your driver that can only be reproduced by running this specific distribution. And the kernel developer asks:
  • Oh, have you checked if this issue is still present in the latest kernel version of this branch?

But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter manages to compile and deploy a custom kernel and reports:

  • Sorry for the delay, this is the first time I have installed a custom kernel. I am not sure if I did it right, but the issue is still present in the kernel of the branch you pointed out.

And then the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they just create a new script, essentially the same script the reporter already created.

What’s the problem with this situation? The problem is that you keep creating new scripts!

Every time you change distribution, architecture, hardware, or project - even in the same company the development setup may change when you switch projects - you create another script for your new kernel development workflow!

You know, you have a lot of babies, you have a collection of “my precious scripts”, like Sméagol (Lord of the Rings) with the precious ring.

Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have. This is a Raspberry Pi 4 script and contains everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.

With kworkflow, you only need to memorize two commands, and those commands are not specific to Raspberry Pi. They are the same commands for different architectures, kernel configurations and target devices.
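
For the record, the two commands in question are the build and deploy ones described later in this post (they can also be combined as kw bd):

kw b   # kw build: compile the kernel using the settings of the current environment
kw d   # kw deploy: install the resulting kernel on the configured target device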

What is kworkflow?

Kworkflow is a collection of tools and software combined to:

  • Optimize Linux kernel development workflow.
  • Reduce time spent on repetitive tasks, since we are spending our lives compiling kernels.
  • Standardize best practices.
  • Ensure reliable data exchange across the kernel workflow. For example: two people describe the same setup but are not seeing the same thing; kworkflow can ensure both are actually running the same kernel, with the same modules and options enabled.

I don’t know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.

What are the main features of kworkflow?

There are many, but these are the most important for me:

  • Build & deploy custom kernels across devices & distros.
  • Handle cross-compilation seamlessly.
  • Manage multiple architectures, settings and target devices in the same work tree.
  • Organize kernel configuration files.
  • Facilitate remote debugging & code inspection.
  • Standardize Linux kernel patch submission guidelines. You don’t need to double-check the documentation, and Greg doesn’t need to tell you that you are not following the Linux kernel guidelines.
  • Upcoming: Interface to bookmark, apply and “reviewed-by” patches from mailing lists (lore.kernel.org).

This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.

# Manage kw and kw configurations
kw init             - Initialize kw config file
kw self-update (u)  - Update kw
kw config (g)       - Manage kw configuration options

The second subset is to build and deploy custom kernels.

# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b)        - Build kernel
kw deploy (d)       - Deploy kernel image (local/remote)
kw bd               - Build and deploy kernel

We have some tools to manage and interact with target machines.

# Manage and interact with target machines
kw ssh (s)          - SSH support
kw remote (r)       - Manage machines available via ssh
kw vm               - QEMU support

To inspect and debug a kernel.

# Inspect and debug
kw device           - Show basic hardware information
kw explore (e)      - Explore string patterns in the work tree and git logs
kw debug            - Linux kernel debug utilities
kw drm              - Set of commands to work with DRM drivers

To automate best practices for patch submission: code style checks, maintainers, and the correct list of recipients and mailing lists for a change, to ensure we are sending the patch to whoever is interested in it.

# Automate best practices for patch submission
kw codestyle (c)    - Check code style
kw maintainers (m)  - Get maintainers/mailing list
kw send-patch       - Send patches via email

And the last one, the upcoming patch hub.

# Upcoming
kw patch-hub        - Interact with patches (lore.kernel.org)

How can you save time with Kworkflow?

So how can you save time building and deploying a custom kernel?

First, you need a .config file.

  • Without kworkflow: You may be manually extracting and managing .config files from different targets and saving them with different suffixes to link the kernel to the target device or distribution, or any descriptive suffix to help identify which is which. Or even copying and pasting from somewhere.
  • With kworkflow: you can use the kernel-config-manager command, or simply kw k, to store, describe and retrieve a specific .config file very easily, according to your current needs.

Then you want to build the kernel:

  • Without kworkflow: You are probably now memorizing a combination of commands and options.
  • With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like the number of modules.

Finally, to deploy the kernel in a target machine.

  • Without kworkflow: You might be doing things like: SSH connecting to the remote machine, copying and removing files according to distributions and architecture, and manually updating the bootloader for the target distribution.
  • With kworkflow: you just need kw d, which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.

You can also save time on debugging kernels locally or remotely.

  • Without kworkflow: you SSH in, manually set up and enable traces, and copy & paste logs.
  • With kworkflow: more straightforward access to debug utilities: events, trace, dmesg.

You can save time on managing multiple kernel images in the same work tree.

  • Without kworkflow: you may be cloning the same repository multiple times so you don’t lose compiled files when changing kernel configuration or compilation options, and manually managing build and deployment scripts for each copy.
  • With kworkflow: you can use kw env to isolate multiple contexts in the same worktree as environments, so you can keep different configurations in the same worktree and switch between them easily without losing anything from the last time you worked in a specific context.

Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.

This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series from a specific mailing list, bookmark those patches for validation, and, when you are satisfied with the proposed changes, automatically submit a Reviewed-by for the whole series to the mailing list.


Demo

Now a demo of how to use kw environment to deal with different devices, architectures and distributions in the same work tree without losing compiled files, build and deploy settings, .config file, remote access configuration and other settings specific for those three devices that I have.

Setup

  • Three devices:
    • laptop (debian x86 intel local)
    • SteamDeck (steamos x86 amd remote)
    • RaspberryPi 4 (raspbian arm64 broadcomm remote)
  • Goal: To validate a change on DRM/VKMS using a single kernel tree.
  • Kworkflow commands:
    • kw env
    • kw d
    • kw bd
    • kw device
    • kw debug
    • kw drm

Demo script

In the same terminal and worktree.

First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian|arm64|broadcomm|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # deploy and install custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change

Q&A

Most of the questions raised at the end of the presentation were actually suggestions and requests for new features in kworkflow.

The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from patchwork (or lore) and triggering the process of building, deploying and validating them using the existing workflow, and (2) bisecting support. They are both very interesting features. The first one fits well in the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.

Another request was to use kexec and avoid rebooting the kernel for testing. Reviewing my presentation I realized I wasn’t very clear that kworkflow doesn’t support kexec. As I replied, what it does is install the modules, and you can load/unload them for validation, but for built-in parts you need to reboot into the new kernel.

Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about supporting alternative ways of booting when the custom kernel ends up broken but you only have one kernel image there. Kworkflow doesn’t manage it yet, but I agree this is a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates this issue by preserving the distro kernel image and using the config.txt file to set a custom kernel for booting. For ADB there is no support either, and as I don’t currently see users of KW working with Android, I don’t think we will have this support any time soon, unless we find new volunteers and increase the pool of contributors.
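
For illustration, the config.txt mechanism boils down to pointing the firmware at the custom image while the distro kernel stays on the boot partition as a fallback. The file path and image name below are assumptions; they depend on the Raspbian release and on how the kernel was deployed.

# /boot/firmware/config.txt (the path varies between Raspbian / Raspberry Pi OS releases)
kernel=kernel-custom.img   # comment this line out to fall back to the distro kernel (kernel8.img)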

The last two questions were regarding the status of b4 integration, which is under development, and other debugging features that the tool doesn’t support yet.

Finally, when Andrea and I were swapping places on the stage, he suggested adding support for virtme-ng to kworkflow. So I opened an issue to track this feature request in the project’s GitHub.

With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as proposed by kworkflow. Also, there are still many cases to be covered by kworkflow.

Despite the high demand, this is a completely voluntary project and it is unlikely that we will be able to meet these needs given the limited resources. We will keep trying our best in the hope we can increase the pool of users and contributors too.

22 April, 2025 07:30PM