October 24, 2021


Sean Whitton

Deploying containers with Consfigurator

For some months now I’ve been working on some patches to Consfigurator to add support for Linux containers. My goal is to make Consfigurator capable of both performing the initial setup of a container and of entering the running container to apply configuration. For the case of unprivileged LXCs running as non-root, my work-in-progress branch can now do both of these things. As Consfigurator enters the container directly using system calls, it should be decently fast at configuring multiple containers on a host, and it will also be possible to have it do this in parallel. The initial setup for the container uses Consfigurator’s existing support for building root filesystems, and it should be easy to extend that to support arbitrary GNU/Linux distributions by teaching Consfigurator how to invoke bootstrapping tools other than debootstrap(8).

Here’s an example:

(defhost lxc1.silentflame.com ()
  (os:debian-stable "bullseye" :amd64)
  (basic-props)
  (apt:installed "systemd" "netcat")
  (apache:https-vhost ...))

(defhost lxctest.laptop.silentflame.com ()
  (os:debian-stable "bullseye" :amd64)
  (apt:proxy "http://192.168.122.1:3142")
  (basic-props)
  (apt:installed "linux-image-amd64" "lxc")

  (lxc:usernet-usable-by "spwhitton" "lxcbr0")
  (lxc:user-containers-autostart "spwhitton")
  (lxc:user-container-for '(:additional-lines
                            ("lxc.net.0.type = veth"
                             "lxc.net.0.flags = up"
                             "lxc.net.0.link = lxcbr0"
                             ...))
                          "spwhitton"
                          lxc1.silentflame.com))

(defhost laptop.silentflame.com ()
  ...
  (libvirt:kvm-boots-chroot-for '(:always-deploys t)
                                lxctest.laptop.silentflame.com))

This code is a simplified definition of my testing setup for this work. It defines three hosts: a container lxc1, a container host lxctest, and my laptop. When Consfigurator is asked to deploy the laptop, it will set up the root filesystem for lxctest and then boot it as a KVM virtual machine. Preparing that root filesystem will include setting up the root filesystem for lxc1, too, including shifting the ownership and ACLs to match the user namespace LXC will use when booting the container. Thus, once the deployment of the laptop is finished, it will be possible to boot the lxctest VM, connect to it as the user spwhitton, and start lxc1.

Consfigurator includes only minimal support for setting up container networking, as there are so many different ways in which you might want to do it. In my own consfig I’ve been developing properties to connect containers directly to my tinc VPN. A single tinc daemon runs on the container host, and other tinc daemons route a whole subnet, containing the addresses for each of the containers, to the container host’s tinc daemon. As the LXCs Consfigurator sets up run as non-root, some sort of setuid facility is required to configure this networking. Consfigurator’s ability to dump executable Lisp images is helping here. I define a function which runs as root to set up the networking:

(defun route-athenet-container-veth (host)
  (let ((user (getenv "USERV_USER"))
        (peer (getenv "USERV_U_PEER"))
        (ip (car (uiop:command-line-arguments))))
    (unless (string-prefix-p (format nil "veth~D_" (getenv "USERV_UID")) peer)
      (error "~A does not belong to requester." peer))
    (unless (member (cons user ip) (get-hostattrs 'veth-ips host) :test #'equal)
      (error "~A does not have permission to route ~A." user ip))
    (flet ((r (&rest args)
             ;; Explicitly passing nil means UIOP will not invoke a shell.
             (run-program args :force-shell nil)))
      (eswitch ((getenv "USERV_U_HOOK_TYPE") :test #'string=)
        ("up"
         (apply #'r
                "sysctl" "-w"
                "net.ipv6.conf.all.forwarding=1"
                ...)
         (r "ip" "addr" "flush" "dev" peer "scope" "link")
         (r "ip" "-6" "addr" "add" "fe80::1/64" "dev" peer)
         (r "ip" "-6" "route" "add" (strcat ip "/128") "dev" peer)
         ...)
        ("down"
         ...
         (r "ip" "-6" "route" "del" (strcat ip "/128") "dev" peer))))))

and then apply the following property to lxctest to dump an image which will call this function and then exit:

(image-dumped
 "/usr/lib/userv/route-athenet-container-veth"
 `(route-athenet-container-veth ,(intern (string-upcase (get-hostname)))))

I’m using GNU userv to enable ordinary users to run this image as root, so there’s a small script which converts LXC’s LXC_HOOK_* environment variables into appropriate command line arguments to userv(1), such that the function above is able to access that information from its environment (the USERV_U_* variables above). You could just as easily do this with sudo, by giving permission for the relevant LXC_HOOK_* environment variables to survive the switch to root.
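To make the shape of that wrapper concrete, here is a minimal sketch. LXC_NET_PEER and LXC_HOOK_TYPE are real LXC hook variables, and userv's --defvar option does surface values as USERV_U_* in the service's environment; the userv service name and the way the container's IP reaches the script are assumptions about my setup.

#!/bin/sh
# Hypothetical lxc.net.0.script.up/down wrapper: LXC runs this as the
# unprivileged user, and userv escalates to the dumped Lisp image.
ip="$1"   # hypothetical: container IP supplied by the hook configuration
exec userv --defvar "PEER=$LXC_NET_PEER" \
           --defvar "HOOK_TYPE=$LXC_HOOK_TYPE" \
           root route-athenet-container-veth "$ip"

The sudo variant would instead need something like Defaults env_keep += "LXC_NET_PEER LXC_HOOK_TYPE" in sudoers, so that the hook variables survive the switch to root.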

What’s particularly nice about this is that there’s no need to write any code to keep a config file updated, specifying which users are allowed to route which IPs to their containers. ROUTE-ATHENET-CONTAINER-VETH receives a HOST value for the container host and can just look at the metadata established by properties for particular containers. Each time this metadata is updated and lxctest is deployed, a fresh image is dumped containing the updated metadata.

This work has provided opportunities to make various other improvements to Consfigurator, especially with regard to dumping and reinvoking images. Making SBCL capable of entering user namespaces required a change upstream, which made it into the recent SBCL 2.1.8 release. I’m very grateful to the SBCL developers for their engagement with my project. I’ve been able to add a workaround so that Consfigurator can still enter user namespaces when run on the version of SBCL included in Debian stable. I also discovered that deploying all of my laptop, lxctest and lxc1 at once generates enough output to fill up a pipe, thus revealing a deadlock in Consfigurator’s IPC, which it was good to become aware of and fix. That involved writing my first multi-threaded Lisp, as there are two pipes that need to be kept from filling up, and to my surprise it worked first time. Take that Haskell :)

24 October, 2021 04:17PM

Petter Reinholdtsen

Debian still an excellent choice for Lego builders

The Debian Lego team has seen a lot of activity in the last few weeks. All the packages under the team umbrella have been updated to fix packaging and lintian issues and to address BTS reports. In addition, a new and inspiring team member appeared on both the debian-lego-team mailing list and the IRC channel #debian-lego. If you are interested in Lego CAD design and LEGO Mindstorms programming, check out the team wiki page to see what Debian can offer the Lego enthusiast.

Patches have been sent upstream, resulting in new upstream releases, one of them even the first in more than ten years, and old upstream releases were replaced with new ones. There is still a lot of work left, and the team welcomes more members to help us make sure Debian is the Linux distribution of choice for Lego builders. If you want to contribute, join us in the IRC channel and become part of the team on Salsa.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24 October, 2021 05:10AM

October 23, 2021


Dirk Eddelbuettel

RcppQuantuccia 0.0.5 on CRAN: Updated and Calendar Focus

Another new release of RcppQuantuccia arrived on CRAN today, just a couple of days after the previous release. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib and brings it to R.

As of this release, it concentrates on calendaring functionality, taking advantage of the extensive collection of country-specific holiday information in QuantLib. The release updates the included code to the most recent QuantLib release. We added one calendar (for Brazil) and one utility function (exporting all business days in a given range, the simple complement to the existing holiday list getter).
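For instance, a quick sketch of the new getter; this assumes the Brazil calendar is registered under its plain QuantLib name:

> library(RcppQuantuccia)
> setCalendar("Brazil")    # assumed calendar identifier
> getBusinessDays(as.Date("2021-12-20"), as.Date("2021-12-31"))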

The complete list of changes follows.

Changes in version 0.0.5 (2021-10-23)

  • Refocused on calendaring functionality only, removed daycounters/, math/, methods/, models/, plus other unused headers

  • Fully updated to (current) QuantLib release 1.24

  • Added getBusinessDays() to retrieve range of dates

  • Added Brazil calendar

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 October, 2021 04:54PM

October 22, 2021

Enrico Zini

Scanning for imports in Python scripts

I had to package a nontrivial Python codebase, and I needed to put dependencies in setup.py.

I could do git grep -h import | sort -u, then review the output by hand, but I lacked the motivation for it. Much better to take a stab at solving the general problem.

The result is at https://github.com/spanezz/python-devel-tools.

One fun part is scanning a directory tree, using ast to find import statements scattered around the code:

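# Context assumed from the rest of the module: imports of os, stat, ast,
# logging (providing log) and typing's Set/TextIO, plus the helpers
# dirfd_open() and re_python_shebang defined elsewhere in the repository.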
class Scanner:
    def __init__(self):
        self.names: Set[str] = set()

    def scan_dir(self, root: str):
        for dirpath, dirnames, filenames, dir_fd in os.fwalk(root):
            for fn in filenames:
                if fn.endswith(".py"):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        self.scan_file(fd, os.path.join(dirpath, fn))
                else:
                    # Not named *.py: only scan executables that start with
                    # a Python shebang (and avoid scanning .py files twice)
                    st = os.stat(fn, dir_fd=dir_fd)
                    if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                        with dirfd_open(fn, dir_fd=dir_fd) as fd:
                            try:
                                lead = fd.readline()
                            except UnicodeDecodeError:
                                continue
                            if re_python_shebang.match(lead):
                                fd.seek(0)
                                self.scan_file(fd, os.path.join(dirpath, fn))

    def scan_file(self, fd: TextIO, pathname: str):
        log.info("Reading file %s", pathname)
        try:
            tree = ast.parse(fd.read(), pathname)
        except SyntaxError as e:
            log.warning("%s: file cannot be parsed", pathname, exc_info=e)
            return

        self.scan_tree(tree)

    def scan_tree(self, tree: ast.AST):
        for stm in tree.body:
            if isinstance(stm, ast.Import):
                for alias in stm.names:
                    if not isinstance(alias.name, str):
                        print("NAME", repr(alias.name), stm)
                    self.names.add(alias.name)
            elif isinstance(stm, ast.ImportFrom):
                if stm.module is not None:
                    self.names.add(stm.module)
            elif hasattr(stm, "body"):
                self.scan_tree(stm)

Another fun part is grouping the imported module names by where in sys.path they have been found:

    scanner = Scanner()
    scanner.scan_dir(args.dir)

    sys.path.append(args.dir)
    by_sys_path: Dict[str, List[str]] = collections.defaultdict(list)
    for name in sorted(scanner.names):
        spec = importlib.util.find_spec(name)
        if spec is None or spec.origin is None:
            by_sys_path[""].append(name)
        else:
            for sp in sys.path:
                if spec.origin.startswith(sp):
                    by_sys_path[sp].append(name)
                    break
            else:
                by_sys_path[spec.origin].append(name)

    for sys_path, names in sorted(by_sys_path.items()):
        print(f"{sys_path or 'unidentified'}:")
        for name in names:
            print(f"  {name}")

An example. It's kind of nice how it can at least tell apart stdlib modules, so one doesn't need to read through those:

$ ./scan-imports …/himblick
unidentified:
  changemonitor
  chroot
  cmdline
  mediadir
  player
  server
  settings
  static
  syncer
  utils
…/himblick:
  himblib.cmdline
  himblib.host_setup
  himblib.player
  himblib.sd
/usr/lib/python3.9:
  __future__
  argparse
  asyncio
  collections
  configparser
  contextlib
  datetime
  io
  json
  logging
  mimetypes
  os
  pathlib
  re
  secrets
  shlex
  shutil
  signal
  subprocess
  tempfile
  textwrap
  typing
/usr/lib/python3/dist-packages:
  asyncssh
  parted
  progressbar
  pyinotify
  setuptools
  tornado
  tornado.escape
  tornado.httpserver
  tornado.ioloop
  tornado.netutil
  tornado.web
  tornado.websocket
  yaml
built-in:
  sys
  time

Maybe such a tool already exists and works much better than this? From a quick search I didn't find it, and it was fun to (re)invent it.

Updates:

Jakub Wilk pointed out an old python-modules script that finds Debian dependencies.

The AST scanning code should be refactored to use ast.NodeVisitor.
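A minimal sketch of what that refactoring could look like; compared to the hasattr(stm, "body") recursion above, NodeVisitor's generic_visit() also descends into else branches and exception handlers:

import ast
from typing import Set

class ImportVisitor(ast.NodeVisitor):
    """Collect imported module names; ast walks the tree for us."""

    def __init__(self) -> None:
        self.names: Set[str] = set()

    def visit_Import(self, node: ast.Import) -> None:
        for alias in node.names:
            self.names.add(alias.name)

    def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
        if node.module is not None:
            self.names.add(node.module)

# usage:
#   visitor = ImportVisitor()
#   visitor.visit(ast.parse(source, pathname))
#   names = visitor.names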

22 October, 2021 08:52PM

Reproducible Builds (diffoscope)

diffoscope 188 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 188. This version includes the following changes:

* Add support for Python Sphinx inventory files, usually named objects.inv.
* Fix Python bytecode decompilation tests with Python 3.10+.
  (Closes: reproducible-builds/diffoscope#278)

You can find out more by visiting the project homepage.

22 October, 2021 12:00AM

October 21, 2021


Norbert Preining

KDE/Plasma 5.23 “25th Anniversary Edition” for Debian

In the last week, KDE released version 5.23 – 25th Anniversary Edition – of the Plasma desktop, with the usual long list of updates and improvements. This release celebrates 25 years of KDE: Plasma 5.23.0 was released 25 years to the day after Matthias Ettrich sent an email to the de.comp.os.linux.misc newsgroup explaining a project he was working on. And Plasma 5.23 (with the bug fix release 5.23.1) is now available for all Debian releases. (And don’t forget KDE Gears/Apps 21.08!)

As usual, I am providing packages via my OBS builds. If you have used my packages till now, you only need to change the plasma522 line to read plasma523. To give full details, I repeat (and update) the instructions for all here: first of all, you need to add my OBS key, say in /etc/apt/trusted.gpg.d/obs-npreining.asc, and add a file /etc/apt/sources.list.d/obs-npreining-kde.list containing the following lines, replacing the DISTRIBUTION part with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2108/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/DISTRIBUTION/ ./
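As for the OBS key mentioned above, something along these lines should work, assuming OBS's usual Release.key location for the repository (treat the exact URL as a sketch):

curl -fsSL https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/Release.key \
  | sudo tee /etc/apt/trusted.gpg.d/obs-npreining.asc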

The sharp-eyed might also have noticed the apps2108 line: yes, the KDE Gear suite of packages has been updated to 21.08 some time ago and is also available in my OBS builds (and in Debian/experimental).

Uploads to Debian

Plasma 5.23.0 has been uploaded to Debian and is currently in transition to testing. Due to incorrect/insufficient Breaks/Depends, the official Plasma packages in Debian/testing are currently broken. And as it looks, this situation will continue for a considerable time, considering that kwin is blocked by mesa, which in turn is blocked by llvm-toolchain-12, which has quite a few RC bugs preventing it from transitioning. What a bad coincidence.

KDE Gears 21.08 are all in Debian Unstable and Testing, so the repositories here are mostly for users of Debian Bullseye (stable).

Krita beta

Krita has released the second beta of Krita 5.0, and it is available from the krita-beta repository, but only for the amd64 architecture. Just add

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/krita-beta/DISTRIBUTION/ ./

Enjoy the new Plasma!

21 October, 2021 12:31AM by Norbert Preining

October 20, 2021

Ian Jackson

Going to work for the Tor Project

I have accepted a job with the Tor Project.

I joined XenSource to work on Xen in late 2007, as XenSource was being acquired by Citrix. So I have been at Citrix for about 14 years. I have really enjoyed working on Xen, alongside a variety of great people. I'm very proud of some of the things we built and achieved. I'm particularly proud of being part of a community that has provided the space for some of my excellent colleagues to really grow.

But the opportunity to go and work on a project I am so ideologically aligned with, and to work with Rust full-time, is too good to resist. That Tor is mostly written in C is quite terrifying, and I'm very happy that I'm going to help fix that!




20 October, 2021 05:55PM

Arturo Borrero González

Iterating on how we do NFS at Wikimedia Cloud Services


This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

NFS is a central piece of infrastructure that is essential to services like Toolforge. Recently, the Cloud Services team at Wikimedia has been reviewing how we do NFS.

The current situation

NFS is a central piece of technology for some of the services that the Wikimedia Cloud Services team offers to the community. We have several shares that power different use cases: Toolforge user home directories live on NFS, and Cloud VPS users can also access dumps using this protocol. The current setup involves several physical hardware servers, with about 20TB of storage, offering shares over 10G links to the cloud. To make the system more fault-tolerant, we duplicate each share using DRBD.

Running NFS on dedicated hardware servers has traditionally offered us advantages, mostly in terms of performance and capacity.

As time has passed, we have been enumerating more and more reasons to review how we do NFS. For one, the current setup violates some of our internal rules regarding realm separation. Additionally, we had been longing for more flexibility in managing our servers: we wanted to use virtual machines managed by Openstack Nova. The DRBD-based high-availability system required a mostly hand-crafted procedure for failover/failback. There are also some scalability concerns: NFS is easy to grow vertically, but not horizontally, and of course we have to be able to keep the tenancy setup while doing so, something that NFS does using LDAP/Unix users and which may also get complicated when growing. In general, the servers have become ‘too big to fail’, clearly technical debt, and it has taken us years to decide to take on the task of rethinking the architecture. It’s worth mentioning that in an ideal world we wouldn’t depend on NFS, but the truth is that it will remain a central piece of infrastructure for years to come in services like Toolforge.

Over a series of brainstorming meetings, the WMCS team evaluated the situation and sorted out the many moving parts. The team managed to boil down the potential service future to two competing options:

  • Adopt and introduce a new Openstack component into our cloud: Manila — this was the right choice if we were interested in a general NFS as a service offering for our Cloud VPS users.
  • Put the data on Cinder volumes and serve NFS from a couple of virtual machines created by hand — this was the right choice if we wanted something that required low effort to engineer and adopt.

Then we decided to research both options in parallel. For a number of reasons, the evaluation was timeboxed to three weeks. Both ideas had a couple of points in common: the NFS data would be stored on our Ceph farm via Cinder volumes, and we would rely on Ceph reliability to avoid using DRBD. Another open topic was how to back up data from Ceph, to store our important bits in more than one basket. We will get to the backup topic later.

The manila experiment

The Wikimedia Foundation was an early adopter of some Openstack components (Nova, Glance, Designate, Horizon), but Manila was never evaluated for usage until now. Our approach for this experiment was to closely follow the upstream guidelines. We read the documentation and tried to understand the different setups you can build with Manila. As we often feel with other Openstack components, the documentation doesn’t perfectly describe how to introduce a given component into your particular local setup. Here we use an admin-controlled flat-topology Neutron network. This network is shared by all tenants (or projects) in our Openstack deployment. Also, Manila can use many different driver backends, for things like NetApp or CephFS (which we don’t use…, yet). After some research, the generic driver was the one that seemed to best fit our use case. The generic driver leverages Nova virtual machine instances plus Cinder volumes to create and manage the shares. In general, Manila supports two operational modes, depending on whether it should create/destroy the share servers (i.e., the virtual machine instances) or not. This option is called driver_handles_share_servers (or DHSS) and takes a boolean value.

We were interested in trying with DHSS=true, to really benefit from the potential of the setup.
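In manila.conf terms, that choice boils down to a couple of lines per backend. The following is only a sketch with illustrative values, not our production configuration; the option names are Manila's own:

[DEFAULT]
enabled_share_backends = generic

[generic]
share_backend_name = generic
share_driver = manila.share.drivers.generic.GenericShareDriver
# DHSS=true: Manila creates and destroys the Nova service instances itself
driver_handles_share_servers = true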

Manila diagram NFS idea 6, original image in Wikitech

So, after sorting all these variables, we moved on with our initial testing. We built a PoC setup as depicted in the diagram above, with the manila-share component running in a virtual machine inside the cloud. The PoC led to us reporting several bugs upstream:

In some cases we tried to address these bugs ourselves:

It’s worth mentioning that the upstream community was extra-welcoming to us, and we’re thankful for that. However, at the end of our three-week period, our Manila setup still wasn’t working as expected. Your experience may vary with other drivers, perhaps the ZFSonLinux or CephFS ones. In general, we were having trouble making the setup work as expected, so we decided to abandon this approach in favor of the other option we were considering at the beginning.

Simple virtual machine serving NFS

The alternative was to create a Nova virtual machine instance by hand and configure it using puppet. We have been investing in an automation framework lately, so the idea is to not actually create the server by hand. Anyway, the data would be decoupled from the instance into Cinder volumes, which led us to the question we left for later: how should we back up those terabytes of important information? Just to be clear, the backup problem was independent of the above options; with Manila we would still have had to solve the same challenge. We would like to see our data backed up somewhere other than Ceph. And that’s exactly where we are right now. We’ve been exploring different backup strategies and will finally use the Cinder backup API.

Conclusion

The iteration will end with the dedicated NFS hardware servers being stopped, and the shares being served from within the cloud. The migration will take some time to happen because we will check and double-check that everything works as expected (including from the performance point of view) before making definitive changes. We already have some plans to make sure our users experience as little service impact as possible. The most troublesome shares will be those related to Toolforge. At some point we will need to disallow writes to the NFS share, rsync the data out of the hardware servers into the Cinder volumes, point the NFS clients to the new virtual machines, and then enable writes again. The main Toolforge share has about 8TB of data, so this will take a while.

We will have more updates in the future. Who knows, perhaps our next-next iteration, in a couple of years, will see us adopting Openstack Manila for good.

Featured image credit: File:(from break water) Manila Skyline – panoramio.jpg, ewol, CC BY-SA 3.0

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

20 October, 2021 09:47AM

Russell Coker

Strange Apache Reload Issue

I recently had to renew the SSL certificate for my web server. Nothing exciting about that, but Certbot created a new directory for the key because I had removed some domains (they moved to a different web server). This normally isn’t a big deal: change the Apache configuration to the new file names and run the “reload” command. My monitoring system initially said that the SSL certificate wasn’t going to expire in the near future, so it looked fine. Then an hour later my monitoring system told me that the certificate was about to expire; apparently the old certificate had come back!

I viewed my site with my web browser and the new certificate was being used, which seemed strange. Then I did more tests with gnutls-cli, which revealed that exactly half the connections got the new certificate and half got the old one. Because my web server isn’t doing anything particularly demanding, the mpm_event configuration only starts 2 servers, and even that may be excessive for what it does. So it seems that the Apache reload command had reloaded the configuration on one mpm_event server but not the other!
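A quick way to run that kind of test is to sample a series of connections and print the expiry date each one presents. A sketch with a placeholder host name, using only standard gnutls-cli and openssl options:

for i in $(seq 1 10); do
    # --print-cert emits the peer chain in PEM; openssl reads the first cert
    echo | gnutls-cli --print-cert www.example.com 2>/dev/null |
        openssl x509 -noout -enddate
done

With two mpm_event processes accepting connections, roughly half the lines would show the old notAfter date.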

Fortunately this was easy to test for, and it was tested automatically. If the change that didn’t get applied had been something small, it would have been a particularly insidious bug.

I haven’t yet tried to reproduce this. But if I get the time I’ll do so and file a bug report.

20 October, 2021 01:41AM by etbe

October 19, 2021


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, September 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

Folks from the LTS team, along with members of the Debian Android Tools team and Phil Morrel, have proposed work on the Java build tool gradle, which is currently blocked due to the need to build with a plugin not available in Debian. The LTS team reviewed the project submission and it has been approved. We have since created a Request for Bids, which is active now.

You’ll hear more about this through official Debian channels, but in the meantime, if you feel you can help with this project, please submit a bid. Thanks!

This September, Freexian set aside 2550 EUR to fund Debian projects.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In September, 15 contributors were paid to work on Debian LTS. Their reports are available:

  • Abhijith PA has returned his hours and marked himself inactive, at least for the time being. He did 0h out of 14h, carried over 14h and returned 28h.
  • Adrian Bunk did 19.5h (out of 24.75h assigned and 12.75h from August), carrying over 18h to October.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 2h (out of 12.75h assigned and 19.25h from August), thus carrying over 30h to October.
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did not report back about their work, so we assume they did nothing (out of 5.5h assigned plus 74.5h from August), thus carrying over 80h to October.
  • Holger Levsen did 3h (out of 12h assigned), gave back 9h and carried over 3h.
  • Jeremiah Foster worked 10h (out of 20h assigned) on LTS work, carrying over 10h.
  • Lee Garrett did not report back about their work, so we assume they did nothing (out of 24.75h assigned and 23.75h from August), thus carrying over 48.5h to October.
  • Markus Koschany did 43.5h (out of 24.75h assigned and 18.75h from August).
  • Neil Williams did 24.5h (out of 24.75h assigned).
  • Roberto C. Sánchez did 6h (out of 24.75h assigned) and gave back 18.75h.
  • Sylvain Beucler did 27h (out of 24.75h assigned).
  • Thorsten Alteholz did 24.75h (out of 24.75h assigned).
  • Utkarsh Gupta did 24.75h (out of 24.75h assigned) but did not publish his report yet.
  • Ola Lundqvist did 2h (out of 21h carried over from previous months), thus carrying over 19h to October.

Evolution of the situation

In September we released 30 DLAs. September was also the second month of Jeremiah coordinating LTS contributors.

Also, we would like to say that we are always looking for new contributors to LTS. Please contact Jeremiah if you are interested!

The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

19 October, 2021 09:09AM by Raphaël Hertzog


Dirk Eddelbuettel

RVowpalWabbit 0.0.16: One More CRAN Request

Another maintenance release of RVowpalWabbit brings us to version 0.0.16 on CRAN. This is the last package for which configure.ac needed an update to current standards (see the updates of corels, RcppGSL, RQuantLib, and littler). To make matters more interesting, we also had to address one UBSAN issue we could not reproduce locally (which, it turns out, was our fault because we had not rebuilt one package dependency under UBSAN). But Prof Ripley confirmed the issue as addressed, so all is good for now.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into Vowpal Wabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 October, 2021 02:16AM

October 18, 2021


Gunnar Wolf

raspi.debian.net now hosted on Debian infrastructure

So, since I registered the URL for serving the unofficial Debian images for the Raspberry Pi computers, raspi.debian.net, in April 2020, I had been hosting it in my Dreamhost webspace.

Over two years ago –yes, before I finished setting it up in Dreamhost– Steve McIntyre approached me and invited me to host the images under the Debian cdimages user group. I told him I’d first just get the setup running, and later I would approach him for finalizing the setup.

Then, I set up the build on my own server, hosted on my Dreamhost account… and forgot about it for many months. Last month, there was a not particularly happy flamewar on debian-arm@lists.debian.org that finished with me stating I would be moving the hosting to Debian infrastructure soon.

Well… It took me a bit over a month to get this sorted out, together with several days of half-broken links, but it is finally done: raspi.debian.net is a CNAME for ftp.acc.umu.se, which is the same system that hosts cdimage.debian.org.

And, of course — it is also reachable as https://cdimage.debian.org/cdimage/unofficial/raspi/ — looks more official, but is less memorable 😉

Thanks a lot to Steve for the nudging, and to maswan for helping finalize the setup.

What next? Well, the images are being built on my server. I’d love to move the builder over to Debian machines as well. When? How? That’s still in the air.

18 October, 2021 06:49PM

Russ Allbery

rra-c-util 10.0

It's been a while since I pushed out a release of my collection of utility libraries and test suite programs, so I've accumulated quite a lot of changes. Here's a summary; for more, see the NEWS file.

  • Remove RRA_SET_LIBDIR macro, previously used by PAM modules to guess the install path.
  • Fix RRA_SET_LDFLAGS to always use the multilib directory if it exists, required for OpenSSL 3.0 support.
  • Perl tests and test support libraries now require Perl 5.10 or later.
  • Update RRA_LIB_PYTHON to use sysconfig by preference.
  • Fix SUN_LEN on macOS X Big Sur.
  • Add support for testing pam_end arguments to the PAM testing framework.
  • Fix error handling in portable k_haspag replacement if getgroups fails.
  • Avoid deprecation warnings in Autoconf macros from Autoconf 2.71.
  • Work around fallout for Perl::Critic tests from the renaming of Perl::Critic::Freenode to Perl::Critic::Community, and suppress warnings that I haven't found useful.
  • Tweak the perltidy configuration.
  • Add a new t/docs/changes.t Perl module test for the format of Changes files.

You can get the latest version from the rra-c-util distribution page.

18 October, 2021 04:57AM


Dirk Eddelbuettel

dang 0.0.14: Several Updates

A new release of the dang package arrived at CRAN a couple of hours ago, exactly eight months after the previous release. The dang package regroups a few functions of mine that had no other home, for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor from the last release, as well as the checkCRANStatus() function recently tweeted about by Tim Taylor.

This release regroups a few small edits to several functions, adds a sample function for reading and converting character encodings using a library already used by R (hence “look Ma, no new depends”), adds a weekday helper, a sample usage (computing rolling min/max values) of a new simple vector class added to tidyCpp (the function and class deserve another blog post or study …), and an experimental git sha1sum and date marker (as I am not a fan of autogenerated binaries from repos as opposed to marked releases, meaning we may see different binary releases with the same version number).

The full NEWS entry follows.

Changes in version 0.0.14 (2021-10-17)

  • Updated continuous integration to run on Linux only.

  • Edited checkNonAscii.cpp for readability.

  • More robust title display in intradayMarketMonitor.R.

  • New C++-based function to read and convert encoding via the R-supplied iconv library, noted a potential variability.

  • New function wday returning day of the week as integer.

  • The signature to as.data.table was standardized.

  • A new function rollMinMax was added illustrating use of the NumVec class from tidyCpp.

  • The configure script can record the last commit date and sha1 to automate timestamping builds, but this is not activated in this release.

  • checkCRANStatus() now works correctly for single-package lookups (Jordan Mark Barbone in #4).

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 October, 2021 01:41AM

October 17, 2021

Russ Allbery

pam-krb5 4.11

The primary change in this release of my Kerberos PAM module is support for calling pam_end with PAM_DATA_SILENT. I had not known that the intent of this flag was to signal that only process resources were being cleaned up and external resources should not be (in part because an older version of the man page doesn't make this clear).

This flag is used when a process forks with an open PAM library handle and wants to clean it up in the child process. In previous versions, this would delete the user's ticket cache, which is not the desired behavior. This version correctly leaves the ticket cache alone.
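On the application side, the calling pattern this supports looks roughly like the following sketch; the Linux-PAM API calls are real, the surrounding function is invented:

#include <security/pam_appl.h>

/* In a child that inherited pamh from its parent: free only process-local
 * PAM resources, leaving external state (such as the Kerberos ticket
 * cache) for the parent to clean up. */
static void
child_cleanup(pam_handle_t *pamh, int last_status)
{
    pam_end(pamh, last_status | PAM_DATA_SILENT);
}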

The implementation required some improvements to the PAM testing framework to support this case as well.

The other significant change in this release is that the build system no longer attempts to guess the correct PAM module installation path and instead documents that to install the module in a Linux system PAM module path, you will probably need to set --libdir explicitly. The logic used to decide between Debian and Red Hat multiarch paths broke in the presence of Debian usrmerge systems and was incredibly fragile even before that, so I've now dropped it completely.

You can get the latest version from the pam-krb5 distribution page.

17 October, 2021 11:00PM

October 15, 2021

Sven Hoexter

ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

I wasted quite some hours until I found a working Modeline in this Stack Exchange post, so that the ThinkPad works with an HDMI-attached Samsung QHD display.

The internal display of the ThinkPad is an FHD panel detected as eDP-1; the external one is DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD (2560x1440) did not work at all; the display simply stays dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:

xrandr --newmode 2560x1440_54.97  221.00  2560 2608 2640 2720  1440 1443 1447 1478  +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary

Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work; neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log.
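If the xrandr invocation works, one way to persist it might be an Xorg monitor section instead of rerunning the commands on every login; a sketch, assuming the stock xorg.conf.d mechanism (untested here, and the nvidia driver may override it):

Section "Monitor"
    Identifier "DP-3"
    Modeline "2560x1440_54.97"  221.00  2560 2608 2640 2720  1440 1443 1447 1478 +HSync -VSync
    Option "PreferredMode" "2560x1440_54.97"
EndSection

saved as e.g. /etc/X11/xorg.conf.d/10-samsung-qhd.conf.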

Of the auto-detected Modelines, FHD (1920x1080) did work. In case someone struggles with a similar setup, that might be a starting point. Fun part: if I attach my several-years-old Dell E7470, everything is just fine out of the box. But that one just has an Intel GPU, and not the unholy combination I have here:

$ lspci|grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)

15 October, 2021 02:12PM


Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

15 October, 2021 01:20PM by admin

October 13, 2021


Dirk Eddelbuettel

GitHub Streak: Round Eight

Seven years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And six years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And five years ago we had a follow-up at 1096 days:

github activity october 2015 to october 2016

And four years ago we had another one marking 1461 days:

github activity october 2016 to october 2017

And three years ago another one for 1826 days:

github activity october 2017 to october 2018

And two years ago another one brought it to 2191 days:

github activity october 2018 to october 2019

And last year another one brought it to 2557 days:

github activity october 2019 to october 2020

And as today is October 12, here is the newest one, from 2020 to 2021, with a new total of 2922 days:

github activity october 2020 to october 2021

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 02:44AM

RcppQuantuccia 0.0.4 on CRAN: Updated Calendar

A new release of RcppQuantuccia arrived on CRAN earlier today. RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release is the first in two years and brings a few internal updates (such as a switch of continuous integration to the trusted r-ci setup) along with a first update of the United States calendar. Just like RQuantLib, it now knows about two new calendars, LiborUpdate and FederalReserve. So now we can for example look for holidays during June of next year under the ‘Federal Reserve’ calendar and see

> library(RcppQuantuccia)
> setCalendar("UnitedStates/FederalReserve")
> getHolidays(as.Date("2022-06-01"), as.Date("2022-06-30"))
[1] "2022-06-20"
> 

that Juneteenth 2022 will be observed on (Monday) June 20th.

We should note that Quantuccia itself was a bit of a trial balloon and is not actively maintained, so we may concentrate on these calendaring functions to keep them in sync with QuantLib. Being a header-only subset is good, and the removal of the (very !!) “expensive” (in terms of compiled library size) Sobol sequence-based RNG in release 0.0.3 was the right call. So, time permitting, a leaner, meaner RcppQuantuccia with a calendaring focus may emerge.

The complete list of changes follows.

Changes in version 0.0.4 (2021-10-12)

  • Allow for 'Null' calendar without weekends or holidays

  • Switch CI use to r-ci

  • Updated UnitedStates calendar to current QuantLib calendar

  • Small updates to DESCRIPTION and README.md

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 12:38AM

October 12, 2021


Steinar H. Gunderson

Apache bug with mpm-itk

It seems there's a bug in Apache 2.4.49 (or newer) and mpm-itk; any forked child will segfault instead of exiting cleanly. This is, well, aesthetically not nice, and also causes problems with exit hooks for certain modules not being run.

It seems Apache upstream is on the case; from my limited understanding of the changes, there's not a lot mpm-itk as an Apache module can do here, so we'll just have to wait for upstream to deal with it. I hope we can get whatever fix in as a regression update to bullseye-security, though :-)

12 October, 2021 10:23PM

Antonio Terceiro

Triaging Debian build failure logs with collab-qa-tools

The Ruby team is now working on transitioning to Ruby 3.0. Even though most packages will work just fine, there is a substantial number of packages that require some work to adapt. We have been doing test rebuilds for a while during transitions, but usually triaged the problems manually.

This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools.

I have made some improvements to the code. Given that my last merge request is very new and not merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch.

collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools.

Installing collab-qa-tools

The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries).

One of the patches I sent, which was already accepted, adds the ability to run the tools without needing to install them:

source /path/to/collab-qa-tools/activate.sh

This will add the tools to your $PATH.

Preparation

The first thing you need to do is get all your build logs into a directory. The tools assume a .log file extension, and the logs can be named ${PACKAGE}_*.log or just ${PACKAGE}.log.

Creating a TODO file

cqa-scanlogs | grep -v OK > todo

todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in build logs.

It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, …, each containing 30 lines:

split --lines=30 --numeric-suffixes todo todo

Triaging

You can now do the triaging. Let's say we split the TODO files, and will start with todo01.

The first step is calling cqa-fetchbugs (it does what it says on the tin):

cqa-fetchbugs --TODO=todo01

Then, cqa-annotate will guide you through the logs and allow you to report bugs:

cqa-annotate --TODO=todo01

I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:

#!/bin/sh

set -eu

for todo in $@; do
  # force downloading bugs
  awk '{print(".bugs." $1)}' "${todo}" | xargs rm -f
  cqa-fetchbugs --TODO="${todo}"

  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="${todo}"
done

The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which is very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:

From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}

Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0

Hi,

We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.

To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.

Relevant part (hopefully):
{% for line in extract %}> {{ line }}
{% endfor %}

The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}

The cqa-annotate loop

cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:

######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus

     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'

Deprecation Warnings:

Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.


If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.

1 deprecation warning total

Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure

Failed examples:

rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution

/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s|i|r|f]:

You can then choose one of the options:

  • s - skip this package and do nothing. You can run cqa-annotate again later and come back to it.
  • i - ignore this package completely. New runs of cqa-annotate won't ask about it again.

    This is useful if the package only fails in your rebuilds due to another package, and would just work when that other package gets fixed. In the Ruby transition this happens when A depends on B, and B builds a C extension and fails to build against the new Ruby. So once B is fixed, A should just work (in principle). But even if A had problems of its own, we can't really know that before B is fixed, so we can retry A then.

  • r - report a bug. cqa-annotate will expand the template with the data from the current log, and feed it to mutt. This is currently a limitation: you have to use mutt to report bugs.

    After you report the bug, cqa-annotate will ask if it should edit the TODO file. In my opinion it's best to not do this, and annotate the package with a bug number when you have one (see below).

  • f - view the full log. This is useful when the extract displayed doesn't have enough info, or you want to inspect something that happened earlier (or later) during the build.

When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number, and new runs of cqa-annotate won't ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with a ctrl-c, and when I ran my process.sh script again I got this prompt:

----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus ||
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s|i|1|r|f]:

Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred others to go.

12 October, 2021 08:30AM

October 10, 2021


Ben Hutchings

Debian LTS work, September 2021

In September I was assigned 12.75 hours of work by Freexian's Debian LTS initiative and carried over 18 hours from earlier months. I worked 2 hours and will carry over the remainder.

I started work on an update to the linux package, but did not make an upload yet.

10 October, 2021 12:03PM


Norbert Preining

TeX Live contrib archive available via CTAN mirrors

The TeX Live contrib repository has for many years now been a valuable source of packages that cannot enter TeX Live proper due to license restrictions etc. I took over its maintenance from Taco in 2017, and since then the repository has been available via my server. As of a few weeks ago, tlcontrib is also available via the CTAN mirror network, the Comprehensive TeX Archive Network.

Thanks to the CTAN team, who offered to mirror tlcontrib, users can get much faster (and more reliable) access via the mirrors, by adding tlcontrib as an additional repository source for tlmgr, either permanently via:

tlmgr repository add https://mirrors.ctan.org/systems/texlive/tlcontrib tlcontrib

or via a one-shot

tlmgr --repository https://mirrors.ctan.org/systems/texlive/tlcontrib install PACKAGE

The list of packages can be seen here, and includes, among others:

  • support for commercial fonts (lucida, garamond, …)
  • Noto condensed
  • various sets of programs around acrotex

(and much more!).

You can install all packages from the repository by installing the new collection-contrib.

Thanks to the whole CTAN team, and please switch your repositories to the CTAN mirror to take load off my server. Thanks a lot!

Enjoy.

10 October, 2021 03:39AM by Norbert Preining

October 09, 2021

Thorsten Alteholz

My Debian Activities in September 2021

FTP master

This month I accepted 224 and rejected 47 packages. This is almost thrice the rejects of last month. Please, be more careful and check your package twice before uploading. The overall number of packages that got accepted was 233.

Debian LTS

This was the eighty-seventh month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 24.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2755-1] btrbk security update for one CVE
  • [DLA 2762-1] grilo security update for one CVE
  • [DLA 2766-1] openssl security update for one CVE
  • [DLA 2774-1] openssl1.0 security update for one CVE
  • [DLA 2773-1] curl security update for two CVEs

I also started to work on exiv2 and faad2.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-ninth ELTS month.

Unfortunately during my allocated time I could not process any upload. I worked on openssl, curl and squashfs-tools but for one reason or another the prepared packages didn’t pass all tests. In order to avoid regressions, I postponed the uploads (meanwhile an ELA for curl was published …).

Last but not least I did some days of frontdesk duties.

Other stuff

On my never-ending golang challenge I again uploaded some packages, either for NEW or as source uploads.

As Odyx took a break from all Debian activities, I volunteered to take care of the printing packages. Please be merciful if something breaks after one of my uploads. My first printing upload was hplip.

09 October, 2021 07:42PM by alteholz


Ritesh Raj Sarraf

Lotus to Lily

The Lotus story so far

My very first experience with water flowering plants was pretty good. I learnt a good deal of things: from setting up the pond, germinating the lotus seeds and choosing the right soil, to witnessing the growth of the lotus plant and the fish ecosystem that takes care of the pond. Overall, a lot of things learnt.

But I couldn’t succeed in getting the Lotus flower, for many reasons. The granite container developed some leakage, which I had to fix by emptying it, and that might have caused some shock to the lotus. But more than that, in my understanding, the reason for not being able to flower the lotus was the amount of sunlight. From what I have learned, these plants need a minimum of 6-8 hrs of sunlight to really reward you with flowers, whereas my pond was set up on the ground with hardly 3-4 hrs of sun. And even that, with all the plants growing around, was mostly indirect sunlight.

Lotus to Lily

For my new setup, I chose a large oval container. And this one, I placed on my terrace, carefully choosing a spot where it’d get 6-8 hrs of very bright sun on usual days. Other than that, the rest of the setup is pretty similar to my previous setup in the garden. Guppies, Solar Water Fountain etc.

Initial lily pond setup

The good thing about the terrace is that the setup gets an ample amount of sun. You can see that in the picture above, from the amount of algae that has formed, something that is vital for the plant’s ecosystem.

I must thank my wonderful neighbor who kindly shared a sapling from their lily plant. They had already had success getting the lily to flower, so I had high hopes of seeing the day come when I'd be happy to write down my experience in this blog post. A lot of patience was needed, though. I got the lily some time in January this year, and it blossomed now, in October.

So, here's me sharing my happiness, in the order in which I documented the process.

Monday morning greeted with a blossomed lily

Monday morning greeted with a blossomed lily

Lily Blossom Closeup

Lily Blossom Closeup

Beautiful water reflection

Beautiful water reflection

Dawn to Dusk

The other thing that I learned in this whole lily episode is that the flower goes back to sleep at dusk, and back to flowering again at dawn. There's so much to learn in our surroundings, if only you spare some time for the little things of mother nature.

Lily status at dusk

Lily status at dusk

Lily the next day

Lily the next day

Not sure how long this phenomenon will last, but overall witnessing this whole process has been mesmerizing.

This past week has been great. 🙏🏼

09 October, 2021 04:22PM by Ritesh Raj Sarraf (rrs@researchut.com)

October 08, 2021

hackergotchi for Neil Williams

Neil Williams

Using Salsa with contrib and non-free

OK, I know contrib and non-free aren't popular topics for many, but I've had to sort out some simple CI for such contributions and I thought it best to document how to get it working. You will need access to the GitLab settings for the project in Salsa - or ask someone to add some CI/CD variables on your behalf. (If CI isn't running at all, the settings will need to be modified to enable debian/salsa-ci.yml first, in the same way as for packages in main.)

The default Salsa config (debian/salsa-ci.yml) won't get a passing build for packages in contrib or non-free:

# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

Variables need to be added. piuparts can use the extra contrib and non-free components directly from these variables.

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'

Many packages in contrib and non-free only support amd64 - so the i386 build job needs to be removed from the pipeline by extending the variables dictionary:

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1
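Taken together, the complete debian/salsa-ci.yml for an amd64-only package in contrib or non-free is then just the snippets above combined:

# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1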

The extra step is to add the apt source file variable to the CI/CD settings for the project.

The CI/CD settings are at a URL like:

https://salsa.debian.org/<team>/<project>/-/settings/ci_cd

Expand the section on Variables and add a File type variable:

Key: SALSA_CI_EXTRA_REPOSITORY

Value: deb https://deb.debian.org/debian/ sid contrib non-free

The pipeline will run at the next push - alternatively, the CI/CD pipelines page has a "Run Pipeline" button. The settings added to the main CI/CD settings will be applied, so there is no need to add a variable at this stage. (This can be used to test the variables themselves but only with manually triggered CI pipelines.)

For more information and additional settings (for example disabling or allowing certain jobs to fail), check https://salsa.debian.org/salsa-ci-team/pipeline

08 October, 2021 03:45PM by Neil Williams

hackergotchi for Chris Lamb

Chris Lamb

Reproducible Builds: Increasing the Integrity of Software Supply Chains (2021)

I didn't blog about it at the time, but a paper I co-authored with Stefano Zacchiroli was accepted by IEEE Software in April of this year. The paper is titled Reproducible Builds: Increasing the Integrity of Software Supply Chains, and its abstract is as follows:

Although it is possible to increase confidence in Free and Open Source Software (FOSS) by reviewing its source code, trusting code is not the same as trusting its executable counterparts. These are typically built and distributed by third-party vendors with severe security consequences if their supply chains are compromised.

In this paper, we present reproducible builds, an approach that can determine whether generated binaries correspond with their original source code. We first define the problem and then provide insight into the challenges of making real-world software build in a "reproducible" manner — that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).

The full text of the paper can be found in PDF format and should appear, with an alternative layout, within a forthcoming issue of the physical IEEE Software magazine.

08 October, 2021 02:22PM

Reproducible Builds (diffoscope)

diffoscope 187 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 187. This version includes the following changes:

* Add support for comparing .pyc files. Thanks to Sergei Trofimovich.
  (Closes: reproducible-builds/diffoscope#278)

You can find out more by visiting the project homepage.

08 October, 2021 12:00AM

October 07, 2021

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Sharing mentoring a new Debian contributor experience, lots of fun

I recently mentored a new Debian contributor. This was carried out within the OSS Gate on-boarding framework.

oss-gate.github.io

In "OSS Gate on-boarding", recruit a new contributor who want to work on continuously. Then, corporation sponsor its employee as a mentor. Thus, employees can do it as a one of their job.

During the Aug-Oct period, I worked with a new Debian contributor for two hours each week. This experience was lots of fun, and I learned new things myself.

The most important point is that the new Debian contributor aims to continue their work even now that the mentoring period has finished.

Some of the work has been finished, but not all of it, so I tried to transfer the knowledge needed to carry it forward.

I'm looking forward to seeing him move things forward with other people's help.

Here is the report about my activity as a mentor.

First OSS Gate onboarding (the article is written in Japanese)

The original blog entry is written in Japanese; I can't afford to translate it, so I've just pasted a link to Google Translate as a hint.

I hope someone else can make a similar attempt too!

For the record, I worked with a new Debian contributor about:

07 October, 2021 08:19AM

October 06, 2021

hackergotchi for Matthew Palmer

Matthew Palmer

Discovering AWS IAM accounts

Let’s say you’re someone who happens to discover an AWS account number, and would like to take a stab at guessing what IAM users might be valid in that account. Tricky problem, right? Not with this One Weird Trick!

In your own AWS account, create a KMS key and try to reference an ARN representing an IAM user in the other account as the principal. If the policy is accepted by PutKeyPolicy, then that IAM user exists, and if the error says “Policy contains a statement with one or more invalid principals” then the user doesn’t exist.

As an example, say you want to guess at IAM users in AWS account 111111111111. Then make sure this statement is in your key policy:

{
  "Sid": "Test existence of user",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:user/bob"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*"
}

If that policy is accepted, then the account has an IAM user named bob. Otherwise, the user doesn’t exist. Scripting this is left as an exercise for the reader.
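For the impatient, here's a minimal sketch of that exercise using the AWS CLI (the key ID, template file and wordlist names are all hypothetical; note that the policy file must also keep a statement allowing your own account to administer the key, or PutKeyPolicy will reject it to protect you from locking yourself out):

#!/bin/sh
# Hypothetical sketch: probe for IAM users in account 111111111111.
# policy-template.json holds the statement shown above (with USERNAME as a
# placeholder in the principal ARN) plus an admin statement for your own account.
KEY_ID="00000000-0000-0000-0000-000000000000"   # a KMS key in *your* account
while read -r name; do
    sed "s/USERNAME/$name/" policy-template.json > /tmp/probe-policy.json
    if aws kms put-key-policy --key-id "$KEY_ID" --policy-name default \
           --policy file:///tmp/probe-policy.json >/dev/null 2>&1; then
        echo "IAM user exists: $name"
    fi
done < names.txt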

Sadly, wildcards aren’t accepted in the username portion of the ARN, otherwise you could do some funky searching with ...:user/a*, ...:user/b*, etc. You can’t have everything; where would you put it all?

I did mention this to AWS as an account enumeration risk. They’re of the opinion that it’s a good thing you can know what users exist in random other AWS accounts. I guess that means this is a technique you can put in your toolbox safe in the knowledge it’ll work forever.

Given this is intended behaviour, I assume you don’t need to use a key policy for this, but that’s where I stumbled over it. Also, you can probably use it to enumerate roles and anything else that can be a principal, but since I don’t see as much use for that, I didn’t bother exploring it.

There you are, then. If you ever need to guess at IAM users in another AWS account, now you can!

06 October, 2021 11:30PM by Matt Palmer (mpalmer@hezmatt.org)

hackergotchi for Thomas Goirand

Thomas Goirand

OpenStack Xena, the 24th OpenStack release, is out

It was out at 3pm, and I managed to finish uploading the last bits to Unstable at 9pm… Of course, that’s because all of the packaging and testing work was done before the release date. All of it is, as usual, also available through a Bullseye non-official backports repository that can be added using extrepo (ie: “extrepo enable openstack_xena”).

06 October, 2021 09:26PM by Goirand Thomas

Infomaniak launches its public IaaS cloud with ground breaking prices

My employer, the biggest Swiss server hosting company, Infomaniak, has just opened registration for its new IaaS (Infrastructure as a Service) OpenStack-based public cloud. Well, in fact, it's been open for a week or so. Previously, it was only in beta (during that beta period, we hosted (for free) the whole Debconf 21 infrastructure). Nothing really new in the market, except that it is by far cheaper than most (if not all) of its (OpenStack-based or not) competitors, including AWS, GCE or Azure.

Also, everything is hosted in Switzerland, in our own data centers, where data protection is written in the law (and Infomaniak often advertises about data privacy: this is real here…).

Not only is Infomaniak (by far…) the cheapest offer on the market (including a 300 CHF free tier: enough for our smallest VM for a full year), but we also have very good technical support, and the hardware we use is top notch:

  • 6th Gen NVMe (read intensive) Ceph-based block devices
  • AMD Epyc CPU (128 threads per server)
  • 2x 25Gbits/s (using BGP-to-the-host networking)

Some of our customers didn’t even believe how we could do such pricing. Well, the reason is simple: most of our competitors are simply really overpriced, and are making too much money. Since we’re late to the market, and newer hardware (with many cores on a single server) makes it possible to increase density without too much over-commit, my bosses decided that since we could, we would be the cheapest! Hopefully, this will work as a good business strategy.

All of that public cloud infrastructure has been set up with OpenStack Cluster Installer, for which I’m the main author, and which is fully in Debian. All of this is running on a plain, unmodified Debian Bullseye (well, with a few OpenStack packages a little bit more up-to-date, but really not much, and all of that is publicly available…).

Last, choosing the cheapest and best offer is also a good action: it promotes OpenStack and cloud computing in Debian, which I believe is the least vendor locked-in IaaS solution.

06 October, 2021 09:23PM by Goirand Thomas

Reproducible Builds

Reproducible Builds in September 2021

The goal behind “reproducible builds” is to ensure that no deliberate flaws have been introduced during compilation processes, via promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised or not via a system of distributed consensus.

In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:


Martin Heinz published two blog posts on sigstore, a project first mentioned in our March 2021 report, which endeavours to offer software signing as a “public good, [the] software-signing equivalent to Let’s Encrypt”. The first post, entitled Sigstore: A Solution to Software Supply Chain Security, outlines more about the project and justifies its existence:

Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult especially for non-security folks and UX of existing tools such as PGP leave much to be desired. That’s why we need something like sigstore - an easy to use software/toolset for signing software artifacts.

The second post (titled Signing Software The Easy Way with Sigstore and Cosign) goes into some technical details of getting started.


There was an interesting thread in the /r/Signal subreddit that started from the observation that Signal’s apk doesn’t match the source code:

Some time ago I checked Signal’s reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn’t match either.


BitcoinBinary.org was announced this month, which aims to be a “repository of Reproducible Build Proofs for Bitcoin Projects”:

Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people who can tell everyone they were able to reproduce the project’s build, others at least have a secondary source of validation.


Distribution work

Frédéric Pierret announced a new testing service at beta.tests.reproducible-builds.org, showing actual rebuilds of binaries distributed by both the Debian and Qubes distributions.

In Debian specifically, 51 reviews of Debian packages were added, 31 were updated and 31 were removed this month in our database of classified issues. As part of this, Chris Lamb refreshed a number of notes, including the build_path_in_record_file_generated_by_pybuild_flit_plugin issue.

Elsewhere in Debian, Roland Clobus posted his Fourth status update about reproducible live-build ISO images in Jenkins to our mailing list, which mentions (amongst other things) that:

  • All major configurations are still built regularly using live-build and bullseye.
  • All major configurations are reproducible now; Jenkins is green.
    • I’ve worked around the issue for the Cinnamon image.
    • The patch was accepted and released within a few hours.
  • My main focus for the last month was on the live-build tool itself.

Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian “live” images which were being worked on by Roland Clobus.


Ariadne Conill published another detailed blog post related to various security initiatives within the Alpine Linux distribution. After summarising some conventional security work being done (eg. with sudo and the release of OpenSSL version 3.0), Ariadne included another section on reproducible builds: “The main blocker [was] determining what to do about storing the build metadata so that a build environment can be recreated precisely”.

Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


Community news

On our website this month, Bernhard M. Wiedemann fixed some broken links [] and Holger Levsen made a number of changes to the Who is Involved? page [][][]. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:

I’m working for Oracle in the Build Group for OpenJDK which is primary responsible for creating a built artifact of the OpenJDK source code. […] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself building reproducible. We’ve come far, but there are still issues I’d like to address. []


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185 as well as performed significant triaging of merge requests and other issues in addition to making the following changes:

  • New features:

    • Support a newer format version of the R language’s .rds files. []
    • Update tests for OCaml 4.12. []
    • Add a missing format_class import. []
  • Bug fixes:

    • Don’t call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer’s cleanup routines were rightfully assuming that its temporary directory had actually been created. []
    • Fix (and test) the comparison of R language’s .rdb files after refactoring temporary directory handling. []
    • Ensure that “RPM archives” exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. []
  • Codebase improvements:

    • Use our assert_diff routine in tests/comparators/test_rdata.py. []
    • Move diffoscope.versions to diffoscope.tests.utils.versions. []
    • Reformat a number of modules with Black. [][]

However, the following changes were also made:

  • Mattia Rizzolo:

    • Fix an autopkgtest caused by the androguard module not being in the (expected) python3-androguard Debian package. []
    • Appease a shellcheck warning in debian/tests/control.sh. []
    • Ignore a warning from h5py in our tests that doesn’t concern us. []
    • Drop a trailing .1 from the Standards-Version field as it’s not required. []
  • Zbigniew Jędrzejewski-Szmek:

    • Stop using the deprecated distutils.spawn.find_executable utility. [][][][][]
    • Adjust an LLVM-related test for LLVM version 13. []
    • Update invocations of llvm-objdump. []
    • Adjust a test with a one-byte text file for file version 5.40. []

And, finally, Benjamin Peterson added a --diff-context option to control unified diff context size [] and Jean-Romain Garnier fixed the Macho comparator for architectures other than x86-64 [].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Drop my package rebuilder prototype as it’s not useful anymore. []
    • Schedule old packages in Debian bookworm. []
    • Stop scheduling packages for Debian buster. [][]
    • Don’t include PostgreSQL debug output in package lists. []
    • Detect Python library mismatches during build in the node health check. []
    • Update a note on updating the FreeBSD system. []
  • Mattia Rizzolo:

    • Silence a warning from Git. []
    • Update a setting to reflect that Debian bookworm is the new testing. []
    • Upgrade the PostgreSQL database to version 13. []
  • Roland Clobus (Debian “live” image generation):

    • Workaround non-reproducible config files in the libxml-sax-perl package. []
    • Use the new DNS for the ‘snapshot’ service. []
  • Vagrant Cascadian:

    • Note that the armhf architecture also systematically varies by the kernel. []


Contributing

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

06 October, 2021 04:56PM

October 05, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

plocate 1.1.12 released

plocate 1.1.12 has been released, with some minor bugfixes and a minor new option.

More interesting is that plocate is now one year old! plocate 1.0.0 was released October 11th, 2020, so I'm maybe getting a bit ahead of myself, but it feels like a good milestone. I haven't really achieved my goal of being in the default Debian install, simply because there is too much resistance to having a default locate at all, but it's now hit most major distributions (thanks to a host of packagers) and has largely supplanted mlocate in general.

plocate still feels to me like the obvious way of doing a locate; like, “why didn't anyone just do this earlier”. But io_uring really couldn't have been done just a few years ago, and added a few very interesting touches (both as a programmer, and for users). In general, it feels like plocate is “done”; it's doing one thing, doing it well, and there's nothing obvious I'm missing. (I keep getting bug reports, but they're getting increasingly obscure, and it's more like a trickle than a flood.) But I'd still love for basic UNIX tools to care more about performance—our data sets are bigger than ever, yet we wrote the tools for a time when our systems had just a few thousand files. The old UNIX brute force mantra just isn't good enough in the 2020s. And we don't have the manpower (in terms of developer interest) to fix it.

05 October, 2021 10:09PM

October 04, 2021

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In August, we put aside 2460 EUR to fund Debian projects. We received a new project proposal that got approved and there’s an associated bid request if you feel like proposing yourself to implement this project.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In August, 14 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 4.0h (out of 14h assigned and 5h from July), thus carrying over 15h to September.
  • Adrian Bunk did 11h (out of 23.75h assigned), thus carrying over 12.75h to September.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 1.25h (out of 13.25h assigned and 6h from July), thus carrying over 18h to September.
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did not report back about their work, so we assume they did nothing (out of 23.75h assigned plus 50.75h from July), thus is carrying over 74.5h for September.
  • Holger Levsen did 3h (out of 12h assigned) to help coordinate the team, and gave back the remaining hours.
  • Lee Garrett did nothing (out of 23.75h assigned), thus is carrying over 23.75h for September.
  • Markus Koschany did 35h (out of 23.75h assigned and 30h from July), thus carrying over 18.75h to September.
  • Neil Williams did 24h (out of 23.75h assigned), thus anticipating 0.25h of October.
  • Roberto C. Sánchez did 22.25h (out of 23.75h assigned), thus carrying over 1.5h to September.
  • Sylvain Beucler did 21.5h (out of 23.75h assigned), thus carrying over 2.25h to September.
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).
  • Utkarsh Gupta did 23.75h (out of 23.75h assigned).

Evolution of the situation

In August we released 30 DLAs.

This is the first month of Jeremiah coordinating LTS contributors. We would like to thank Holger Levsen for his work on this role up to now.

Also, we would like to remark once again that we are constantly looking for new contributors. Please contact Jeremiah if you are interested!

The security tracker currently lists 73 packages with a known CVE and the dla-needed.txt file has 29 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

04 October, 2021 05:04PM by Raphaël Hertzog

hackergotchi for Jonathan Carter

Jonathan Carter

Free Software Activities for 2021-09

Here’s a bunch of uploads for September. Mostly catching up with a few things after the Bullseye release.

2021-09-01: Upload package bundlewrap (4.11.2-1) to Debian unstable.

2021-09-01: Upload package calamares (3.2.41.1-1) to Debian unstable.

2021-09-01: Upload package gdisk (1.0.8-2) to Debian unstable (Closes: #993109).

2021-09-01: Upload package bcachefs-tools (0.1+git20201025.742dbbdb-1) to Debian unstable (Closes: #976474).

2021-09-02: Upload package fabulous (0.4.0+dfsg1-1) to Debian unstable (Closes: #983247).

2021-09-02: Upload package feed2toot (0.17-1) to Debian unstable.

2021-09-02: Merge MR!1 for fracplanet.

2021-09-02: Upload package fracplanet (0.5.1-6) to Debian unstable (Closes: #980808).

2021-09-02: Upload package toot (0.28.0-1) to Debian unstable.

2021-09-02: Upload package toot (0.28.0-2) to Debian unstable.

2021-09-02: Merge MR!1 for gnome-shell-extension-gamemode.

2021-09-02: Merge MR!1 for gnome-shell-extension-no-annoyance.

2021-09-02: Upload package gnome-shell-extension-no-annoyance (0+20210717-12dc667) to Debian unstable (Closes: #993193).

2021-09-02: Upload package gnome-shell-extension-gamemode (5-2) to Debian unstable.

2021-09-02: Merge MR!2 for gnome-shell-extension-harddisk-led.

2021-09-02: Upload package gnome-shell-extension-pixelsaver (1.24-2) to Debian unstable (Closes: #993195).

2021-09-02: Upload package gnome-shell-extension-dash-to-panel (43-1) to Debian unstable (Closes: #993058, #989546).

2021-09-02: Upload package gnome-shell-extension-harddisk-led (25-1) to Debian unstable (Closes: #993181).

2021-09-02: Upload package gnome-shell-extension-impatience (0.4.5+git20210412-e8e132f-1) to Debian unstable (Closes: #993190).

2021-09-02: Upload package s-tui (1.1.3-1) to Debian unstable.

2021-09-02: Upload package flask-restful (0.3.9-2) to Debian unstable.

2021-09-02: Upload package python-aniso8601 (9.0.1-2) to Debian unstable.

2021-09-03: Sponsor package fonts-jetbrains-mono (2.242+ds-1) for Debian unstable (Debian Mentors request).

2021-09-03: Sponsor package python-toml (0.10.2-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package buildbot (3.3.0-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-strictyaml (1.4.4-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-absl (0.13.0-1) for Debian unstable (Python team request).

2021-09-03: Merge MR!1 for xabacus.

2021-09-03: Upload package aalib (1.4p5-49) to Debian unstable (Closes: #981503).

2021-09-03: File ROM for gnome-shell-extension-remove-dropdown-arrows (#993577, closing: #993196).

2021-09-03: Upload package bcachefs-tools (0.1+git20210805.6c42566-2) to Debian unstable.

2021-09-05: Upload package tuxpaint (0.9.26-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-config (0.17rc1-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1~exp1) to Debian experimental (Closes: #988347).

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1) to Debian experimental.

2021-09-05: Upload package tuxpaint (0.9.26-1) to Debian unstable (Closes: #942889).

2021-09-06: Merge MR!2 for connectagram.

2021-09-06: Upload package connectagram (1.2.11-2) to Debian unstable.

2021-09-06: Upload package aalib (1.4p5-50) to Debian unstable (Closes: #993729).

2021-09-06: Upload package gdisk (1.0.8-3) to Debian unstable (Closes: #993732).

2021-09-06: Upload package tuxpaint-config (0.17rc1-1) to Debian unstable.

2021-09-06: Upload package grapefruit (0.1_a3+dfsg-10) to Debian unstable.

2021-09-07: File ROM for gnome-shell-extension-hide-activities ().

2021-09-09: Upload package calamares (3.2.42-1) to Debian unstable.

2021-09-09: Upgraded peertube.debian.social to PeerTube 3.4.0.

2021-09-17: Upload calamares (3.2.43-1) to Debian unstable.

2021-09-28: Upload calamares (3.2.44.2-1) to Debian unstable.

04 October, 2021 12:50PM by jonathan

Paul Wise

FLOSS Activities September 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: reopened bugs closed by a spammer
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord/harmony/pyemd/librecaptcha/esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

04 October, 2021 04:15AM

October 03, 2021

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Human Society

In my past, I’ve had experiences that have had me thinking. My experiences have been mostly in the South Asian Indian subcontinent, so it may not be fair to generalize them.

  • Help with finding a job: I’ve learnt many times that when people reach out asking for help, say, with finding a job, it isn’t about you making a recommendation/referral for them. It instead implies that you are indirectly being asked to find and arrange a job for them.

  • Gifts for people: My impression of offering a gift to someone is presenting them with something I’ve found useful and dear to me. This is irrespective of whether the gift is a brand new item or a used (immaculate) one. On the contrary, many people define a gift as an unopened item that comes in its sealed original packaging.

03 October, 2021 10:54AM by Ritesh Raj Sarraf (rrs@researchut.com)

hackergotchi for Junichi Uekawa

Junichi Uekawa

Using podman for most of my local development environment.

Using podman for most of my local development environment. For my personal/upstream development I started using podman instead of lxc and pbuilder and other toolings. Most projects provide reasonable docker images (such as rust) and I am happier keeping my environment as a whole stable while I can iterate. I have a Dockerfile for the development environment like this:

03 October, 2021 08:02AM by Junichi Uekawa

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

ANC is not for me

Active noise cancellation (ANC) has been all the rage lately in the headphones and in-ear monitors market. It seems after Apple got heavily praised for their AirPods Pro, every somewhat serious electronics manufacturer released their own design incorporating this technology.

The first headphones with ANC I remember trying on (in the early 2010s) were the Bose QuietComfort 15. Although the concept did work (they indeed cancelled some sounds), they weren't amazing and did a great job of convincing me ANC was some weird fad for people who flew often.

The Sony WH-1000X M3 folded in their case

As the years passed, chip size decreased, battery capacity improved and machine learning blossomed — truly a perfect storm for the wireless ANC headphones market. I had mostly stayed a sceptic of this tech until recently, when a kind friend offered to let me try a pair of Sony WH-1000X M3.

Having tested them thoroughly, I have to say I'm really tempted to buy them from him, as they truly are fantastic headphones1. They are very light, comfortable, work without a proprietary app and sound very good with the ANC on2 — if a little bass-heavy for my taste3.

The ANC itself is truly astounding and is leaps and bounds beyond what was available five years ago. It still isn't perfect and doesn't cancel ALL sounds, but transforms the low hum of the subway I find myself sitting in too often these days into a light *swoosh*. When you turn the ANC on, HVAC simply disappears. Most impressive to me is the way they completely cancel the dreaded sound of your footsteps resonating in your headphones when you walk with them.

My old pair of Sennheiser HD 280 Pro, with aftermarket sheepskin earpads

I won't be keeping them though.

Whilst I really like what Sony has achieved here, I've grown to understand ANC simply isn't for me. Some of the drawbacks of ANC somewhat bother me: the ear pressure it creates is tolerable, but is an additional energy drain over long periods of time and eventually gives me headaches. I've also found ANC accentuates the motion sickness I suffer from, probably because it messes with some part of the inner ear balance system.

Most of all, I found that it didn't provide noticeable improvements over good passive noise cancellation solutions, at least in terms of how high I have to turn the volume up to hear music or podcasts clearly. The human brain works in mysterious ways and it seems ANC cancelling a class of noises (low hums, constant noises, etc.) makes other noises so much more noticeable. People talking or bursty high pitched noises bothered me much more with ANC on than without.

So for now, I'll keep using my trusty Sennheiser HD 280 Pro4 at work and good in-ear monitors with Comply foam tips on the go.


  1. This blog post certainly doesn't aim to be a comprehensive review of these headphones. See Zeos' review if you want something more in-depth. 

  2. As with most ANC headphones, they don't sound as good when used passively through the 3.5mm port, but that's just a testament to the great job Sony did tuning the DSP. 

  3. Easily fixed using an EQ. 

  4. Retrofitted with aftermarket sheepskin earpads, they provide more than 32 dB of passive noise reduction. 

03 October, 2021 04:00AM by Louis-Philippe Véronneau

October 02, 2021

Anuradha Weeraman

On blood-lines, forks and survivors

The lineage of a classic operating system

02 October, 2021 04:30PM by Anuradha Weeraman

Jacob Adams

SSH Port Forwarding and the Command Cargo Cult

Someone is Wrong on the Internet

If you look up how to only forward ports with ssh, you may come across solutions like this:

ssh -nNT -L 8000:example.com:80 user@bastion.example.com

Or perhaps this, if you also wanted to send ssh to the background:

ssh -NT -L 3306:db.example.com:3306 example.com &

Both of these use at least one option that is entirely redundant, and the second can cause ssh to fail to connect if you happen to be using password authentication. However, they seem to still persist in various articles about ssh port forwarding. I myself was using the first variation until just recently, and I figured I would write this up to inform others who might be still using these solutions.

The correct option for this situation is not -nNT but simply -N, as in:

ssh -N -L 8000:example.com:80 user@bastion.example.com

If you want to also send ssh to the background, then you’ll want to add -f instead of using your shell’s built-in & feature, because you can then input passwords into ssh if necessary1
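Putting the two together, the backgrounded version of the same forwarding example becomes:

ssh -f -N -L 8000:example.com:80 user@bastion.example.com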

Honestly, that’s the point of this article, so you can stop here if you want. If you’re looking for a detailed explanation of what each of these options actually does, or if you have no idea what I’m talking about, read on!

What is SSH Port Forwarding?

ssh is a powerful tool for remote access to servers, allowing you to execute commands on a remote machine. It can also forward ports through a secure tunnel with the -L and -R options. Basically, you can forward a connection to a local port to a remote server like so:

ssh -L 8080:other.example.com:80 ssh.example.com

In this example, you connect to ssh.example.com and then ssh forwards any traffic on your local machine port 80802 to other.example.com port 80 via ssh.example.com. This is a really powerful feature, allowing you to jump3 inside your firewall with just an ssh server exposed to the world.

It can work in reverse as well with the -R option, allowing connections on a remote host in to a server running on your local machine. For example, say you were running a website on your local machine on port 8080 but wanted it accessible on example.com port 804. You could use something like:

ssh -R 8080:example.com:80 example.com

The trouble with ssh port forwarding is that, absent any additional options, you also open a shell on the remote machine. If you’re planning to both work on a remote machine and use it to forward some connection, this is fine, but if you just need to forward a port quickly and don’t care about a shell at that moment, it can be annoying, especially since if the shell closes ssh will close the forwarding port as well.

This is where the -N option comes in.

SSH just forwarding ports

In the ssh manual page5, -N is explained like so:

Do not execute a remote command. This is useful for just forwarding ports.

This is all we need. It instructs ssh to run no commands on the remote server, just forward the ports specified in the -L or -R options. But people seem to think that there are a bunch of other necessary options, so what do those do?

SSH and stdin

-n controls how ssh interacts with standard input, specifically telling it not to:

Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)

SSH passwords and backgrounding

-f sends ssh to background, freeing up the terminal in which you ran ssh to do other things.

Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.

As indicated in the description of -n, this does the same thing as using the shell’s & feature with -n, but allows you to put in any necessary passwords first.

SSH and pseudo-terminals

-T is a little more complicated than the others and has a very short explanation:

Disable pseudo-terminal allocation.

It has a counterpart in -t, which is explained a little better:

Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

As the description of -t indicates, ssh is allocating a pseudo-terminal on the remote machine, not the local one. However, I have confirmed6 that -N doesn’t allocate a pseudo-terminal either, since it doesn’t run any commands. Thus this option is entirely unnecessary.

What’s a pseudo-terminal?

This is a bit complicated, but basically it’s an interface used in UNIX-like systems, like Linux or BSD, that pretends to be a terminal (thus pseudo-terminal). Programs like your shell, or any text-based menu system made with libraries like ncurses, expect to be connected to one (when used interactively at least). Basically it fakes as if the input it is given (over the network, in the case of ssh) was typed on a physical terminal device, and does things like raising an interrupt (SIGINT) if Ctrl+C is pressed.

Why?

I don’t know why these incorrect uses of ssh got passed around as correct, but I suspect it’s a form of cargo cult, where we use example commands others provide and don’t question what they do. One stack overflow answer I read that provided these options seemed to think -T was disabling the local pseudo-terminal, which might go some way towards explaining why they thought it was necessary.

I guess the moral of this story is to question everything and actually read the manual, instead of just googling it.

  1. Not that you SHOULD be using ssh with password authentication anyway, but people do. 

  2. Only on your loopback address by default, so that you’re not allowing random people on your network to use your tunnel. 

  3. In fact, ssh even supports Jump Hosts, allowing you to automatically forward an ssh connection through another machine. 

  4. I can’t say I recommend a setup like this for anything serious, as you’d need to ssh as root to forward ports less than 1024. SSH forwarding is not for permanent solutions, just short-lived connections to machines that would be otherwise inaccessible. 

  5. Specifically, my source is the ssh(1) manual page in OpenSSH 8.4, shipped as 1:8.4p1-5 in Debian bullseye. 

  6. I just forwarded ports with -N and then logged in to that same machine and looked at pseudo-terminal allocations via ps ux. No terminal is associated with ssh connections using just the -N option. 

02 October, 2021 12:00AM

October 01, 2021

Vincent Bernat

FRnOG #34: how we deployed a datacenter in one click

Here are the slides I presented for FRnOG #34 in October 2021. They are about automating the deployment of Blade’s datacenters using Jerikan and Ansible. For more information, have a look at “Jerikan+Ansible: a configuration management system for network.”

The presentation, in French, was recorded. I have added English subtitles.1


  1. Good thing if you don’t understand French as my diction was poor with a lot of fillers. ↩︎

01 October, 2021 02:35PM by Vincent Bernat

Russell Coker

Getting Started With Kali

Kali is a Debian based distribution aimed at penetration testing. I haven’t felt a need to use it in the past because Debian has packages for all the scanning tools I regularly use, and all the rest are free software that can be obtained separately. But I recently decided to try it.

Here’s the URL to get Kali [1]. For a VM you can get VMWare or VirtualBox images, I chose VMWare as it’s the most popular image format and also a much smaller download (2.7G vs 4G). For unknown reasons the torrent for it didn’t work (might be a problem with my torrent client). The download link for it was extremely slow in Australia, so I downloaded it to a system in Germany and then copied it from there.

I don’t want to use either VMWare or VirtualBox because I find KVM/Qemu sufficient to do everything I want and they are in the Main section of Debian, so I needed to convert the image files. Some of the documentation on converting image formats for use with QEMU/KVM says to use a program called “kvm-img”, which doesn’t seem to exist; I used “qemu-img” from the qemu-utils package in Debian/Bullseye. The man page qemu-img(1) doesn’t list the types of output format supported by the “-O” option, and the examples returned by a web search show using “-O qcow2“. It turns out that the following command will convert the image to “raw” format, which is the format I prefer. I use BTRFS for storing all my VM images and that does all the copy-on-write I need.

qemu-img convert Kali-Linux-2021.3-vmware-amd64.vmdk ../kali

After converting it the file was 500M smaller than the VMWare files (10.2 vs 10.7G). Probably the Kali distribution file could be reduced in size by converting it to raw and then back to VMWare format. The Kali VMWare image is compressed with 7zip which has a good compression ratio, I waited almost 90 minutes for zstd to compress it with -19 and the result was 12% larger than the 7zip file.

VMWare apparently likes to use an emulated SCSI controller, so I spent some time trying to get that going in KVM. Apparently recent versions of QEMU changed the way this works and therefore older web pages aren’t helpful. Also, allegedly the SCSI emulation is buggy and unreliable (but I didn’t manage to get it going so can’t be sure). It turns out that the VM is configured to work with the virtio interface: the initramfs.conf has the configuration option “MODULES=most”, which makes it boot on all common configurations (good work by the initramfs-tools maintainers). The image works well with the Spice display interface: the window for the VM works the same way as other windows on my desktop and doesn’t capture the mouse cursor. I don’t know if this level of Spice integration is in Debian now; last time I tested, it didn’t work that way.

I also downloaded Metasploitable [2], which is a VM image designed to be full of security flaws for testing the tools that are in Kali. Again it worked nicely after converting from VMWare to raw format. One thing to note about Metasploitable is that you must not make it available on the public Internet. My home network has NAT for IPv4 but all systems get public IPv6 addresses. It’s usually nice that those things just work on VMs, but not for this one. So I added an iptables command to block IPv6 to /etc/rc.local, as sketched below.
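Something along these lines would do it (a sketch; 2001:db8::42 stands in as a placeholder for the VM’s public IPv6 address, and the right chain depends on how the VM is networked):

# Drop all forwarded IPv6 traffic to/from the Metasploitable VM (placeholder address)
ip6tables -A FORWARD -d 2001:db8::42 -j DROP
ip6tables -A FORWARD -s 2001:db8::42 -j DROP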

Conclusion

Installing VMs for both these distributions was quite easy. Most of my time was spent downloading from a slow server, trying to get SCSI emulation working, working out how to convert image files, and testing different compression options. The time spent doing stuff once I knew what to do was very small.

Kali has zsh as the default shell, it’s quite nice. I’ve been happy with bash for decades, but I might end up trying zsh out on other machines.

01 October, 2021 04:56AM by etbe

hackergotchi for Junichi Uekawa

Junichi Uekawa

Garbage collecting with podman system prune.

Garbage collecting with podman system prune. Tells me it freed 20GB when it seems to have freed 4GB. Wondering where that discrepancy comes from.

01 October, 2021 02:37AM by Junichi Uekawa

Reproducible Builds (diffoscope)

diffoscope 186 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 186. This version includes the following changes:

[ Chris Lamb ]
* Don't call close_archive when garbage-collecting Archive instances unless
  open_archive returned successfully. This prevents, amongst others, an
  AttributeError traceback due to PGPContainer's cleanup routines assuming
  that its temporary directory had been created.
  (Closes: reproducible-builds/diffoscope#276)
* Ensure that the string "RPM archives" exists in the package description,
  regardless of whether python3-rpm is installed or not at build time.

[ Jean-Romain Garnier ]
* Fix the LVM Macho comparator for non-x86-64 architectures.

You can find out more by visiting the project homepage.

01 October, 2021 12:00AM

September 30, 2021

hackergotchi for Holger Levsen

Holger Levsen

20210930-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021 is almost over...

The Debian Reunion Hamburg 2021 is almost over now: half the attendees have already left for Regensburg, while the five remaining people are still busy here, though tonight there will be two concerts at the venue, plus some lovely food and more. Together with the day trip tomorrow (involving lots of water, but hopefully not from above...) I don't expect much more work to be done, so I feel comfortable publishing the following statistics now, even though I expect some more work will be done while travelling back or due to renewed energy from the event! So I might update these numbers later :-)

Together we did:

  • 27 uploads, plus 117 uploads from Gregor from the Perl team
  • 6 RC bugs closed
  • 2 RC bugs opened
  • 1 presentation given
  • 2 DM upload permissions were given
  • 1 DNS entry was setup for beta.tests.reproducible-builds.org, showing preliminary real-world data for Debian and Qubes OS, thanks to Qubes OS developer Frédéric Pierret
  • 1 dinner cooked
  • 5 people didn't show up, only 2 notified us
  • 2 people showed up without registration
  • had pretty good times and other quality stuff which is harder to quantify

I think that's pretty awesome and I am very happy we did this event!

Debian Reunion / MiniDebConf Hamburg 2022 - save the date, almost!

Thus I think we should have another Debian event at Fux in 2022, and after checking suitable free dates with the venue I think what could work out is an event from Monday May 23rd until Sunday May 29th 2022. What do you think?

For now these dates are preliminary. If you know any reasons why these dates could be less than optimal for such an event, please let me know. Assuming there's no feedback indicating this is a bad idea, the dates shall be finalized by November 1st 2021. Obviously assuming having physical events is still and again a thing! ;-)

30 September, 2021 06:04PM

September 29, 2021

Ingo Juergensmann

LetsEncrypt CA Chain Issues with Ejabberd

UPDATE:
It’s not as simple as described below, I’m afraid… It appears that it’s not that easy to obtain new/correct certs from LetsEncrypt that are not cross-signed by the DST Root X3 CA. Additionally, older OpenSSL versions (1.0.x) seem to have problems. So even when you think that your system is now ok, the remote server might refuse to accept your SSL cert. The same is valid for the SSL check on xmpp.net, which seems to be very outdated and beyond repair.

Honestly, I think the solution needs to be provided by LetsEncrypt…


I was having some strange issues on my ejabberd XMPP server the other day: some users complained that they couldn’t connect anymore to the MUC rooms on my server, and in the logfiles I discovered some weird warnings about LetsEncrypt certificates being expired, although they were brand new and valid until the end of December.

It looks like this:

[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed

and…

[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds

When checking with some online tools like SSLlabs or XMPP.net, the result was strange: SSLlabs reported everything was ok, while XMPP.net showed the chain with X3 and D3 certs as having a short-term validity of a few days.

After some days of fiddling around with the issue, trying to find a solution, it appears that there is a problem in Ejabberd when it finds old SSL certificates that use the old CA chain. Ejabberd has a really nice feature where you can just configure an SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them to the list of configured domains to see which it will need and which not.

What helped (for me at least) was deleting all expired SSL certs from my directory, downloading the current CA pem files from LetsEncrypt (see their blog post from September 2020), running update-ca-certificates and then ejabberdctl restart (instead of just ejabberdctl reload-config). UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring in the expired cert again.
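To summarise, the sequence that worked for me looked roughly like this (the cert path is from my setup, and the file names are only illustrative):

# remove any expired certs from the directory ejabberd scans (names are illustrative)
rm /etc/letsencrypt.sh/certs/example.com/old-fullchain.pem
# untick "DST Root CA X3" (and other expired roots) in the dialog
dpkg-reconfigure ca-certificates
update-ca-certificates
# a full restart, not just reload-config
ejabberdctl restart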

Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports of other domains as well.

Disclaimer: Again: this helped me in my case. I don’t know if this is a bug in Ejabberd, nor whether this procedure will help you in your case, nor if this is the proper solution. But maybe my story will help you solve your issue if you’ve experienced SSL cert issues in the last few days, especially now that the R3 cert has already expired and the X3 cert will follow in a few hours.

29 September, 2021 09:49PM by ij

Ian Jackson

Rust for the Polyglot Programmer

Rust is definitely in the news. I'm definitely on the bandwagon. (To me it feels like I've been wanting something like Rust for many years.) There're a huge number of intro tutorials, and of course there's the Rust Book.

A friend observed to me, though, that while there's a lot of "write your first simple Rust program" there's a dearth of material aimed at the programmer who already knows a dozen diverse languages, and is familiar with computer architecture, basic type theory, and so on. Or indeed, for the impatient and confident reader more generally. I thought I would have a go.

Rust for the Polyglot Programmer is the result.

Compared to much other information about Rust, Rust for the Polyglot Programmer is:

  • Dense: I assume a lot of starting knowledge. Or to look at it another way: I expect my reader to be able to look up and digest non-Rust-specific words or concepts.

  • Broad: I cover not just the language and tools, but also the library ecosystem, development approach, community ideology, and so on.

  • Frank: much material about Rust has a tendency to gloss over or minimise the bad parts. I don't do that. That also frees me to talk about strategies for dealing with the bad parts.

  • Non-neutral: I'm not afraid to recommend particular libraries, for example. I'm not afraid to extol Rust's virtues in the areas where it does well.

  • Terse, and sometimes shallow: I often gloss over what I see as unimportant or fiddly details; instead I provide links to appropriate reference materials.

After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it.

Thanks are due to Simon Tatham, Mark Wooding, Daniel Silverstone, and others, for encouragement, and helpful reviews including important corrections. Particular thanks to Mark Wooding for wrestling pandoc and LaTeX into producing a pretty good-looking PDF. Remaining errors are, of course, mine.

Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)

edited 2021-09-29 16:58 UTC to fix Salsa link target, and 17:01 and 17:21 for minor grammar fixes




29 September, 2021 05:21PM

September 28, 2021

hackergotchi for Jonathan McDowell

Jonathan McDowell

Adding Zigbee to my home automation

SonOff Zigbee Door Sensor

My home automation setup has been fairly static recently; it does what we need and generally works fine. One area I think could be better is controlling it; we have access to Home Assistant on our phones, and the Alexa downstairs can control things, but there are no smart assistants upstairs and sometimes it would be nice to just push a button to turn on the light rather than having to get my phone out. Because the UK generally doesn’t have a neutral wire in wall switches, that means looking at something battery powered. Which means wifi based devices are a poor choice, and it’s necessary to look at something lower power like Zigbee or Z-Wave.

Zigbee seems like the better choice; it’s a more open standard and there are generally more devices easily available from what I’ve seen (e.g. Philips Hue and IKEA TRÅDFRI). So I bought a couple of Xiaomi Mi Smart Home Wireless Switches, and a CC2530 module and then ignored it for the best part of a year. Finally I got around to flashing the Z-Stack firmware that Koen Kanters kindly provides. (Insert rant about hardware manufacturers that require pay-for tool chains. The CC2530 is even worse because it’s 8051 based, so SDCC should be able to compile for it, but the TI Zigbee libraries are only available in a format suitable for IAR’s embedded workbench.)

Flashing the CC2530 is a bit of a faff. I ended up using the CCLib fork by Stephan Hadinger, which supports the ESP8266. The nice thing about the CC2530 module is that it has 2.54mm pitch pins, so it’s nice and easy to jumper up. It then needs a USB/serial dongle to connect it up to a suitable machine, where I ran Zigbee2MQTT. This scares me a bit, because it’s a bunch of node.js pulling in a chunk of stuff off npm. On the flip side, it Just Works and I was able to pair the Xiaomi button with the device and see MQTT messages that I could then use with Home Assistant. So of course I tore down that setup and went and ordered a CC2531 (the variant with USB as part of the chip). The idea here was that my test setup was upstairs with my laptop, and I wanted something hooked up in a more permanent fashion.

Once the CC2531 arrived I got distracted writing support for the Desk Viking to support CCLib (and modified it a bit for Python3 and some speed ups). I flashed the dongle up with the Z-Stack Home 1.2 (default) firmware, and plugged it into the house server. At this point I more closely investigated what Home Assistant had to offer in terms of Zigbee integration. It turns out the ZHA integration has support for the ZNP protocol that the TI devices speak (I’m reasonably sure it didn’t when I first looked some time ago), so that seemed like a better option than adding the MQTT layer in the middle.

I hit some complexity passing the dongle (which turns up as /dev/ttyACM0) through to the Home Assistant container. First I needed an override file in /etc/systemd/nspawn/hass.nspawn:

[Files]
Bind=/dev/ttyACM0:/dev/zigbee

[Network]
VirtualEthernet=true

(I’m not clear why the VirtualEthernet needed to exist; without it networking broke entirely but I couldn’t see why it worked with no override file.)

A udev rule on the host to change the ownership of the device file so the root user and dialout group in the container could see it was also necessary, so into /etc/udev/rules.d/70-persistent-serial.rules went:

# Zigbee for HASS
SUBSYSTEM=="tty", ATTRS{idVendor}=="0451", ATTRS{idProduct}=="16a8", SYMLINK+="zigbee", \
	MODE="660", OWNER="1321926676", GROUP="1321926676"

In the container itself I had to switch PrivateDevices=true to PrivateDevices=false in the home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked down things).

Finally I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5 due to upstream dropping support for Python 3.7 and not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn’t have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out.

There was another slight problem. Range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router so provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept overhead lights as uncontrolled - we don’t use them day to day and it provides a nice fallback if the home automation has issues.

Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs.

Now I’ve got a Zigbee setup there are a few more things I’m thinking of adding, where wifi isn’t an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn’t suitable for my needs, as I’ll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable…

28 September, 2021 02:42PM

hackergotchi for Holger Levsen

Holger Levsen

20210928-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, klein aber fein / small but beautiful

So the Debian Reunion Hamburg 2021 has been going on for just under 48h now and it appears people are having fun, enjoying discussions between fellow Debian people and getting some stuff done as well. I guess I'll write some more about it once the event is over...

Sharing android screens...

For now I just want to share one little gem I learned about yesterday on the hallway track:

$ sudo apt install scrcpy
$ scrcpy

And voila, once again I can type on my phone with a proper keyboard and copy and paste URLs between the two devices. One can even watch videos on the big screen with it :)

(This requires ADB debugging enabled on the phone, but doesn't require root access.)

28 September, 2021 11:16AM

September 27, 2021

hackergotchi for Wouter Verhelst

Wouter Verhelst

SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another.

This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy.

As I think SReview::Video is actually an easy-to-use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself.

The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all).

Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually.

I'll upload it to Debian soon, too.

27 September, 2021 12:31PM

Russ Allbery

Review: The Problem with Work

Review: The Problem with Work, by Kathi Weeks

Publisher: Duke University Press
Copyright: 2011
ISBN: 0-8223-5112-9
Format: Kindle
Pages: 304

One of the assumptions baked deeply into US society (and many others) is that people are largely defined by the work they do, and that work is the primary focus of life. Even in Marxist analysis, which is otherwise critical of how work is economically organized, work itself reigns supreme. This has been part of the feminist critique of both capitalism and Marxism, namely that both devalue domestic labor that has traditionally been unpaid, but even that criticism is normally framed as expanding the definition of work to include more of human activity. A few exceptions aside, we shy away from fundamentally rethinking the centrality of work to human experience.

The Problem with Work begins as a critical analysis of that centrality of work and a history of some less-well-known movements against it. But, more valuably for me, it becomes a discussion of the types and merits of utopian thinking, including why convincing other people is not the only purpose for making a political demand.

The largest problem with this book will be obvious early on: the writing style ranges from unnecessarily complex to nearly unreadable. Here's an excerpt from the first chapter:

The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.

This is only a quarter of a paragraph, and the entire book is written like this.

I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks.

The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me.

The first was the overview of the "wages for housework" movement, which I had not previously heard of. It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s.

Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself.

Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's similar in concept to the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought.

Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible.

The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach.

I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me.

Rating: 5 out of 10

27 September, 2021 04:41AM

September 22, 2021

hackergotchi for Gunnar Wolf

Gunnar Wolf

New book out! «Mecanismos de privacidad y anonimato en redes, una visión transdisciplinaria»

Three years ago, I organized a fun and most interesting colloquium at Facultad de Ingeniería, UNAM about privacy and anonymity online.

I would have loved to share this earlier with the world, but… The university’s processes are quite slow (and, to be fair, I also took quite a bit of time to push things through). But today, I’m finally happy to share the result of that work with all of you. We managed to get 11 of the talks in the colloquium as articles. The back-cover text reads (in Spanish):

We live in an era where human to human interactions are more and more often mediated by technology. This, of course, means everything leaves a digital trail, a trail that can follow us relentlessly. Privacy is recognized, however, as a human right — although one that is under growing threat. Anonymity is the best tool to secure it. Throughout history, clear steps have been taken –legally, technically and technologically– to defend it. Various studies point out this is not only a known issue for the network's users, but that a large majority has searched for alternatives to protect their communications' privacy. This book stems from a colloquium held by *Laboratorio de Investigación y Desarrollo de Software Libre* (LIDSOL) of Facultad de Ingeniería, UNAM, towards the end of 2018, where we invited experts from disciplines as far apart as law and systems development, psychology and economics, to contribute with their experiences to a transdisciplinary vision.

If this interests you, you can get the book at our institutional repository.

Oh, and… What about the birds?

In Spanish (Mexican only?), we have a saying, «hay pájaros en el alambre», meaning watch your words, as uninvited people might be listening, like birds resting on the wires over which phone calls used to be made (back in the day when wiretapping was that easy). I found the design proposed by our editor ingenious and very fitting for our topic!

22 September, 2021 06:26PM

Ian Jackson

Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way.

Audiences for this post

  • The educated general reader interested in a case study involving error handling, stability, API design, and/or Rust.
  • Rust users who have tripped over these changes. If this is you, you can cut to the chase and skip to How to fix.

Background and context

Error handling principles

Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config.

If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong.

Rust's portability aims

The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind, an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly.

That these ErrorKinds are part of the Rust standard library means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is ErrorKind::NotFound (or whatever).

Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error.
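
For illustration, code which has to produce an io::Error for a failure that isn't an OS error at all might write something like this (a minimal sketch; the function name and message are mine, not from any real library):

    use std::io;

    // A failure that isn't an OS error (say, a malformed record deep
    // inside an io-flavoured API) still has to be an io::Error;
    // Other is the designated catch-all for it.
    fn parse_failed() -> io::Error {
        io::Error::new(io::ErrorKind::Other, "malformed record in stream")
    }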

Rust's stability aims and approach

Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:

  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.

By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue to work properly in the future, even as the language and ecosystem evolve.

This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious.

Rust enums, as relevant to io::ErrorKind

(Very briefly:)

When you have a value which is an io::ErrorKind, you can compare it with specific values:

    if error.kind() == ErrorKind::NotFound { ...
  
But in Rust it's more usual to write something like this (which you can read like a switch statement):

    match error.kind() {
      ErrorKind::NotFound => use_default_configuration(),
      _ => panic!("could not read config file {}: {}", &file, &error),
    }
  

Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile.

Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.
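
A toy illustration of the effect (the enum here is hypothetical, not from std):

    #[non_exhaustive]
    pub enum MyKind {
        NotFound,
        PermissionDenied,
    }

    // In a downstream crate, a match on MyKind must include a `_`
    // arm, even if it lists every variant that exists today, because
    // the enum is declared free to grow new variants.
    pub fn describe(k: &MyKind) -> &'static str {
        match k {
            MyKind::NotFound => "not found",
            _ => "some other kind",
        }
    }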

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all".

Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct.

But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other.
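
For reference, the diligent version of that check might look like this on Unix (a sketch only; it leans on the libc crate for the errno constant):

    // Sketch: `libc` is an external crate, added as a dependency.
    fn is_disk_full(e: &std::io::Error) -> bool {
        // Dig out the raw OS error number rather than relying on the
        // ErrorKind; ENOSPC is the Unix "No space left on device" errno.
        e.raw_os_error() == Some(libc::ENOSPC)
    }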

Checking for Other like this is obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors.

Unfortunately, the documentation note

Errors that are Other now may move to a different or a new ErrorKind variant in the future.

was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything.

The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution:

There is now a new ErrorKind::Uncategorized, which is used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized.

This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change.

The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect.

The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds

This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before

This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now. So this option amounts to expecting this broken code to continue to be written, and continuing to break it occasionally.

Somehow using Rust's Edition system

The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually.

It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling.

Also some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.
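
Schematically (report_and_quit here is a stand-in for whatever your program's fallback behaviour is):

    match error.kind() {
        ErrorKind::NotFound => use_default_configuration(),
        // Previously:  ErrorKind::Other => report_and_quit(&error),
        // Fixed: a catch-all arm, which also covers Uncategorized and
        // any ErrorKinds stabilised in the future.
        _ => report_and_quit(&error),
    }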

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant, that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".

What does "any of the recognised kinds" mean here ? It doesn't meany any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written".

IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right.

You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures.
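
Such a helper might look like the following sketch. The list is a deliberately frozen snapshot; take it from the ErrorKind documentation for the stdlib version current when you freeze it, rather than from my list here:

    use std::io::ErrorKind;

    /// True if `kind` was a recognised, stable ErrorKind when this
    /// list was frozen. Update the list deliberately, not routinely.
    fn was_recognised_when_frozen(kind: ErrorKind) -> bool {
        use ErrorKind::*;
        matches!(
            kind,
            NotFound | PermissionDenied | ConnectionRefused
                | ConnectionReset | ConnectionAborted | NotConnected
                | AddrInUse | AddrNotAvailable | BrokenPipe
                | AlreadyExists | WouldBlock | InvalidInput
                | InvalidData | TimedOut | WriteZero | Interrupted
                | Unsupported | OutOfMemory | Other | UnexpectedEof
        )
    }

A test expecting a currently-uncategorised error then asserts that was_recognised_when_frozen(err.kind()) is false.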

Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.

Conclusions

This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good.

It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.

edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips




22 September, 2021 04:10PM

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2021 for Debian

The release of TeX Live 2021 is already half a year behind us, but due to waiting for the Debian/Bullseye release, we hadn't updated TeX Live in Debian for quite some time. But the waiting is over: today I uploaded the first packages of TeX Live 2021 to unstable.

All the changes listed in the upstream release blog apply also to the Debian packages.

I expect a few hiccups, but it is good to see it out of the door finally.

Enjoy.

22 September, 2021 05:42AM by Norbert Preining

September 21, 2021

hackergotchi for Clint Adams

Clint Adams

Outrage culture killed my dog

Why can't Debian have embarrassing flamewars like this thread?

Posted on 2021-09-21
Tags: barks

21 September, 2021 03:36PM

Russell Coker