June 06, 2023

Russell Coker

PinePhonePro First Impression

Hardware

I received my PinePhone Pro [1] on Thursday, and in many ways it seems better than the Purism Librem 5 [2] that I have previously written about. The PinePhone is thinner, lighter, and yet has a much longer battery life. A friend described the Librem5 as “the CyberTruck phone”, and not in a good way.

In a test I had my PinePhone and my Librem5 fully charged, left them for 4.5 hours without doing anything much with them, and then the PinePhone was at 85% and the Librem5 was at 57%. So the Librem5 will run out of battery after about 10 hours of not being used while a PinePhonePro can be expected to last about 30 hours (43% used in 4.5 hours extrapolates to roughly 10 hours, 15% in 4.5 hours to roughly 30). The PinePhonePro isn’t as good as some of the recent Android phones in this regard but it shows the potential to be quite usable. For this test both phones were connected to a 2.4GHz Wifi network (which uses less power than 5GHz) and doing nothing much with an out of the box configuration. A phone that is checking email, social networking, and a couple of IM services will use the battery faster. But even if the PinePhone has its battery used twice as fast in a more realistic test it will still be usable.

Here are the Passmark results from the PinePhone Pro [3], which got a CPU score of 888 compared to 507 for the Librem 5 and 678 for one of the slower laptops I’ve used. The results are excluded from the Passmark averages because the benchmark identified the CPU as only having 4 cores (expecting just 4*A72) while the PinePhonePro has 6 cores (2*A72+4*A53). This phone definitely has the CPU power for convergence [4]!

Default OS

By default the PinePhone has a KDE based GUI and the Librem5 has a GNOME based GUI. I don’t like any iteration of GNOME (I have tried them all and disliked them all) and I like KDE so I will tend to like anything that is KDE based more than anything GNOME based. But in addition to that the PinePhone has an interface that looks a lot like Android, with the three on-screen buttons at the bottom of the display and the slide up tray for installed apps. Android is the most popular phone OS, and looking like the most common option is often a good idea for a new and different product, so this seems like an objective criterion for deciding that the default GUI on the PinePhone is the better choice (at least as the default).

When I first booted it and connected it to Wifi the updates app said that there were 633 updates to apply, but never applied them (I tried clicking on the update button but to no avail) and didn’t give any error message. For me not being Debian is enough reason to dislike Manjaro, but if that wasn’t enough then the failure to update would be a good start. When I ran pacman in a terminal window it said that each package was corrupt and asked if I wanted to delete it. According to “tar tvJf” the packages weren’t corrupt. After downloading them again it said that they were corrupt again so it seemed that pacman wasn’t working correctly.
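For anyone hitting the same issue, the sort of commands involved in checking and retrying look roughly like the following (the package file name is hypothetical, and in my case even re-downloaded packages were still reported as corrupt):

# manually inspect a cached package archive that pacman claims is corrupt
tar tvJf /var/cache/pacman/pkg/some-package-1.0-1-aarch64.pkg.tar.xz > /dev/null
# refresh the keyring, clear the package cache, and retry the upgrade
sudo pacman-key --refresh-keys
sudo pacman -Scc
sudo pacman -Syu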

When the screen is locked and a call comes in it gives a window with Accept and Reject buttons, but neither of them works. The default country code for “Spacebar” (the SMS app) is +1 (US) even though I specified Australia at the initial login. It also doesn’t determine the APN automatically, unlike Android phones which seem to have some sort of list of APNs.

Upgrading to Debian

The Debian Wiki page about Installing on the PinePhone Pro has the basic information [5]. The first thing it covers is installing the Tow-Boot boot loader – which is already installed by default in recent PinePhones (such as mine). You can recognise that Tow-Boot is installed by pressing the volume-up button in the early stages of boot (described as “before and during the second vibration”), then the LED will turn blue and the phone will act as a USB mass storage device, which makes it easy to do other install/recovery tasks. The other Tow-Boot option is to press volume-down to boot from a MicroSD card (the default is to boot the OS on the eMMC).

The images linked from the Debian wiki page are designed to be installed with bmaptool from the bmap-tools Debian package. After installing that package and downloading the pre-built Mobian image I installed it with the command “bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdb”, where /dev/sdb was the device where the USB-mapped PinePhone storage appeared. That took 6 minutes and then I rebooted my PinePhone into Mobian!
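For reference, the overall process was roughly the following (the device name is an example only, so check what the phone actually shows up as first):

sudo apt install bmap-tools
# with Tow-Boot in USB mass storage mode, identify the phone's eMMC device
lsblk
# write the image; bmaptool decompresses the .gz on the fly
sudo bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdX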

Unfortunately the default GUI for Mobian is GNOME/Phosh. Changing it to KDE is my next task.

06 June, 2023 01:24PM by etbe

Dell 32″ 4K Monitor and DisplayPort Switch

After determining that the Philips 43″ monitor was too large for my taste as well as not having a clear enough display [1], I bought a Dell 32″ 4K monitor for $499 on the 1st of July 2022. That monitor has been working nicely for almost a year now, for DisplayPort its operation is perfect and 32″ seems like an ideal size for my use. There is one problem: both HDMI ports will sometimes turn off for about half a second. I’ve tested on both ports and on multiple computers as well as a dock and it gives the same result, so it’s definitely the monitor. The problem for me is that the most casual inspection won’t reveal it, and the monitor is large and difficult to transport as I’ve thrown out the box. If I had this sort of problem with a monitor at work I’d add it to the list of things for Dell to fix next time they visit the office or use one of the many monitor boxes available to ship it back to them. But for home use it’s more of a problem for me. The easiest solution is to avoid HDMI.

A year ago I blogged about using DDC to switch monitor inputs [2], and since then I have had that running with a cheap USB switch to allow a workstation and a laptop to share the same monitor, keyboard, and mouse. Recently I got a USB-C dock that allows a USB-C laptop to talk to a display via DisplayPort as opposed to the HDMI connector that’s built in. But my Dell monitor only has one DisplayPort input.
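For anyone wanting to do the same, a minimal sketch of DDC input switching with ddcutil looks something like this (the input-source values are monitor specific, so check the capabilities output for your model first):

# list the displays that support DDC/CI
ddcutil detect
# show which values this monitor accepts for VCP feature 0x60 (input source)
ddcutil capabilities
# switch to the DisplayPort input (0x0f on many monitors; HDMI-1 is often 0x11)
ddcutil setvcp 60 0x0f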

So I have just bought a DisplayPort and USB KVM switch via eBay for $52, a reasonable price given that last year such things were well over $100. It has ports for 3 USB devices which is better than my previous setup of a USB switch with only a single port that I used with a 3 port hub for my keyboard and mouse.

The DisplayPort switch is described as doing 4K at 60Hz. I don’t know how it will perform with a 5K monitor, maybe it will work at 30Hz or 40Hz. But currently Dell 5K monitors are at $2,500 and 6K monitors are about $3,800 so I don’t plan to get one of them any time soon.

06 June, 2023 08:41AM by etbe


Shirish Agarwal

Odisha Train Crash and Coverup, Demonetization 2.0 & NHFS-6 Survey

Just a few days back we came to know about the horrific Train Crash that happened in Odisha (Orissa). There are some things that are known and some things that can be inferred by observation. Sadly, it seems the incident is going to be covered up 😦 . Some of the facts that have not been contested in the public domain are that there were three lines: one loop line on which the Goods Train was standing, plus an up line and a down line. Apparently, the signalling system and the inter-locking system had issues, as highlighted by an official about a month back. That letter, thankfully, is in the public domain and I have downloaded it as well. It’s a letter that runs to 4 pages. The RW is incensed that the letter got leaked and is in the public domain. They are blaming everyone and espousing conspiracy theories rather than taking the minister to task. Incidentally, the Minister currently holds three ministries: the Ministry of Communication, the Ministry of Electronics and Information Technology (MEIT), and the Railways Ministry. Each Ministry in itself is important and has revenues of more than 6 lakh crore rupees. How he is able to do justice to all the three ministries is beyond me 😦

The other thing is that funds both for safety and for relaying of tracks have been either not sanctioned or left unutilized. In fact, the CAG and the Railway brass had shared how derailments have increased and vacancies remain unfilled, but they were given no importance 😦 In fact, not talking about safety in the recently held ‘Chintan Shivir’ (brainstorming session) tells you how serious the Govt. is about safety. In fact, most of the programme was on high speed rail, which is a white elephant. I have shared a whitepaper done by the RW in the U.S. that tells how high-speed rail doesn’t make economic sense. And that is an economy that is 20 times + the Indian Economy. Even the Chinese are stopping with HSR as it doesn’t make economic sense.

Incidentally, Air Fares again went up 200% yesterday. Somebody shared a fare in the region of 20k + for an Air ticket from their place to Bangalore 😦

Coming back to the story itself: the Goods Train was on the loop line. Some say it was a little bit on the outer, some say otherwise, but it is established that it was on the loop line. This is standard behavior on and around Railway Stations around the world. Whether it was on the inner or outer doesn’t make much of a difference to what happened next. The first train that collided with the goods train was the 12864 (SMVB-HWH) Yashwantpur Howrah Express, which got derailed on to the next track, where from the opposite direction the 12841 (Shalimar-Bangalore) Coramandel Express was coming. Now they have said that around 300 people have died and that seems to be part of the cover-up. Both the trains are long trains, having 23-odd coaches each. Even if you have reserved tickets you have 80-odd people in a coach, and usually in most of these trains it is at least double that. A lot of money goes to the TC and then above (corruption). The Railway fares have gone up enormously but that’s a question for perhaps another time 😦 . So at the very least, we could be looking at more than 1000 people having died. The numbers are being under-reported so that nobody has to take responsibility. The Railways itself has said that it is unable to identify 80% of the people who have died. This means that 80%, or a majority of them, were unreserved ticket holders. There have been disturbing images of how bodies have been flung onto tractors and whatnot to be either buried or cremated without a thought. We are in peak summer season so bodies will start to rot within 24-48 hours 😦 No arrangements were made to cool the bodies and take some information and identifying marks or whatever. The whole thing is being done in a very callous manner, not giving dignity even to those who have died for no fault of their own. The dissent note also tells that a cover-up is in the picture. Apparently, India doesn’t have, nor does it feel the need for, something like the NTSB that the U.S. used when it hauled up both the plane manufacturer (Boeing) and the FAA when the 737 Max went down due to improper data collection and sharing of data with pilots. And with no accountability being fixed to the Minister or any of the senior staff, a small junior staff person may be fired. Perhaps the same official who actually told them about the signal failures almost 3 months back 😦

There were and are also some reports that some ‘jugaadu’/temporary fixes were applied to the signalling and inter-locking just before this incident happened. I do not know, nor can I confirm, one way or the other whether the above happened. I can however point out that if such a thing happened, then usually a traffic block is announced and all traffic on those lines is stopped. This has been the practice I have known for decades; traveling between Mumbai and Pune multiple times over the years, I am aware of traffic blocks. If some repair work was going on and it couldn’t be completed within the time-frame then that may well have contributed to the accident. There is also a bit of muddying of the waters: it is being said that one of the trains was 4 hours late, but which one depends on conflicting stories.

On top of the whole thing, they have put the case to be investigated by the CBI and are hinting at sabotage. They also tried to paint a religious structure as a mosque; it later turned out to be a temple. The RW says it was done by Muslims as it was Friday, not taking into account, as shared before, that most Railway maintenance works are usually done between Friday and Monday. This is a practice followed not just in India but the world over.

There has also been a move over the last decade to remove wooden sleepers and have concrete sleepers. Unlike the wooden ones they do not expand and contract as much and their life is much longer than the wooden ones. Funds had been earmarked (although lower than in the last few years) but not yet spent. As we know, in the case of any accident, it happens when all the holes in the cheese line up. Fukushima is a great example of that: no sea wall even though Japan is no stranger to Tsunamis, external power at the same level as the plant (10 meters above sea-level), and no training for cascading failure scenarios, which is what happened. The Days mini-series shares some but not all of the faults that happened at Fukushima and the Govt. response to it. There is a difference though: the Japanese Prime Minister resigned on moral grounds. Here, neither the PM nor the Minister would be resigning on moral grounds or otherwise :(. Zero accountability, and that was partly a natural disaster; here it’s man-made. In fact, both the Minister and the Prime Minister arrived with their entourages and did a PR blitzkrieg showing how concerned they are. Within 50 hours, the lines were cleared. The part-time Railway Minister shared that he knows the root cause and then a few hours later gave the case to the CBI. All are saying, wait for the inquiry report. To date, none of the accidents even under this Govt. has produced an investigation report. And even if it did, I am sure it would whitewash, as it did in the case of Adani, as I had shared before in the previous blog post. Incidentally, it is reported that Adani paid off some of its debt, but when questioned as to where they got the money, complete silence on that part :(. As can be seen, cover-up after cover-up 😦

FWIW, the Coramandel Express is known as the migrant train so it has a huge number of passengers, while the other one involved in the collision is known as the ‘sick train’ as a huge number of cancer patients use it to travel to Chennai and come back 😦

Demonetization 2.0

A few days back, India announced demonetization 2.0. Surprised? Don’t be. Apparently, the INR 2k/- note is being used for corruption and Mr. Modi is unhappy about it. He actually didn’t like the INR 2k/- note but was told that it was needed; who told him, we are unaware of to date. At that time the RBI Governor was Mr. Urjit Patel, who didn’t talk about an INR 2k/- note; he had said that a redesigned INR 1k/- note would come into the market. That has yet to happen. What has happened is that, just as with the INR 500/- and INR 1k/- notes, the RBI will no longer honor the INR 2k/- note. Obviously, this has made our neighbors angry, namely Nepal, Sri Lanka, Bhutan etc. who do some trading with us. Two Deccan Herald columns throw light on it. Apparently, India wants to be the world’s currency reserve but doesn’t want to play by the rules that apply to everyone else. It was pointed out that both the U.S. and Singapore had retired some of their currency notes but they honor that promise even today. The Singapore example, being a bit closer (as it’s in Asia), is perhaps a bit more relevant than the U.S. one. Singapore retired the SGD $10,000 as of 2014 but even in 2022 it remains legal tender. They also retired the SGD $1,000 in 2020 but it still remains legal tender.

So let’s have a fictitious example to illustrate what Singapore has done. Let’s say I go to Singapore, rent a flat, and find a $1000 note in that house somewhere. Both practically and theoretically, I could go down to any of the banks, get the amount transferred to my wallet, bank account etc. and nobody will question it. Because they have promised the same. Interestingly, the Singapore Dollar has been pretty resilient against the USD for quite a number of years vis-a-vis other Asian currencies.

Most of the INR 2k/- notes were also found and exchanged in Gujarat in just a few days (the PM and HM’s state). I am sure you can imagine the mental gymnastics that the RW indulge in :(. What is sadder is that most of the people who try to defend it can’t make sense of it one way or the other and start to name-call and get personal as they have nothing else 😦

Disability questions dropped in NHFS-6

Just came to know today that in the upcoming National Family Health Survey-6 the disability questions are being dropped. Why is this important? To put it simply, if you don’t have numbers, you won’t and can’t make policies for them. India is one of the worst countries to live in if you are disabled. The easiest example to draw attention to is that most Railway platforms are not level with the train coaches. Just as Mick Lynch shares in the UK, the same is pretty much true for India too. Meanwhile in Europe, they do make an effort to keep things level so even disabled people have some dignity. If your public transport is sorted, then people would want much more and you will be obligated to provide for them as they are citizens. Here, we have had many reports of women being sexually molested when being transferred from platform to coach, irrespective of their age or whatnot 😦 The main takeaway is that if you do not have their voice, you won’t make policies for them. They won’t go away but you will make life hell for them. One thing to keep in mind is that most people assume that most disabled people are disabled from birth. This may or may not be true. For example, in the above triple Railway accident, there are bound to be disabled people or newly disabled people who were healthy before the accident. The most common accident is a road accident, some involving pedestrians and vehicles or both; the easiest reference is Ministry of Road Transport data that says 4,00,000 people sustained injuries in 2021 alone in road mishaps. And this is in a country where even accidents are highly under-reported, for more than one reason. The biggest reason, especially for 2 and 4 wheelers, is the increased premium they would have to pay after an accident, so they usually compromise with the other party and pay off the Traffic Inspector.

Sadly, I haven’t read a new book, although there are a few books I’m looking forward to. People living in India and neighboring countries please be careful as more heat waves are expected. Till later.

06 June, 2023 07:12AM by shirishag75

June 05, 2023

Reproducible Builds

Reproducible Builds in May 2023

Welcome to the May 2023 report from the Reproducible Builds project

In our reports, we outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.


Holger Levsen gave a talk at the 2023 edition of the Debian Reunion Hamburg, a semi-informal meetup of Debian-related people in northern Germany. The slides are available online.


In April, Holger Levsen gave a talk at foss-north 2023 titled Reproducible Builds, the first ten years. Last month, however, Holger’s talk was covered in a round-up of the conference on the Free Software Foundation Europe (FSFE) blog.


Pronnoy Goswami, Saksham Gupta, Zhiyuan Li, Na Meng and Daphne Yao from Virginia Tech published a paper investigating the Reproducibility of NPM Packages. The abstract includes:

When using open-source NPM packages, most developers download prebuilt packages on npmjs.com instead of building those packages from available source, and implicitly trust the downloaded packages. However, it is unknown whether the blindly trusted prebuilt NPM packages are reproducible (i.e., whether there is always a verifiable path from source code to any published NPM package). […] We downloaded versions/releases of 226 most popularly used NPM packages and then built each version with the available source on GitHub. Next, we applied a differencing tool to compare the versions we built against versions downloaded from NPM, and further inspected any reported difference.

The paper reports that “among the 3,390 versions of the 226 packages, only 2,087 versions are reproducible,” and furthermore that multiple factors contribute to the non-reproducibility including “flexible versioning information in package.json file and the divergent behaviors between distinct versions of tools used in the build process.” The paper concludes with “insights for future verifiable build procedures.”

Unfortunately, a PDF is not publicly available yet, but a Digital Object Identifier (DOI) is available on the paper’s IEEE page.


Elsewhere in academia, Betul Gokkaya, Leonardo Aniello and Basel Halak of the School of Electronics and Computer Science at the University of Southampton published a new paper containing a broad overview of attacks and comprehensive risk assessment for software supply chain security.

Their paper, titled Software supply chain: review of attacks, risk assessment strategies and security controls, analyses the most common software supply-chain attacks by providing the latest trends in analysed attacks, and identifies the security risks for open-source and third-party software supply chains. Furthermore, their study “introduces unique security controls to mitigate analyzed cyber-attacks and risks by linking them with real-life security incidence and attacks”. (arXiv.org, PDF)


NixOS is now tracking two new reports at reproducible.nixos.org. Aside from the collection of build-time dependencies of the minimal and Gnome installation ISOs, this page now also contains reports that are restricted to the artifacts that make it into the image. The minimal ISO is currently reproducible except for Python 3.10, which hopefully will be resolved with the coming update to Python version 3.11.


On our rb-general mailing list this month:

David A. Wheeler started a thread noting that the OSSGadget project’s oss-reproducible tool was measuring something related to, but not the same as, reproducible builds. Initially they had adopted the term “semantically reproducible build” for what it measured, which they defined as being “if its build results can be either recreated exactly (a bit for bit reproducible build), or if the differences between the release package and a rebuilt package are not expected to produce functional differences in normal cases.” This generated a significant number of replies, and several were concerned that people might confuse what they were measuring with “reproducible builds”. After discussion, the OSSGadget developers decided to switch to the term “semantically equivalent” for what they measured in order to reduce the risk of confusion.

Vagrant Cascadian (vagrantc) posted an update about GCC, binutils, and Debian’s build-essential set with “some progress, some hope, and I daresay, some fears…”.

Lastly, kpcyrd asked a question about building a reproducible Linux kernel package for Arch Linux (answered by Arnout Engelen). In the same thread, David A. Wheeler pointed out that the Linux kernel documentation now has a chapter about Reproducible kernel builds as well.


In Debian this month, nine reviews of Debian packages were added, 20 were updated and 6 were removed, all adding to our knowledge about identified issues. In addition, Vagrant Cascadian added a link to the source code causing various ecbuild issues. []


The F-Droid project updated its Inclusion How-To with a new section explaining why it considers reproducible builds to be best practice, and hopes developers will support the team’s efforts to make as many (new) apps reproducible as reasonably possible.


In diffoscope development this month, version 242 was uploaded to Debian unstable by Chris Lamb who also made the following changes:

  • If binwalk is not available, ensure the user knows they may be missing more info. []
  • Factor out generating a human-readable comment when missing a Python module. []

In addition, Mattia Rizzolo documented how to (re)-produce a binary blob in the code [] and Vagrant Cascadian updated the version of diffoscope in GNU Guix to 242 [].
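For readers who haven’t used diffoscope before, a typical invocation to compare two builds of the same package looks something like the following (the file names are examples only):

# compare two .deb files and write a detailed HTML report of any differences
diffoscope --html report.html foo_1.0-1_amd64.deb foo_1.0-1_amd64.rebuild.deb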


reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Holger Levsen uploaded versions 0.7.24 and 0.7.25 to Debian unstable which added support for Tox versions 3 and 4 with help from Vagrant Cascadian [][][]
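As a rough illustration of what that means in practice, a reprotest run over a Debian source tree can look something like the following (the exact options and virtualisation backend depend on your setup):

# build the source in the current directory twice while varying the environment,
# then compare the two results; "null" builds directly on the host
reprotest auto . -- null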


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

In addition, Jason A. Donenfeld filed a bug (now fixed in the latest alpha version) in the Android issue tracker to report that generateLocaleConfig in Android Gradle Plugin version 8.1.0 generates XML files using non-deterministic ordering, breaking reproducible builds. []


Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In May, a number of changes were made by Holger Levsen:

  • Update the kernel configuration of arm64 nodes to only put required modules in the initrd to save space in the /boot partition. []
  • A huge number of changes to a new tool to document/track Jenkins node maintenance, including adding --fetch, --help, --no-future and --verbose options [][][][] as well as adding a suite of new actions, such as apt-upgrade, command, deploy-git, rmstamp, etc. [][][][], in addition to a significant amount of refactoring [][][][].
  • Issue warnings if apt has updates to install. []
  • Allow Jenkins to run apt-get update in the maintenance job. []
  • Installed bind9-dnsutils on some Ubuntu 18.04 nodes. [][]
  • Fixed the Jenkins shell monitor to correctly deal with little-used directories. []
  • Updated the node health check to warn when apt upgrades are available. []
  • Performed some node maintenance. []

In addition, Vagrant Cascadian added the nocheck, nopgo and nolto build options when building gcc-* and binutils packages [] as well as performed some node maintenance [][]. Furthermore, Roland Clobus updated the openQA configuration to specify longer timeouts and access to the developer mode [] and updated the URL used for reproducible Debian Live images [].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 June, 2023 05:35PM

June 04, 2023

Thorsten Alteholz

My Debian Activities in May 2023

FTP master

This month I accepted 157 and rejected 22 packages. The overall number of packages that got accepted was 160.

Debian LTS

This was the hundred and seventh month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3430-1] cups-filters security update for one CVE
  • [DSA 5407-1] cups-filters security update for one CVE
  • [unstable] upload of cups-filters to fix CVE-2023-24805
  • [#1036548] unblock bug to fix CVE-2023-24805 in bookworm
  • [unstable] upload of sniproxy to fix CVE-2023-25076
  • [DSA 5413-1] sniproxy security update in Bullseye for one CVE
  • [cups] working to fix CVE-2023-32324 in unstable, Bookworm, Bullseye, Buster

The CVEs for cups-filters and cups were embargoed ones, so the work for cups was done in May but the uploads happen in June.

I also did some work on security-master to inject missing dependencies for hugo and gitlab-workhorse.

Last but not least I did some days on frontdesk duties.

Debian ELTS

This month was the fifty-eighth ELTS month.

  • [ELA-852-1] cups-filters security update in Jessie and Stretch for one CVE
  • [ELA-856-1] freetype security update in Jessie and Stretch for two CVEs
  • [ELA-857-1] libtasn1-6 security update in Jessie and Stretch for one CVE
  • [cups] working to fix CVE-2023-32324 in Jessie and Stretch

The CVEs for cups-filters and cups were embargoed ones, so the work for cups was done in May but the uploads happen in June.

Last but not least I did some days on frontdesk duties.

Debian Astro

This month I uploaded some packages to fix RC bugs that were detected by one of the many QA tools:

Thanks a lot to all the hardworking people who run these tools!

Debian Printing

This month I could fix RC bugs in:

This work is generously funded by Freexian!

Debian Mobcom

This month I could fix RC bugs in:

Other stuff

Some other packages also had last minute RC bugs:

I even did an upload of a new package, force-ip-protocol. I finally had enough of people using IPv6 for their hosts while being unable to configure it properly. Now I can force firefox, or whatever software, to only use IPv4. One nuisance settled.

04 June, 2023 10:43AM by alteholz


Debian Brasil

Debian Administrator's Handbook translation workshop on June 13th

The Debian Brazilian Portuguese translation team will hold a translation workshop for the Manual do(a) Administrador(a) Debian (The Debian Administrator's Handbook) on June 13th, starting at 8 PM.

The goal is to show beginners how to collaborate on the translation of this important material, which has existed since 2004 and has been translated into Portuguese over the years. The translation now needs to be updated for Debian version 12 (bookworm), which will be released this month.

The tool used to translate the Handbook is the weblate website, so you can already create your account and access the Debian Handbook project to get familiar with the environment.

The workshop will take place online, and the link to join the jitsi room will be announced in the debl10nptBR group on Telegram and in the #debian-l10n-br IRC channel.

04 June, 2023 10:00AM

June 03, 2023


Ben Hutchings

FOSS activity in May 2023

03 June, 2023 04:50PM

June 02, 2023

Jelmer Vernooij

Porting Python projects to Rust

I’ve recently been working on porting some of my Python code to rust, both for performance reasons, and because of the strong typing in the language. As a fan of Haskell, I also just really enjoy using the language.

Porting any large project to a new language can be a challenge. There is a temptation to do a rewrite from the ground up in idiomatic rust, using all the fancy new features of the language.

Porting in one go

However, this is a bit of a trap:

  • It blocks other work. It can take a long time to finish the rewrite, during which time there is no good place to make other bug fixes/feature changes. If you make the change in the python branch, then you may also have to patch the in-progress rust fork.
  • No immediate return on investment. While the rewrite is happening, all of the investment in it is sunk costs.
  • Throughout the process, you can only run the tests for subsystems that have already been ported. It’s common to find subtle bugs later in code ported early.
  • Understanding existing code, porting it and making it idiomatic rust all at the same time takes more time and post-facto debugging.

Iterative porting

Instead, we’ve found that it works much better to take an iterative approach. One of the hidden gems of rust is the excellent PyO3 crate, which allows creating python bindings for rust code in a way that is several times less verbose and less painful than C or SWIG. Because of rust’s strong ownership model, it’s also really hard to muck up e.g. reference counts when creating Python bindings for rust code.

We port individual functions or classes to rust one at a time, starting with functionality that doesn’t have dependencies on other python code and gradually working our way up the call stack.

Each subsystem of the code is converted to two matching rust crates: one with a port of the code to pure rust, and one with python bindings for the rust code. Generally multiple python modules end up being a single pair of rust crates.

The signatures of the pure Rust code follow rust conventions, but the business logic is mostly ported as-is (just in rust syntax) and the signatures of the python bindings match those of the original python code.

This then allows running the original python tests to verify that the code still behaves the same way. Changes can also immediately land on the main branch.

A subsequent step is usually to refactor the rust code to be more idiomatic - all the while keeping the tests passing. There is also the potential to e.g. switch to using external rust crates (with perhaps subtly different behaviour), or drop functionality altogether.

At some point, we will also port the tests from python to rust, and potentially drop the python bindings - once all the callers have been converted to rust.

Example

For example, imagine I have a Python module janitor/mail_filter.py with this function:

def parse_plain_text_body(text):
    lines = text.splitlines()

    for i, line in enumerate(lines):
        if line == 'Reply to this email directly or view it on GitHub:':
            return lines[i + 1].split('#')[0]
        if (line == 'For more details, see:'
                and lines[i + 1].startswith('https://code.launchpad.net/')):
            return lines[i + 1]
        try:
            (field, value) = line.split(':', 1)
        except ValueError:
            continue
        if field.lower() == 'merge request url':
            return value.strip()
    return None

Porting this to rust naively (in a crate I’ve called “mailfilter”) it might look something like this:

pub fn parse_plain_text_body(text: &str) -> Option<String> {
    let lines: Vec<&str> = text.lines().collect();

    for (i, line) in lines.iter().enumerate() {
        if line == &"Reply to this email directly or view it on GitHub:" {
            return Some(lines[i + 1].split('#').next().unwrap().to_string());
        }
        if line == &"For more details, see:"
            && lines[i + 1].starts_with("https://code.launchpad.net/")
        {
            return Some(lines[i + 1].to_string());
        }
        if let Some((field, value)) = line.split_once(':') {
            if field.to_lowercase() == "merge request url" {
                return Some(value.trim().to_string());
            }
        }
    }
    None
}

Bindings are created in a crate called mailfilter-py, which looks like this:

use pyo3::prelude::*;

#[pyfunction]
fn parse_plain_text_body(text: &str) -> Option<String> {
    janitor_mail_filter::parse_plain_text_body(text)
}

#[pymodule]
pub fn _mail_filter(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(parse_plain_text_body, m)?)?;

    Ok(())
}

The metadata for the crates is what you’d expect. mailfilter-py uses PyO3 and depends on mailfilter.

[package]
name = "mailfilter-py"
version = "0.0.0"
authors = ["Jelmer Vernooij <jelmer@jelmer.uk>"]
edition = "2018"

[lib]
crate-type = ["cdylib"]

[dependencies]
janitor-mail-filter = { path = "../mailfilter" }
pyo3 = { version = ">=0.14", features = ["extension-module"]}

I use python-setuptools-rust to get the python ecosystem to build the python bindings. Here is what setup.py looks like:

#!/usr/bin/python3
from setuptools import setup
from setuptools_rust import RustExtension, Binding

setup(
    rust_extensions=[RustExtension(
        "janitor._mailfilter", "crates/mailfilter-py/Cargo.toml",
        binding=Binding.PyO3)],
)

And of course, setuptools-rust needs to be listed as a setup requirement in pyproject.toml or setup.cfg.
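With that in place, building the extension and running the existing test suite is just the usual Python workflow (this assumes a rust toolchain is installed; the test invocation is whatever your project already uses):

pip install setuptools-rust
# compiles the rust crates and installs the package in editable mode
pip install -e .
# the original python tests should still pass against the rust implementation
python -m pytest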

After that, we can replace the original python code with a simple import and verify that the tests still run:

from ._mailfilter import parse_plain_text_body

Of course, not all bindings are as simple as this. Iterators in particular are more complicated, as is code that has a loose idea of ownership in python. But I’ve found that the time investment is usually well worth the ability to land changes on the development head early and often.

I’d be curious to hear if people have had success with other approaches to porting Python code to Rust. If you have, please leave a comment.

02 June, 2023 05:00PM by Jelmer Vernooij


Matt Brown

Calling time on DNSSEC: The costs exceed the benefits

I’m calling time on DNSSEC. Last week, prompted by a change in my DNS hosting setup, I began removing it from the few personal zones I had signed. Then this Monday the .nz ccTLD experienced a multi-day availability incident triggered by the annual DNSSEC key rotation process. This incident broke several of my unsigned zones, which led me to say very unkind things about DNSSEC on Mastodon and now I feel compelled to more completely explain my thinking:

For almost all domains and use-cases, the costs and risks of deploying DNSSEC outweigh the benefits it provides. Don’t bother signing your zones.

The .nz incident, while topical, is not the motivation or the trigger for this conclusion. Had it been a novel incident, it would still have been annoying, but novel incidents are how we learn so I have a small tolerance for them. The problem with DNSSEC is precisely that this incident was not novel, just the latest in a long and growing list.

It’s a clear pattern. DNSSEC is complex and risky to deploy. Choosing to sign your zone will almost inevitably mean that you will experience lower availability for your domain over time than if you leave it unsigned. Even if you have a team of DNS experts maintaining your zone and DNS infrastructure, the risk of routine operational tasks triggering a loss of availability (unrelated to any attempted attacks that DNSSEC may thwart) is very high - almost guaranteed to occur. Worse, because of the nature of DNS and DNSSEC these incidents will tend to be prolonged and out of your control to remediate in a timely fashion.

The only benefit you get in return for accepting this almost certain reduction in availability is trust in the integrity of the DNS data a subset of your users (those who validate DNSSEC) receive. Trusted DNS data that is then used to communicate across an untrusted network layer. An untrusted network layer which you are almost certainly protecting with TLS which provides a more comprehensive and trustworthy set of security guarantees than DNSSEC is capable of, and provides those guarantees to all your users regardless of whether they are validating DNSSEC or not.

In summary, in our modern world where TLS is ubiquitous, DNSSEC provides only a thin layer of redundant protection on top of the comprehensive guarantees provided by TLS, but adds significant operational complexity, cost and a high likelihood of lowered availability.

In an ideal world, where the deployment cost of DNSSEC and the risk of DNSSEC-induced outages were both low, it would absolutely be desirable to have that redundancy in our layers of protection. In the real world, given the DNSSEC protocol we have today, the choice to avoid its complexity and rely on TLS alone is not at all painful or risky to make as the operator of an online service. In fact, it’s the prudent choice that will result in better overall security outcomes for your users.

Ignore DNSSEC and invest the time and resources you would have spent deploying it improving your TLS key and certificate management.

Ironically, the one use-case where I think a valid counter-argument for this position can be made is TLDs (including ccTLDs such as .nz). Despite its many failings, DNSSEC is an Internet Standard, and as infrastructure providers, TLDs have an obligation to enable its use. Unfortunately this means that everyone has to bear the costs, complexities and availability risks that DNSSEC burdens these operators with. We can’t avoid that fact, but we can avoid creating further costs, complexities and risks by choosing not to deploy DNSSEC on the rest of our non-TLD zones.

But DNSSEC will save us from the evil CA ecosystem!

Historically, the strongest motivation for DNSSEC has not been the direct security benefits themselves (which as explained above are minimal compared to what TLS provides), but in the new capabilities and use-cases that could be enabled if DNS were able to provide integrity and trusted data to applications.

Specifically, the promise of DNS-based Authentication of Named Entities (DANE) is that with DNSSEC we can be free of the X.509 certificate authority ecosystem and along with it the expensive certificate issuance racket and dubious trust properties that have long been its most distinguishing features.

Ten years ago this was an extremely compelling proposition with significant potential to improve the Internet. That potential has gone unfulfilled.

Instead of maturing as deployments progressed and associated operational experience was gained, DNSSEC has been beset by the discovery of issue after issue. Each of these has necessitated further changes and additions to the protocol, increasing complexity and deployment cost. For many zones, including significant zones like google.com (where I led the attempt to evaluate and deploy DNSSEC in the mid 2010s), it is simply infeasible to deploy the protocol at all, let alone in a reliable and dependable manner.

While DNSSEC maturation and deployment has been languishing, the TLS ecosystem has been steadily and impressively improving. Thanks to the efforts of many individuals and companies, although still founded on the use of a set of root certificate authorities, the TLS and CA ecosystem today features transparency, validation and multi-party accountability that comprehensively build trust in the ability to depend and rely upon the security guarantees that TLS provides. When you use TLS today, you benefit from:

  • Free/cheap issuance from a number of different certificate authorities.
  • Regular, automated issuance/renewal via the ACME protocol.
  • Visibility into who has issued certificates for your domain and when through Certificate Transparency logs.
  • Confidence that certificates issued without certificate transparency (and therefore lacking an SCT) will not be accepted by the leading modern browsers.
  • The use of modern cryptographic protocols as a baseline, with a plausible and compelling story for how these can be steadily and promptly updated over time.

DNSSEC with DANE can match the TLS ecosystem on the first benefit (up front price) and perhaps makes the second benefit moot, but has no ability to match any of the other transparency and accountability measures that today’s TLS ecosystem offers. If your ZSK is stolen, or a parent zone is compromised or coerced, validly signed TLSA records for a forged certificate can be produced and spoofed to users under attack with minimal chances of detection.

Finally, in terms of overall trust in the roots of the system, the CA/Browser forum requirements continue to improve the accountability and transparency of TLS certificate authorities, significantly reducing the ability for any single actor (say a nefarious government) to subvert the system. The DNS root has a well established transparent multi-party system for establishing trust in the DNSSEC root itself, but at the TLD level, almost intentionally thanks to the hierarchical nature of DNS, DNSSEC has multiple single points of control (or coercion) which exist outside of any formal system of transparency or accountability.

We’ve moved from DANE being a potential improvement in security over TLS when it was first proposed, to being a definite regression from what TLS provides today.

That’s not to say that TLS is perfect, but given where we’re at, we’ll get a better security return from further investment and improvements in the TLS ecosystem than we will from trying to fix DNSSEC.

But TLS is not ubiquitous for non-HTTP applications

The arguments above are most compelling when applied to the web-based HTTP-oriented ecosystem which has driven most of the TLS improvements we’ve seen to date. Non-HTTP protocols are lagging in adoption of many of the improvements and best practices TLS has on the web. Some claim this need to provide a solution for non-HTTP, non-web applications provides a motivation to continue pushing DNSSEC deployment.

I disagree, I think it provides a motivation to instead double-down on moving those applications to TLS. TLS as the new TCP.

The problem is that the costs of deploying and operating DNSSEC are largely fixed regardless of how many protocols you are intending to protect with it, and worse, the negative side-effects of DNSSEC deployment can and will easily spill over to affect zones and protocols that don’t want or need DNSSEC’s protection. To justify continued DNSSEC deployment and operation in this context means using a smaller set of benefits (just for the non-HTTP applications) to justify the already high costs of deploying DNSSEC itself, plus the cost of the risk that DNSSEC poses to the reliability of your websites. I don’t see how that equation can ever balance, particularly when you evaluate it against the much lower costs of just turning on TLS for the rest of your non-HTTP protocols instead of deploying DNSSEC. MTA-STS is a worked example of how this can be achieved.

If you’re still not convinced, consider that even DNS itself is considering moving to TLS (via DoT and DoH) in order to add the confidentiality/privacy attributes the protocol currently lacks. I’m not a huge fan of the latency implications of these approaches, but the ongoing discussion shows that clever solutions and mitigations for that may exist.

DoT/DoH solve distinct problems from DNSSEC and in principle should be used in combination with it, but in a world where DNS itself is relying on TLS and therefore has eliminated the majority of spoofing and cache poisoning attacks through DoT/DoH deployment the benefit side of the DNSSEC equation gets smaller and smaller still while the costs remain the same.

OK, but better software or more careful operations can reduce DNSSEC’s cost

Some see the current DNSSEC costs simply as teething problems that will reduce as the software and tooling matures to provide more automation of the risky processes and operational teams learn from their mistakes or opt to simply transfer the risk by outsourcing the management and complexity to larger providers to take care of.

I don’t find these arguments compelling. We’ve already had 15+ years to develop improved software for DNSSEC without success. What’s changed that we should expect a better outcome this year or next? Nothing.

Even if we did have better software or outsourced operations, the approach is still only hiding the costs behind automation or transferring the risk to another organisation. That may appear to work in the short-term, but eventually when the time comes to upgrade the software, migrate between providers or change registrars the debt will come due and incidents will occur.

The problem is the complexity of the protocol itself. No amount of software improvement or outsourcing addresses that.

After 15+ years of trying, I think it’s worth considering that combining cryptography, caching and distributed consensus, some of the most fundamental and complex computer science problems, into a slow-moving and hard to evolve low-level infrastructure protocol while appropriately balancing security, performance and reliability appears to be beyond our collective ability.

That doesn’t have to be the end of the world, the improvements achieved in the TLS ecosystem over the same time frame provide a positive counter example - perhaps DNSSEC is simply focusing our attention at the wrong layer of the stack.

Ideally secure DNS data would be something we could have, but if the complexity of DNSSEC is the price we have to pay to achieve it, I’m out. I would rather opt to remain with the simpler yet insecure DNS protocol and compensate for its shortcomings at higher transport or application layers where experience shows we are able to more rapidly improve and develop our security capabilities.

Summing up

For the vast majority of domains and use-cases there is simply no net benefit to deploying DNSSEC in 2023. I’d even go so far as to say that if you’ve already signed your zones, you should (carefully) move them back to being unsigned - you’ll reduce the complexity of your operating environment and lower your risk of availability loss triggered by DNS. Your users will thank you.

The threats that DNSSEC defends against are already amply defended by the now mature and still improving TLS ecosystem at the application layer, and investing in further improvements here carries far more return than deployment of DNSSEC.

For TLDs, like .nz whose outage triggered this post, DNSSEC is not going anywhere and investment in mitigating its complexities and risks is an unfortunate burden that must be shouldered. While the full incident report of what went wrong with .nz is not yet available, the interim report already hints at some useful insights. It is important that InternetNZ publishes a full and comprehensive review so that the full set of learnings and improvements this incident can provide can be fully realised by .nz and other TLD operators stuck with the unenviable task of trying to safely operate DNSSEC.

Postscript

After taking a few days to draft and edit this post, I’ve just stumbled across a presentation from the well-respected Geoff Huston at last week’s RIPE86 meeting. I’ve only had time to skim the slides (video here) - they don’t seem to disagree with my thinking regarding the futility of the current state of DNSSEC, but also contain some interesting ideas for what it might take for DNSSEC to become a compelling proposition.

Probably worth a read/watch!

02 June, 2023 12:20AM

June 01, 2023


Gunnar Wolf

Cheatable e-voting booths in Coahuila, Mexico, detected at the last minute

It’s been a very long time since I last blogged about e-voting, although some might remember it’s been a topic I have long worked on; in particular, it was the topic of my 2018 Masters thesis, plus some five articles I wrote in the 2010-2018 period. After the thesis, I have to admit I got weary of the subject, and haven’t pursued it anymore.

So, I was saddened and dismayed to read that –once again, as has happened before– the electoral authorities would set up a pilot e-voting program in the local elections this year, which would probably lead to a wider deployment next year, in the Federal elections.

This year (…this week!), two States will have elections for their Governors and local Legislative branches: Coahuila (North, bordering with Texas) and Mexico (Center, surrounding Mexico City). They are very different states, demographically and in their development level.

Pilot programs with e-voting booths have been seen in four states TTBOMK in the last ~15 years: Jalisco (West), Mexico City, the State of Mexico and Coahuila. In Coahuila, several universities have teamed up with the Electoral Institute to develop their e-voting booth; a good thing that I can say about how this has been done in my country is that, at least, the Electoral Institute is providing its own implementations, instead of sourcing from e-booth vendors (which have their own long, tragic history, mostly in the USA, but also in other places). Not only that: they are subjecting the machines to audit processes. Not open audit processes, as demanded by academics in the field, but nevertheless external, rigorous audit processes.

But still, what I and other colleagues with a Computer Security background oppose is not a specific e-voting implementation, but the adoption of e-voting in general. If for nothing else, because of the extra complexity it brings, because of the many more checks that have to be put in place, and… because as programmers, we are aware of the ease with which bugs can creep into any given implementation… both honest bugs (mistakes) and, much worse, bugs that are secretly requested and paid for.

Anyway, leave this bit aside for a while. I’m not implying there was any ill intent in the design or implementation of these e-voting booths.

Two days ago, the Electoral Institute announced there was an important bug found in the Coahuila implementation. The bug consists, as far as I can understand from the information reported in newspapers, in:

  • Each voter approaches their electoral authorities, who verify their identity and their authorization to vote in that precinct
  • The voter is given an activation code, with which they go to the voting booth
  • The booth is activated and enables each voter to cast a vote only once

The problem was that the activation codes remained active after voting, so a voter could vote multiple times.

This seems like an easy problem to patch; it most likely is. However, given the inability to patch, properly test, and deploy the fix to all of the booths in a timely manner (even though only 74 e-voting booths were to be deployed for this pilot), the whole pilot for Coahuila was scrapped; Mexico State is voting with a different implementation that is not affected by this issue.

This illustrates very well one of the main issues with e-voting technology: It requires a team of domain-specific experts to perform a highly specialized task (code and physical audits). I am happy and proud to say that part of the auditing experts were the professors of the Information Security Masters program of ESIME Culhuacán (the Masters program I was part of).

The reaction by the Electoral Institute was correct. As far as I understand, there is no evidence suggesting this bug could have been purposefully built, but it’s not possible to rule that out either.

A traditional, paper-and-ink-based process is not only immune to attacks (or mistakes!) based on code such as this one, but can be audited by anybody. And that is, I believe, a fundamental property of democracy: ensuring the process is done right is not limited to a handful of domain experts. Not only that: In Mexico, I am sure there are hundreds of very proficient developers that could perform a code and equipment audit such as this one, but the audits are open by invitation only, so being an expert is not enough to get clearance to do this.

In a democracy, the whole process should be observable and verifiable by anybody interested in doing so.

Some links about this news:

01 June, 2023 04:22PM


Holger Levsen

20230601-developers-reference-translations

src:developers-reference translations wanted

I've just uploaded developers-reference 12.19, bringing the German translation status back to 100% complete, thanks to Carsten Schoenert. Some other translations, however, could use some updates:

$ make status
for l in de fr it ja ru; do     \
    if [ -d source/locales/$l/LC_MESSAGES ] ; then  \
        echo -n "Stats for $l: " ;          \
        msgcat --use-first source/locales/$l/LC_MESSAGES/*.po | msgfmt --statistics - 2>&1 ; \
    fi ;                            \
done
Stats for de: 1374 translated messages.
Stats for fr: 1286 translated messages, 39 fuzzy translations, 49 untranslated messages.
Stats for it: 869 translated messages, 46 fuzzy translations, 459 untranslated messages.
Stats for ja: 891 translated messages, 26 fuzzy translations, 457 untranslated messages.
Stats for ru: 870 translated messages, 44 fuzzy translations, 460 untranslated messages.
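If you'd like to help with one of these, the workflow is roughly the following (I believe the repository lives on Salsa under debian/developers-reference; adjust the language code to the one you are updating):

git clone https://salsa.debian.org/debian/developers-reference.git
cd developers-reference
# edit the .po files for your language, e.g. source/locales/fr/LC_MESSAGES/*.po,
# with your favourite PO editor, then re-check the statistics
make status
# finally, submit your changes as a merge request or send a patch to the BTS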

01 June, 2023 01:39PM

Russell Coker

Do Desktop Computers Make Sense?

Laptop vs Desktop Price

Currently the smaller and cheaper USB-C docks start at about $25 and Dell has a new Vostro with 8G of RAM and 2*USB-C ports for $788. That gives a bit over $800 for a laptop and dock vs $795 for the cheapest Dell desktop which also has 8G of RAM. For every way of buying laptops and desktops (EG buying from Officeworks, buying on ebay, etc) the prices for laptops and desktops seem very similar. For all those comparisons the desktop will typically have a faster CPU and more options for PCIe cards, larger storage, etc. But if you don’t want to expand storage beyond the affordable 4TB NVMe/SSD devices, don’t need to add PCIe cards, and don’t need much CPU power then a laptop will do well. For the vast majority of the computer work I do my Thinkpad X1 Carbon Gen1 (from 2012) had plenty of CPU power.

If someone who’s not an expert in PC hardware were to buy a computer of a given age then laptops probably aren’t more expensive than desktops, even disregarding the fact that a laptop works without the need to purchase a monitor, a keyboard, or a mouse. I can get regular desktop PCs for almost nothing and get parts to upgrade them very cheaply but most people can’t do that. I can also get a decent second-hand laptop and USB-C dock for well under $400.

Servers and Gaming Systems

For people doing serious programming or other compute or IO intensive tasks some variation on the server theme is the best option. That may be something more like the servers used by the r/homelab people than the corporate servers, or it might be something in the cloud, but a server is a server. If you are going to have a home server that’s a tower PC then it makes sense to put a monitor on it and use it as a workstation. If your server makes so much noise that you can’t spend much time in the same room or if it’s hosted elsewhere then using a laptop to access it makes sense.

Desktop computers for PC gaming make sense as no-one seems to be making laptops with moderately powerful GPUs. The most powerful GPUs draw 150W, which is more than most laptop PSUs can supply, and even if a laptop PSU could supply that much there would be the issue of cooling. The Steam Deck [1] and the Nintendo Switch [2] can both work with USB-C docks. The PlayStation 5 [3] has a 350W PSU and doesn't support video over USB-C. The Steam Deck can do 8K resolution at 60Hz or 4K at 120Hz but presumably the newer Steam games will need a desktop PC with a more powerful GPU to properly use such resolutions.

For people who want the best FPS rates on graphics intensive games it could make sense to have a tower PC. Also a laptop that's run at high CPU/GPU use for a long time will tend to have its vents clogged by dust and possibly have the cooling fan wear out.

Monitor Resolution

Laptop support for a single 4K monitor became common in 2012 with the release of Intel's Ivy Bridge mobile CPUs. My own experience of setting up 4K monitors for a Linux desktop in 2019 was that it was unreasonably painful; the soon to be released Debian/Bookworm will finally make things work nicely for 4K monitors with KDE on X11. So laptop hardware has handled the case of a single high resolution monitor since before such monitors were cheap or common and before software supported it well. Of course at that time you had to use either a proprietary dock or a mini-DisplayPort to HDMI adaptor to get 4K working. But that was still easier than getting a PCIe video card supporting 4K resolution, which according to spec sheets wasn't well supported by affordable cards in 2017.

Since USB-C became a standard feature in laptops in about 2017, support for more monitors than most people would want via a USB-C dock has been standard. My Thinkpad X1 Carbon Gen5, which was released in 2017, will support 2*FullHD monitors plus a 4K monitor via a USB-C dock; I suspect it would do at least 2*4K monitors but I haven't had a chance to test. Cheap USB-C docks supporting this sort of thing have only become common in the last year or so.

How Many Computers per Home

Among middle class Australians it’s common to have multiple desktop PCs per household. One for each child who’s over the age of about 13 and one for the parents seems to be reasonably common. Students in the later years of high-school and university students are often compelled to have laptops so having the number of laptops plus the number of desktops be larger than the population of the house probably isn’t uncommon even among people who aren’t really into computers. As an aside it’s probably common among people who read my blog to have 2 desktops, a laptop, and a cloud server for their own personal use. But even among people who don’t do that sort of thing having computers outnumber people in a home is probably common.

A large portion of computer users can do everything they need on a laptop. For gamers the graphics intensive games often run well on a console and that's probably the most effective way to play them. Of course the fact that there is "RGB RAM" (RAM with Red, Green, and Blue LEDs to light up) along with a lot of other wild products sold to gamers suggests that gaming PCs are not about what runs the game most effectively and that an art/craft project with the PC is more important than actually playing games.

Instead of having one desktop PC per bedroom and laptops for school/university as well, it would make more sense to have a laptop per person and have a USB-C dock and monitor in each bedroom and a USB-C dock connected to a large screen TV in the lounge. This gives plenty of flexibility for moving around to do work and sharing what's on your computer with other people. It also allows taking a work computer home and using your own monitor for work, having a friend bring their laptop to your home to work on something together, etc.

For most people desktop computers don’t make sense. While I think that convergence of phones with laptops and desktops is the way of the future [4] for most people having laptops take over all functions of desktops is the best option today.

01 June, 2023 12:38PM by etbe

Jamie McClelland

Enough about the AI Apocalypse Already

After watching Democracy Now’s segment on artificial intelligence I started to wonder - am I out of step on this topic?

When people claim artificial intelligence will surpass human intelligence and thus threaten humanity with extinction, they seem to be referring specifically to advances made with large language models.

As I understand them, large language models are probability machines that have ingested massive amounts of text scraped from the Internet. They answer questions based on the probability of one series of words (their answer) following another series of words (the question).

It seems like a stretch to call this intelligence, but if we accept that definition then it follows that this kind of intelligence is nothing remotely like human intelligence, which makes the claim that it will surpass human intelligence confusing. Didn't this kind of machine learning surpass us decades ago?

Or, when we say "surpass", does that simply refer to fooling people into thinking an AI machine is a human via conversation? That is an important milestone, but I'm not ready to accept the Turing test as proof of equal intelligence.

Furthermore, large language models “hallucinate” and also reflect the biases of their training data. The word “hallucinate” seems like a euphemism, as if it could be corrected with the right medication when in fact it seems hard to avoid when your strategy is to correlate words based on probability. But even if you could solve the “here is a completely wrong answer presented with sociopathic confidence” problem, reflecting the biases of your data sources seems fairly intractable. In what world would a system with built-in bias be considered on the brink of surpassing human intelligence?

The danger from LLMs seems to be their ability to convince people that their answers are correct, including their patently wrong and/or biased answers.

Why do people think they are giving correct answers? Oh right… terrifying right-wing billionaires with terrifying agendas have been claiming AI will exceed human intelligence and threaten humanity, and every time they sign a hyperbolic statement they get front-page mainstream coverage. And even progressive news outlets are spreading this narrative with minimal space for contrary opinions (thank you Tawana Petty from the Algorithmic Justice League for providing the only glimpse of reason in the segment).

The belief that artificial intelligence is or will soon become omnipotent has real world harms today: specifically it creates the misperception that current LLMs are accurate, which paves the way for greater adoption among police forces, social service agencies, medical facilities and other places where racial and economic biases have life and death consequences.

When the CEO of OpenAI calls the technology dangerous and in need of regulation, he gets both free advertising promoting the power and supposed accuracy of his product and the possibility of freezing further developments in the field that might challenge OpenAI’s current dominance.

The real threat to humanity is not AI, it's massive inequality and the use of tactics ranging from mundane bureaucracy to deadly force and incarceration to segregate the affluent from the growing number of people unable to make ends meet. We have spent decades training bureaucrats, judges and cops to robotically follow biased laws to maintain this order without compassion or empathy. Replacing them with AI would make things worse and should be stopped. But, let's be clear, the narrative that AI is poised to surpass human intelligence and make humanity extinct is a dangerous distraction that runs counter to a much more important story about "the very real and very present exploitative practices of the [companies building AI], who are rapidly centralizing power and increasing social inequities."

Maybe we should talk about that instead?

01 June, 2023 12:27PM


Junichi Uekawa

Already June.

Already June.

01 June, 2023 02:23AM by Junichi Uekawa

Paul Wise

FLOSS Activities May 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian IRC: set topic on new #debian-sa channel
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SIMDe, gensim, sptag work was sponsored. All other work was done on a volunteer basis.

01 June, 2023 12:09AM

May 31, 2023

Arturo Borrero González

Wikimedia Hackathon 2023 Athens summary


During the weekend of 19-23 May 2023 I attended the Wikimedia Hackathon 2023 in Athens, Greece. The event reunited folks interested in the more technological aspects of the Wikimedia movement in person for the first time since 2019. The scope of the hacking projects included (but was not limited to) tools, Wikipedia bots, gadgets, server and network infrastructure, data, and other technical systems.

My role in the event was two-fold: on one hand, I was there because of my role as an SRE in the Wikimedia Cloud Services team, which provides very valuable services to the community, and I was expected to support the technical contributors of the movement who were around. Additionally, and because of that same role, I did some hacking myself too, which was especially boosted given that I generally collaborate on a daily basis with some community members who were present in the hacking room.

The hackathon had a conference-style track, and I ran a session with my coworker Bryan called Past, Present and Future of Wikimedia Cloud Services (Toolforge and friends) (slides), which was very satisfying to deliver given what a friendly space it was. I attended a bunch of other sessions, and all of them were interesting and well presented. The number of ML themes present in the program schedule was exciting. I definitely learned a lot from attending those sessions: how LLMs work, some fascinating applications for them in the Wikimedia space, and some of the industry trends for training and hosting ML models.


Sessions aside, the main purpose of the hackathon was, well, hacking. While I was in the hacking space for more than 12 hours each day, my ability to get things done was greatly reduced by the constant conversations, help requests, and other social interactions with the folks. Don't get me wrong, I embraced that reality with joy, because the social bonding aspect of it is perhaps the main reason why we gathered in person instead of virtually.

That being said, this is a rough list of what I did:

The hackathon also marked the final days of Technical Engagement as an umbrella group for the WMCS and Developer Advocacy teams within the Technology department of the Wikimedia Foundation, because of an internal reorg. We used the chance to reflect on the pleasant time we have had together since 2019 and to take a final picture of the few of us who were at the event in person.


It wasn’t the first Wikimedia Hackathon for me, and I felt the same as in previous iterations: it was a welcoming space, and I was surrounded by friends and nice human beings. I ended the event with a profound feeling of being privileged, because I was part of the Wikimedia movement, and because I was invited to participate in it.

31 May, 2023 12:11PM

Russell Coker

Genesis GV60

I recently test drove a Genesis GV70, but the GV60 [1], which I didn't test drive, is a nicer car.

The GV70 and GV60 are all electric, so they are quiet and perform well. The GV70 has a sun-roof that opens; it was the first car I've driven with one and I decided I don't like it. Having the shade open so I can see the sky while stuck in a traffic jam is nice though. The GV60 has a non-opening sun-roof with a shade that can be retracted; this is a feature I'd really like to have in my next car.

Electric cars as a general rule have good acceleration and are quiet, and the GV70 performed as expected in that regard. It has a head-up display projected on the windscreen showing the speed and the speed limit of the road in question, which is handy. When driving in a car park it showed images from all sides, which is really handy; I wish I had explored that feature more.

The console is all electronic with a TFT display instead of mechanical instruments, but the only significant difference this makes in driving is that when a turn indicator is used the console display shows a video feed for the blind-spot that matches the lane change direction. This is a significant safety feature and will reduce the incidence of collisions. But the capabilities of the hardware seem underutilised; hopefully they will release a software update at some future time to do more with it.

The most significant benefit of the GV60 over the GV70 is that it has cameras instead of mirrors at the sides of the car. This reduces drag and also removes the need to adjust mirrors to match the height of the driver. Also for driver instruction the instructor and learner get to see the same view. A logical development of such cars is an expansion pack for instruction that has displays in the passenger seat to show the instructor the same instrument view as the driver sees.

The minimum list driveaway price for the GV60 is $117,171.50 and for the GV70 it is $138,119.89 – both of which are more than I’m prepared to pay for a car. The GV60 apparently can be started by fingerprint which seems like a bad idea given the poor security of fingerprint sensors, but as regular car keys tend not to be too difficult to work around it probably doesn’t matter. The Genesis web site makes it difficult to find the ranges of electric cars which is surprising. A Google search suggests that the GV60 can do 466Km and the GV70 can do 410Km which are both reasonable numbers and nothing to be ashamed of.

The GV70 was a fun car to drive and the GV60 looks like it would be even better. I recommend that everyone who likes technology take one for a test drive, but for my own use I’m looking for something that costs less than half as much.

31 May, 2023 11:03AM by etbe

Russ Allbery

Review: Night Watch

Review: Night Watch, by Terry Pratchett

Series: Discworld #29
Publisher: Harper
Copyright: November 2002
Printing: August 2014
ISBN: 0-06-230740-1
Format: Mass market
Pages: 451

Night Watch is the 29th Discworld novel and the sixth Watch novel. I would really like to tell people they could start here if they wanted to, for reasons that I will get into in a moment, but I think I would be doing you a disservice. The emotional heft added by having read the previous Watch novels and followed Vimes's character evolution is significant.

It's the 25th of May. Vimes is about to become a father. He and several of the other members of the Watch are wearing sprigs of lilac for reasons that Sergeant Colon is quite vehemently uninterested in explaining. A serial killer named Carcer, whom the Watch has been after for weeks, has just murdered an off-duty sergeant. It's a tense and awkward sort of day and Vimes is feeling weird and wistful, remembering the days when he was a copper and not a manager who has to dress up in ceremonial armor and meet with committees.

That may be part of why, when the message comes over the clacks that the Watch have Carcer cornered on the roof of the New Hall of the Unseen University, Vimes responds in person. He's grappling with Carcer on the roof of the University Library in the middle of a magical storm when lightning strikes. When he wakes up, he's in the past, shortly after he joined the Watch and shortly before the events of the 25th of May that the older Watch members so vividly remember and don't talk about.

I have been saying recently in Discworld reviews that it felt like Pratchett was on the verge of a breakout book that's head and shoulders above Discworld prior to that point. This is it. This is that book.

The setup here is masterful: the sprigs of lilac that slowly tell the reader something is going on, the refusal of any of the older Watch members to talk about it, the scene in the graveyard to establish the stakes, the disconcerting fact that Vetinari is wearing a sprig of lilac as well, and the feeling of building tension that matches the growing electrical storm. And Pratchett never gives into the temptation to explain everything and tip his hand prematurely. We know the 25th is coming and something is going to happen, and the reader can put together hints from Vimes's thoughts, but Pratchett lets us guess and sometimes be right and sometimes be wrong. Vimes is trying to change history, which adds another layer of uncertainty and enjoyment as the reader tries to piece together both the true history and the changes. This is a masterful job at a "what if?" story.

And, beneath that, the commentary on policing and government and ethics is astonishingly good. In a review of an earlier Watch novel, I compared Pratchett to Dickens in the way that he focuses on a sort of common-sense morality rather than political theory. That is true here too, but oh that moral analysis is sharp enough to slide into you like a knife. This is not the Vimes that we first met in Guards! Guards!. He has turned his cynical stubbornness into a working theory of policing, and it's subtle and complicated and full of nuance that he only barely knows how to explain. But he knows how to show it to people.

Keep the peace. That was the thing. People often failed to understand what that meant. You'd go to some life-threatening disturbance like a couple of neighbors scrapping in the street over who owned the hedge between their properties, and they'd both be bursting with aggrieved self-righteousness, both yelling, their wives would either be having a private scrap on the side or would have adjourned to a kitchen for a shared pot of tea and a chat, and they all expected you to sort it out.

And they could never understand that it wasn't your job. Sorting it out was a job for a good surveyor and a couple of lawyers, maybe. Your job was to quell the impulse to bang their stupid fat heads together, to ignore the affronted speeches of dodgy self-justification, to get them to stop shouting and to get them off the street. Once that had been achieved, your job was over. You weren't some walking god, dispensing finely tuned natural justice. Your job was simply to bring back peace.

When Vimes is thrown back in time, he has to pick up the role of his own mentor, the person who taught him what policing should be like. His younger self is right there, watching everything he does, and he's desperately afraid he'll screw it up and set a worse example. Make history worse when he's trying to make it better. It's a beautifully well-done bit of tension that uses time travel as the hook to show both how difficult mentorship is and also how irritating one's earlier naive self would be.

He wondered if it was at all possible to give this idiot some lessons in basic politics. That was always the dream, wasn't it? "I wish I'd known then what I know now"? But when you got older you found out that you now wasn't you then. You then was a twerp. You then was what you had to be to start out on the rocky road of becoming you now, and one of the rocky patches on that road was being a twerp.

The backdrop of this story, as advertised by the map at the front of the book, is a revolution of sorts. And the revolution does matter, but not in the obvious way. It creates space and circumstance for some other things to happen that are all about the abuse of policing as a tool of politics rather than Vimes's principle of keeping the peace. I mentioned when reviewing Men at Arms that it was an awkward book to read in the United States in 2020. This book tackles the ethics of policing head-on, in exactly the way that book didn't.

It's also a marvelous bit of competence porn. Somehow over the years, Vimes has become extremely good at what he does, and not just in the obvious cop-walking-a-beat sort of ways. He's become a leader. It's not something he thinks about, even when thrown back in time, but it's something Pratchett can show the reader directly, and have the other characters in the book comment on.

There is so much more that I'd like to say, but so much would be spoilers, and I think Night Watch is more effective when you have the suspense of slowly puzzling out what's going to happen. Pratchett's pacing is exquisite. It's also one of the rare Discworld novels where Pratchett fully commits to a point of view and lets Vimes tell the story. There are a few interludes with other people, but the only other significant protagonist is, quite fittingly, Vetinari. I won't say anything more about that except to note that the relationship between Vimes and Vetinari is one of the best bits of fascinating subtlety in all of Discworld.

I think it's also telling that nothing about Night Watch reads as parody. Sure, there is a nod to Back to the Future in the lightning storm, and it's impossible to write a book about police and street revolutions without making the reader think about Les Miserables, but nothing about this plot matches either of those stories. This is Pratchett telling his own story in his own world, unapologetically, and without trying to wedge it into parody shape, and it is so much the better book for it.

The one quibble I have with the book is that the bits with the Time Monks don't really work. Lu-Tze is annoying and flippant given the emotional stakes of this story, the interludes with him are frustrating and out of step with the rest of the book, and the time travel hand-waving doesn't add much. I see structurally why Pratchett put this in: it gives Vimes (and the reader) a time frame and a deadline, it establishes some of the ground rules and stakes, and it provides a couple of important opportunities for exposition so that the reader doesn't get lost. But it's not good story. The rest of the book is so amazingly good, though, that it doesn't matter (and the framing stories for "what if?" explorations almost never make much sense).

The other thing I have a bit of a quibble with is outside the book. Night Watch, as you may have guessed by now, is the origin of the May 25th Pratchett memes that you will be familiar with if you've spent much time around SFF fandom. But this book is dramatically different from what I was expecting based on the memes. You will, for example, see a lot of people posting "Truth, Justice, Freedom, Reasonably Priced Love, And a Hard-Boiled Egg!", and before reading the book it sounds like a Pratchett-style humorous revolutionary slogan. And I guess it is, sort of, but, well... I have to quote the scene:

"You'd like Freedom, Truth, and Justice, wouldn't you, Comrade Sergeant?" said Reg encouragingly.

"I'd like a hard-boiled egg," said Vimes, shaking the match out.

There was some nervous laughter, but Reg looked offended.

"In the circumstances, Sergeant, I think we should set our sights a little higher—"

"Well, yes, we could," said Vimes, coming down the steps. He glanced at the sheets of papers in front of Reg. The man cared. He really did. And he was serious. He really was. "But...well, Reg, tomorrow the sun will come up again, and I'm pretty sure that whatever happens we won't have found Freedom, and there won't be a whole lot of Justice, and I'm damn sure we won't have found Truth. But it's just possible that I might get a hard-boiled egg."

I think I'm feeling defensive of the heart of this book because it's such an emotional gut punch and says such complicated and nuanced things about politics and ethics (and such deeply cynical things about revolution). But I think if I were to try to represent this story in a meme, it would be the "angels rise up" song, with all the layers of meaning that it gains in this story. I'm still at the point where the lilac sprigs remind me of Sergeant Colon becoming quietly furious at the overstep of someone who wasn't there.

There's one other thing I want to say about that scene: I'm not naturally on Vimes's side of this argument. I think it's important to note that Vimes's attitude throughout this book is profoundly, deeply conservative. The hard-boiled egg captures that perfectly: it's a bit of physical comfort, something you can buy or make, something that's part of the day-to-day wheels of the city that Vimes talks about elsewhere in Night Watch. It's a rejection of revolution, something that Vimes does elsewhere far more explicitly.

Vimes is a cop. He is in some profound sense a defender of the status quo. He doesn't believe things are going to fundamentally change, and it's not clear he would want them to if they did.

And yet. And yet, this is where Pratchett's Dickensian morality comes out. Vimes is a conservative at heart. He's grumpy and cynical and jaded and he doesn't like change. But if you put him in a situation where people are being hurt, he will break every rule and twist every principle to stop it.

He wanted to go home. He wanted it so much that he trembled at the thought. But if the price of that was selling good men to the night, if the price was filling those graves, if the price was not fighting with every trick he knew... then it was too high.

It wasn't a decision that he was making, he knew. It was happening far below the areas of the brain that made decisions. It was something built in. There was no universe, anywhere, where a Sam Vimes would give in on this, because if he did then he wouldn't be Sam Vimes any more.

This is truly exceptional stuff. It is the best Discworld novel I have read, by far. I feel like this was the Watch novel that Pratchett was always trying to write, and he had to write five other novels first to figure out how to write it. And maybe to prepare Discworld readers to read it.

There are a lot of Discworld novels that are great on their own merits, but also it is 100% worth reading all the Watch novels just so that you can read this book.

Followed in publication order by The Wee Free Men and later, thematically, by Thud!.

Rating: 10 out of 10

31 May, 2023 02:51AM

May 30, 2023

Review: The Mimicking of Known Successes

Review: The Mimicking of Known Successes, by Malka Older

Series: Mossa and Pleiti #1
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-86051-2
Format: Kindle
Pages: 169

The Mimicking of Known Successes is a science fiction mystery novella, the first of an expected series. (The second novella is scheduled to be published in February of 2024.)

Mossa is an Investigator, called in after a man disappears from the eastward platform on the 4°63' line. It's an isolated platform, five hours away from Mossa's base, and home to only four residential buildings and a pub. The most likely explanation is that the man jumped, but his behavior before he disappeared doesn't seem consistent with that theory. He was bragging about being from Valdegeld University, talking to anyone who would listen about the important work he was doing — not typically the behavior of someone who is suicidal. Valdegeld is the obvious next stop in the investigation.

Pleiti is a Classics scholar at Valdegeld. She is also Mossa's ex-girlfriend, making her both an obvious and a fraught person to ask for investigative help. Mossa is the last person she expected to be waiting for her on the railcar platform when she returns from a trip to visit her parents.

The Mimicking of Known Successes is mostly a mystery, following Mossa's attempts to untangle the story of what happened to the disappeared man, but as you might have guessed there's a substantial sapphic romance subplot. It's also at least adjacent to Sherlock Holmes: Mossa is brilliant, observant, somewhat monomaniacal, and very bad at human relationships. All of this story except for the prologue is told from Pleiti's perspective as she plays a bit of a Watson role, finding Mossa unreadable, attractive, frustrating, and charming in turn. Following more recent Holmes adaptations, Mossa is portrayed as probably neurodivergent, although the story doesn't attach any specific labels.

I have no strong opinions about this novella. It was fine? There's a mystery with a few twists, there's a sapphic romance of the second chance variety, there's a bit of action and a bit of hurt/comfort after the action, and it all felt comfortably entertaining but kind of predictable. Susan Stepney has a "passes the time" review rating, and while that may be a bit harsh, that's about where I ended up.

The most interesting part of the story is the science fiction setting. We're some indefinite period into the future. Humans have completely messed up Earth to the point of making it uninhabitable. We then took a shot at terraforming Mars and messed that planet up to the point of uninhabitability as well. Now, what's left of humanity (maybe not all of it — the story isn't clear) lives on platforms connected by rail lines high in the atmosphere of Jupiter. (Everyone in the story calls Jupiter "Giant" for reasons that I didn't follow, given that they didn't rename any of its moons.) Pleiti's position as a Classics scholar means that she studies Earth and its now-lost ecosystems, whereas the Modern faculty focus on their new platform life.

This background does become relevant to the mystery, although exactly how is not clear at the start.

I wouldn't call this a very realistic setting. One has to accept that people are living on platforms attached to artificial rings around the solar system's largest planet, walking around in shirt sleeves with only minor technological support due to "atmoshields" of some unspecified capability, where the native atmosphere plays the role of London fog. Everything feels vaguely Edwardian, down to the occasional human porter and message runner, which matches the story concept but seems unlikely as a plausible future culture. I also disbelieve in humanity's ability to do anything to Earth that would make it less inhabitable than the clouds of Jupiter.

That said, the setting is a lot of fun, which is probably more important. It's fun to try to visualize, and it has that slightly off-balance, occasionally surprising feel of science fiction settings where everyone is recognizably human but the things they consider routine and unremarkable are unexpected by the reader.

This novella also has a great title. The Mimicking of Known Successes is simultaneously a reference to a specific plot point from late in the story, a nod to the shape of the romance, and an acknowledgment of the Holmes pastiche, and all of those references work even better once you know what the plot point is. That was nicely done.

This was not very memorable apart from the setting, but it was pleasant enough. I can't say that I'm inspired to pre-order the next novella in this series, but I also wouldn't object to reading it. If you're in the mood for gender-swapped Holmes in an exotic setting, you could do worse.

Followed by The Imposition of Unnecessary Obstacles.

Rating: 6 out of 10

30 May, 2023 02:09AM

May 29, 2023


Shirish Agarwal

Pearls of Luthra, Dahaad, Tetris & Discord.

Pearls of Luthra

Pearls of Luthra is the first book by Brian Jacques that I have read, and I think I am going to be a fan of his work. You have to be wary of this particular book, though. While it is a beautiful book with quite a few illustrations, I have to warn that if you are somebody who feels hungry at the very mention of food, then you will be hungry throughout the book. There isn't a single page where food isn't mentioned, and not just any kind of food, but the kind of food that is geared towards a sweet tooth. So if you fancy tarts or chocolates or anything sweet you will be right at home. The book also touches upon various teas, wines and liquors, but food is where it literally shines. The tale is very much like a Harry Potter adventure but isn't as dark as HP was. In fact, apart from one death and one missing ear, the rest of our heroes and heroines (and there are quite a few) come through unscathed. I don't want to give too much away as it's a book to be treasured.

Dahaad

Dahaad (The Roar) is Sonakshi Sinha's entry into OTT/web series. The stage is set somewhere in North India, while the exploits are based on a real-life person called Cyanide Mohan, who killed 20 women between 2005 and 2009. In the web series, however, the antagonist's crimes take place over a period of 12 years and he has 29 women as his victims. Apart from that it's pretty much a copy of what was done by the person above. It's a melting pot of a series, with quite a few stories enmeshed along with the main one. The main plot of the series is about women from lower economic and caste backgrounds whose families want them to be wed but cannot meet the huge demands for dowry. In such a situation, if a person were to give them a bit of attention, promise marriage, and ask them to steal a bit and run away with him, they will do it. The same modus operandi was used by Cyanide Mohan. He had a car that was not actually his but used it to show off that he was from a richer background, enticed the women, had sex, promised marriage, and the morning-after pill he gave them contained cyanide, which the women would unwittingly consume.

This is also framed by the protagonist (played by Sonakshi Sinha) to her own mother, who is pressuring her to get married as she is getting older. She shows some of the photographs of the victims and says that while the perpetrator is guilty, so is the overall society that puts women in such vulnerable positions. AFAIK, that is still the state of things. In fact, there is a series called 'Indian Matchmaking' that has all the snobbishness you could want. How many people could have a lifestyle like the ones shown in it? Less than 2% of the population. It's actually shows like that which make the whole thing even more precarious 😦

Apart from it, the show also shows prejudice about caste and background. I wouldn’t go much into it as it’s worth seeing and experiencing.

Tetris

Tetris is in many ways a story of greed. It's also the story of a lone inventor who had to wait almost 20 years to profit from his invention. Forbes does a marvelous job of giving some more background and foreground info about Tetris, the inventor, and the producer who went on to strike it rich. The movie also shows how copyright misrepresentation happens but does nothing to address it. I could talk a whole lot more, but it's better to see the movie and draw your own conclusions. For me it was 4/5.

Discord

Discord became Discord 2.0 and is now a blank to me. A blank page. Can't do anything. At first I thought it was a bug and waited for a few days, as sometimes web services do fix themselves. But two weeks on it still wasn't fixed, so I decided to look under the hood. One of the tools in Firefox is the Web Developer Tools (Ctrl+Shift+I), which tells you if an element of a page is not appearing, or at least gives you a hint. It gave me the following:


Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
Content Security Policy: The page’s settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2… (“style-src”). data:44:30
Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118

What is happening is that dom.webaudio.enabled is disabled in my Firefox, so Discord's script fails when it tries to construct an AudioContext.

Then, on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it's labelled NSFW, although to my mind there wasn't anything remotely NSFW about it. They mention using another tool, 'AudioContext Fingerprint Defender', which supposedly fakes or spoofs an ID. As this add-on isn't tracked by the Firefox privacy team, it's hard for me to say anything positive or negative.

So, in the end I stopped using Discord, as the alternative was being tracked by them 😦

Last but not least, I saw this about a week back. Sooner or later this had to happen as Elon tries to make money off Twitter.


29 May, 2023 11:49PM by shirishag75

John Goerzen

Recommendations for Tools for Backing Up and Archiving to Removable Media

I have several TB worth of family photos, videos, and other data. This needs to be backed up — and archived.

Backups and archives are often thought of as similar. And indeed, they may be done with the same tools at the same time. But the goals differ somewhat:

Backups are designed to recover from a disaster that you can fairly rapidly detect.

Archives are designed to survive for many years, protecting against disaster not only impacting the original equipment but also the original person that created them.

Reflecting on this, it implies that while a nice ZFS snapshot-based scheme that supports twice-hourly backups may be fantastic for that purpose, if you think about things like family members being able to access it if you are incapacitated, or accessibility in a few decades’ time, it becomes much less appealing for archives. ZFS doesn’t have the wide software support that NTFS, FAT, UDF, ISO-9660, etc. do.

This post isn’t about the pros and cons of the different storage media, nor is it about the pros and cons of cloud storage for archiving; these conversations can readily be found elsewhere. Let’s assume, for the point of conversation, that we are considering BD-R optical discs as well as external HDDs, both of which are too small to hold the entire backup set.

What would you use for archiving in these circumstances?

Establishing goals

The goals I have are:

  • Archives can be restored using Linux or Windows (even though I don’t use Windows, this requirement will ensure the broadest compatibility in the future)
  • The archival system must be able to accommodate periodic updates consisting of new files, deleted files, moved files, and modified files, without requiring a rewrite of the entire archive dataset
  • Archives can ideally be mounted on any common OS and the component files directly copied off
  • Redundancy must be possible. In the worst case, one could manually copy one drive/disc to another. Ideally, the archiving system would automatically track making n copies of data.
  • While a full restore may be a goal, simply finding one file or one directory may also be a goal. Ideally, an archiving system would be able to quickly tell me which discs/drives contain a given file.
  • Ideally, preserves as much POSIX metadata as possible (hard links, symlinks, modification date, permissions, etc). However, for the archiving case, this is less important than for the backup case, with the possible exception of modification date.
  • Must be easy enough to do, and sufficiently automatable, to allow frequent updates without error-prone or time-consuming manual hassle

I would welcome your ideas for what to use. Below, I’ll highlight different approaches I’ve looked into and how they stack up.

Basic copies of directories

The initial approach might be one of simply copying directories across. This would work well if the data set to be archived is smaller than the archival media. In that case, you could just burn or rsync a new copy with every update and be done. Unfortunately, this is much less convenient with data of the size I'm dealing with; a single burn or rsync is no longer possible. With some datasets, you could manually design some rsyncs to store individual directories on individual devices, but that gets unwieldy fast and isn't scalable.

You could use something like my datapacker program to split the data across multiple discs/drives efficiently. However, updates will be a problem; you’d have to re-burn the entire set to get a consistent copy, or rely on external tools like mtree to reflect deletions. Not very convenient in any case.

So I won’t be using this.

tar or zip

While you can split tar and zip files across multiple media, they have a lot of issues. GNU tar’s incremental mode is clunky and buggy; zip is even worse. tar files can’t be read randomly, making it extremely time-consuming to extract just certain files out of a tar file.

The only thing going for these formats (and especially zip) is the wide compatibility for restoration.

dar

Here we start to get into the more interesting tools. Dar is, in my opinion, one of the best Linux tools that few people know about. Since I first wrote about dar in 2008, it’s added some interesting new features; among them, binary deltas and cloud storage support. So, dar has quite a few interesting features that I make use of in other ways, and could also be quite helpful here:

  • Dar can both read and write files sequentially (streaming, like tar), or with random-access (quick seek to extract a subset without having to read the entire archive)
  • Dar can apply compression to individual files, rather than to the archive as a whole, facilitating both random access and resilience (corruption in one file doesn’t invalidate all subsequent files). Dar also supports numerous compression algorithms including gzip, bzip2, xz, lzo, etc., and can omit compressing already-compressed files.
  • The end of each dar file contains a central directory (dar calls this a catalog). The catalog contains everything necessary to extract individual files from the archive quickly, as well as everything necessary to make a future incremental archive based on this one. Additionally, dar can make and work with “isolated catalogs” — a file containing the catalog only, without data.
  • Dar can split the archive into multiple pieces called slices. This can best be done with fixed-size slices (the --slice and --first-slice options), which let the catalog record the slice number and preserve random access capabilities. With the --execute option, dar can easily wait for a given slice to be burned, etc.
  • Dar normally stores an entire new copy of a modified file, but can optionally store an rdiff binary delta instead. This has the potential to be far smaller (think of a case of modifying metadata for a photo, for instance).

Additionally, dar comes with a dar_manager program. dar_manager makes a database out of dar catalogs (or archives). This can then be used to identify the precise archive containing a particular version of a particular file.

All this combines to make a useful system for archiving. Isolated catalogs are tiny, and it would be easy enough to include the isolated catalogs for the entire set of archives that came before (or even the dar_manager database file) with each new incremental archive. This would make restoration of a particular subset easy.
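
To make that concrete, here is a minimal sketch of how such a run could look. The paths, sizes and archive names are illustrative rather than my actual setup, and the options are used as I understand them from the dar and dar_manager documentation:

# full archive of a photo tree, split into 25G slices, per-file compression
dar -c /archive/photos-full -R /data/photos -s 25G -z

# keep a tiny isolated catalog of that archive
dar -C /archive/photos-full-cat -A /archive/photos-full

# later: an incremental archive relative to the isolated catalog
dar -c /archive/photos-incr1 -R /data/photos -s 25G -z -A /archive/photos-full-cat

# build a dar_manager database so we can ask which archive holds a given file
dar_manager -C /archive/photos.dmd
dar_manager -B /archive/photos.dmd -A /archive/photos-full
dar_manager -B /archive/photos.dmd -A /archive/photos-incr1
dar_manager -B /archive/photos.dmd -f pictures/2019/beach/img_0001.jpg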

The main thing to address with dar is that you do need dar to extract the archive. Every dar release comes with source code and a win64 build. dar also supports building a statically-linked Linux binary. It would therefore be easy to include win64 binary, Linux binary, and source with every archive run. dar is also a part of multiple Linux and BSD distributions, which are archived around the Internet. I think this provides a reasonable future-proofing to make sure dar archives will still be readable in the future.

The other challenge is user ability. While dar is highly portable, it is fundamentally a CLI tool and will require CLI abilities on the part of users. I suspect, though, that I could write up a few pages of instructions to include and make that a reasonably easy process. Not everyone can use a CLI, but I would expect a person that could follow those instructions could be readily-enough found.

One other benefit of dar is that it could easily be used with tapes. The LTO series is liked by various hobbyists, though it could pose formidable obstacles to non-hobbyists trying to access data in future decades. Additionally, since the archive is a big file, it lends itself to working with par2 to provide redundancy against certain amounts of data corruption.
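
As a sketch, generating roughly 10% repair data per slice could look like this (the slice name and redundancy level are arbitrary):

par2 create -r10 photos-full.1.dar
# later, if a disc or drive develops errors:
par2 verify photos-full.1.dar.par2
par2 repair photos-full.1.dar.par2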

git-annex

git-annex is an interesting program that is designed to facilitate managing large sets of data and moving it between repositories. git-annex has particular support for offline archive drives and tracks which drives contain which files.

The idea would be to store the data to be archived in a git-annex repository. Then git-annex commands could generate filesystem trees on the external drives (or trees to be burned to read-only media).

In a post about using git-annex for blu-ray backups, an earlier thread about DVD-Rs was mentioned.

This has a few interesting properties. For one, with due care, the files can be stored on archival media as regular files. There are some different options for how to generate the archives; some of them would place the entire git-annex metadata on each drive/disc. With that arrangement, one could access the individual files without git-annex. With git-annex, one could reconstruct the final (or any intermediate) state of the archive appropriately, handling deletions, renames, etc. You would also easily be able to know where copies of your files are.
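
A rough sketch of that workflow, with made-up remote names and paths, and using the directory special remote's exporttree feature as I understand it from the documentation:

cd /data/photos
git init && git annex init "archive master"
git annex add . && git commit -m "initial import"

# register an external drive as a directory special remote whose tree
# mirrors the working tree (plain files, readable without git-annex)
git annex initremote drive1 type=directory directory=/mnt/drive1 \
    encryption=none exporttree=yes
git annex export main --to drive1    # branch name assumed

# ask which drives hold a particular file
git annex whereis 2019/beach/img_0001.jpg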

The practice is somewhat more challenging. Hundreds of thousands of files — what I would consider a medium-sized archive — can pose some challenges, running into hours-long execution if used in conjunction with the directory special remote (but only minutes-long with a standard git-annex repo).

Ruling out the directory special remote, I had thought I could maybe just work with my files in git-annex directly. However, I ran into some challenges with that approach as well. I am uncomfortable with git-annex mucking about with hard links in my source data. While it does try to preserve timestamps in the source data, these are lost on the clones. I wrote up my best effort to work around all this.

In a forum post, the author of git-annex comments that “I don’t think that CDs/DVDs are a particularly good fit for git-annex, but it seems a couple of users have gotten something working.” The page he references is Managing a large number of files archived on many pieces of read-only medium. Some of that discussion is a bit dated (for instance, the directory special remote has the importtree feature that implements what was being asked for there), but has some interesting tips.

git-annex supplies win64 binaries, and git-annex is included with many distributions as well. So it should be nearly as accessible as dar in the future. Since git-annex would be required to restore a consistent recovery image, similar caveats as with dar apply; CLI experience would be needed, along with some written instructions.

Bacula and BareOS

Although primarily tape-based archivers, these do also nominally support drives and optical media. However, they are much more tailored as backup tools, especially with the ability to pull from multiple machines. They require a database and extensive configuration, making them a poor fit for both the creation and future extractability of this project.

Conclusions

I’m going to spend some more time with dar and git-annex, testing them out, and hope to write some future posts about my experiences.

29 May, 2023 04:57PM by John Goerzen


Jonathan Carter

MiniDebConf Germany 2023

This year I attended Debian Reunion Hamburg (aka MiniDebConf Germany) for the second time. My goal for this MiniDebConf was just to talk to people and make the most of the time I had there. No other specific plans or goals. Despite this simple goal, it was a very productive and successful event for me.

Tuesday 23rd:

  • Arrived much later than planned after about 18h of travel, went to bed early.

Wednesday 24th:

  • Was in a discussion about individual package maintainership.
  • Was in a discussion about the nature of Technical Committee.
  • Co-signed a copy of The Debian System book along with the other DDs
  • Submitted a BoF request for people who are present to bring issues to the attention of the DPL (and to others who are around).
  • Noticed I still had a blog entry draft about this event last year, and posted it just to get it done.
  • Had a stand-up meeting, was nice to see what everyone was working on.
  • Had some event budgeting discussions with Holger.
  • Worked a bit on a talk I haven’t submitted yet called “Current events” (it’s slightly punny, get it?) – it’s still very raw but I’m passively working on it just in case we need a backup talk over the weekend.
  • Had a discussion over lunch with someone who runs their HPC on Debian and learned about Octopus and Pac.
  • TIL (from -python) about pyproject.toml (https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/)
  • Was in a discussion about amd64 build times on our buildds and referred them to DSA. I also e-mailed DSA to ask them if there’s anything we can do to improve build times (since it affects both productivity and motivation).
  • Did some premium cola tasting with andrewsh
  • Had a discussion with Ilu about installers (and luks2 issues in Calamares), accessibility and organisational stuff.

Thursday 25th:

  • Spent quite a chunk of the morning in a usrmerge BoF. I’m very impressed by the amount of reading and research the people in the BoF did to gather all the facts/data; it seems that there is now a way forward that will fix usrmerge in Debian in a way that could work for everyone. An extensive summary/proposal will be posted to debian-devel as soon as possible.
  • Mind was in zombie mode. So I did something easy and upgraded the host running this blog and a few other hosts to bookworm to see what would break.
  • Cheese and wine party, which resulted in a mao party that ran waaaay too late.

Friday 26th:

Saturday 27th:

  • Attended talks:
    • HTTP all the things – The rocky path from the basement into the “cloud”
    • Running Debian on a Smartphone
    • debvm – Ephemeral Virtual Debian Machines
    • Network Configuration on Debian Systems
    • Discussing changes to the Debian key package definition
    • Meet the Release Team
    • Towards collective decision-making and maintenance in the Debian base system
  • Performed some PGP key signing.
  • Edited group photo.

Sunday 28th:

  • Had a BoF where we had an open discussion about things on our collective minds (Debian Therapy Session).
  • Had a session on upcoming legislature in the EU (like CRA).
  • Some web statistics with MrFai.
  • Talked to Marc Haber about a DebConf bid for Heidelberg for DebConf 25.
  • Closing session.

Monday 29th:

  • Started the morning with Helmut and Jochen convincing me to switch from cowbuilder to sbuild (I’m tentatively sold; the huge new plus is that you don’t need schroot anymore, which trashed two of my systems in the past and effectively made sbuild a no-go for me until now). See the sketch after this list.
  • Dealt with more laptop hardware failures, removing a stick of RAM seems to have solved that for now!

That is not good.

  • Dealt with some delegation issues for release team and publicity team.
  • Attended my last stand-up meeting.
  • Wrapped things up, blogged about the event. Probably forgot to list dozens of things in this blog entry. It is fine.
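
A minimal sketch of the kind of schroot-less setup being advocated, as I understand sbuild's unshare backend (package name, paths and the tarball location are illustrative, not verified here):

# create a build chroot tarball where the unshare backend looks by default
mmdebstrap --variant=buildd unstable ~/.cache/sbuild/unstable-amd64.tar.xz
# build a package without schroot and without root
sbuild --chroot-mode=unshare -d unstable hello_2.10-3.dsc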

Tuesday 30th:

  • Didn’t attend the last day, basically a travel day for me.

Thank you to Holger for organising this event yet again!

29 May, 2023 12:48PM by jonathan

Russ Allbery

Book haul

I think this is partial because I also have a stack of other books that I missed recording. At some point, I should stop using this method to track book acquisitions in favor of one of the many programs intended for this purpose, but it's in the long list of other things I really should do one of these days.

As usual, I have already read and reviewed a few of these. I might be getting marginally better at reading books shortly after I acquire them? Maybe?

Steven Brust — Tsalmoth (sff)
C.L. Clark — The Faithless (sff)
Oliver Darkshire — Once Upon a Tome (non-fiction)
Hernan Diaz — Trust (mainstream)
S.B. Divya — Meru (sff)
Kate Elliott — Furious Heaven (sff)
Steven Flavall — Before We Go Live (non-fiction)
R.F. Kuang — Babel (sff)
Laurie Marks — Dancing Jack (sff)
Arkady Martine — Rose/House (sff)
Madeline Miller — Circe (sff)
Jenny Odell — Saving Time (non-fiction)
Malka Older — The Mimicking of Known Successes (sff)
Sabaa Tahir — An Ember in the Ashes (sff)
Emily Tesh — Some Desperate Glory (sff)
Valerie Valdes — Chilling Effect (sff)

29 May, 2023 04:31AM


Louis-Philippe Véronneau

Python 3.11, pip and (breaking) system packages

As we get closer to Debian Bookworm's release, I thought I'd share one change in Python 3.11 that will surely affect many people.

Python 3.11 implements the new PEP 668, Marking Python base environments as “externally managed”1. If you use pip regularly on Debian, it's likely you'll eventually hit the externally-managed-environment error:

error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

With this PEP, Python tools can now distinguish between packages that have been installed by the user with a tool like pip and ones installed using a distribution's package manager, like apt.

This is generally great news: it was previously too easy to break a system by mixing the two types of packages. This PEP will simplify our role as a distribution, as well as improve the overall Python user experience in Debian.
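
Under the hood, the mechanism is just a marker file that distributions ship in the interpreter's standard library directory. On Debian bookworm it should be somewhere like this (path assumed from the PEP and the Debian packaging):

# the presence of this file is what makes pip refuse to touch the system environment
cat /usr/lib/python3.11/EXTERNALLY-MANAGED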

Sadly, it's also likely this change will break some of your scripts, especially CI jobs that (legitimately) install packages via pip alongside system packages. For example, I use the following gitlab-ci snippet to make sure my PRs don't break my build process2:

build:flit:
  stage: build
  script:
  - apt-get update && apt-get install -y flit python3-pip
  - FLIT_ROOT_INSTALL=1 flit install
  - metalfinder --help

With Python 3.11, this snippet will error out, as pip will refuse to install packages alongside the system's. The fix is to tell pip it's OK to "break" your system packages, either using the --break-system-packages parameter, or the PIP_BREAK_SYSTEM_PACKAGES=1 environment variable3.

This, of course, is not something you should be using in production to restore the old behavior! The "proper" way to fix this issue, as the externally-managed-environment error message aptly (har har) informs you, is to use virtual environments.
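
For completeness, a minimal sketch of what the venv-based variant of a CI job like the one above could look like (same package names as my snippet, but treat it as an illustration rather than a drop-in replacement):

# install the venv tooling from Debian, then do all the Python work inside
# a throwaway virtual environment; pip inside a venv is not affected by the
# externally-managed-environment check
apt-get update && apt-get install -y python3-venv
python3 -m venv /tmp/venv
/tmp/venv/bin/pip install flit
FLIT_ROOT_INSTALL=1 /tmp/venv/bin/flit install
/tmp/venv/bin/metalfinder --help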

Happy hacking!


  1. Kudos to our own Matthias Klose, Stefano Rivera and Elana Hashman, who worked on designing and implementing this PEP! 

  2. Which is something that bit me before... You push some changes to your git repository, everything seems fine and all the tests pass, so you merge it and make a new git tag. When the time comes to build and upload this tag to PyPi, you find out some minor thing broke your build system (which you weren't testing) and you have to scramble to make a point-release to fix the issue. Sad! 

  3. Don't go searching for this environment variable in pip's code though, as you won't find it! All of pip's command line options can be passed as env vars using the PIP_<UPPER_LONG_NAME> format. Useful for tools that use pip indirectly, like flit

29 May, 2023 04:00AM by Louis-Philippe Véronneau

May 27, 2023

Dirk Eddelbuettel

RcppArmadillo 0.12.4.0.0 on CRAN: New Upstream Minor

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1074 other packages on CRAN, downloaded 29.3 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 535 times according to Google Scholar.

This release brings a new upstream release 12.4.0 made by Conrad a day or so ago. I prepared the usual release candidate, tested on the over 1000 reverse depends (which sadly takes almost a day on old hardware), found no issues and sent it to CRAN. Where it got tested again and was once again auto-processed smoothly by CRAN within a few hours on a Friday night which is just marvelous. So this time I tweeted about it too.

The release actually has a relatively small set of changes as a second follow-up release in the 12.* series.

Changes in RcppArmadillo version 0.12.4.0.0 (2023-05-26)

  • Upgraded to Armadillo release 12.4.0 (Cortisol Profusion Redux)

    • Added norm2est() for finding fast estimates of matrix 2-norm (spectral norm)

    • Added vecnorm() for obtaining the vector norm of each row or column of a matrix

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like my open-source work, you may consider sponsoring me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 May, 2023 09:35PM

May 26, 2023

Valhalla's Things

Late Victorian Combinations

Posted on May 26, 2023

A woman wearing a white linen combination suit, with a very fitted top, small sleevelets that cover the armpits (to protect the next layers from sweat) and split drawers. The suit buttons up along the front (where it is a bit tight around the bust) and has a line of lace at the neckline and two tucks plus some lace at the legs.

Some time ago, on an early Friday afternoon, our internet connection died. After a reasonable time had passed we called customer service; they told us that they would look into it and then call us back.

On Friday evening we had not heard from them, and I was starting to get worried. At the time in the evening when I would have been relaxing online I grabbed the first Victorian sewing-related book I found on my hard disk and started to read it.

For the record, it wasn’t actually Victorian, it was Margaret J. Blair. System of Sewing and Garment Drafting. from 1904, but I also had available for comparison the earlier and smaller Margaret Blair. System of Garment Drafting. from 1897.

A page from the book showing the top part of a pattern with all construction lines

Anyway, this book had a system to draft a pair of combinations (chemise top + drawers); and months ago I had already tried to draft a pair from another system, but they didn’t really fit and they were dropped low on the priority list, so on a whim I decided to try and draft them again with this new-to-me system.

Around 23:00 in the night the pattern was ready, and I realized that my SO had gone to sleep without waiting for me, as I looked too busy to be interrupted.

The next few days were quite stressful (we didn’t get our internet back until Wednesday) and while I couldn’t work at my day job I didn’t sew as much as I could have done, but by the end of the week I had an almost complete mockup from an old sheet, and could see that it wasn’t great, but it was a good start.

One reason why the mockup took a whole week is that of course I started to sew by machine, but then I wanted flat-felled seams, and felling them by hand is so much neater, isn’t it?

And let me just say, I’m grateful for the fact that I don’t depend on streaming services for media, but I have a healthy mix of DVDs and stuff I had already temporarily downloaded to watch later, because handsewing while being stressed out and not watching something is not really great.

Anyway, the mockup was a bit short on the crotch, but by the time I could try it on and be sure I was invested enough in it that I decided to work around the issue by inserting a strip of lace around the waist.

And then I went back to the pattern to fix it properly, and found out that I had drafted the back of the drawers completely wrong, making a seam shorter rather than longer as it should have been. ooops.

I fixed the pattern, and then decided that YOLO and cut the new version directly on some lightweight linen fabric I had originally planned to use in this project.

The result is still not perfect, but good enough, and I finished it with a very restrained amount of lace at the neckline and hems, wore it one day when the weather was warm (loved the linen on the skin) and it’s ready to be worn again when the weather will be back to being warm (hopefully not too soon).

The last problem was taking pictures of this underwear in a way that preserves the decency (and it even had to be outdoors, for the light!).

This was solved by wearing leggings and a matched long sleeved shirt under the combinations, and then promptly forgetting everything about decency and, well, you can see what happened.

A woman mooning by keeping the back of split drawers open with her hands, but at least there are black leggings under them.

The pattern is, as usual, published on my pattern website as #FreeSoftWear.

And then, I started thinking about knits.

In the late Victorian and Edwardian eras knit underwear was a thing, also thanks to the influence of various aspects of the rational dress movement; reformers such as Gustav Jäger advocated for wool underwear, but mail order catalogues from the era such as https://archive.org/details/cataloguefallwin00macy (starting from page 67) have listings for both cotton and wool ones.

From what I could find, back then they would have been either handknit at home or made to shape on industrial knitting machines; patterns for the former are available online, but the latter would probably require a knitting machine that I don’t currently1 have.

However, this is underwear that is not going to be seen by anybody2, and I believe that by using flat knit fabric one can get a decent functional approximation.

In The Stash I have a few meters of a worked cotton jersey with a pretty comfy feel, and to make a long story short: this happened.

a woman wearing a black cotton jersey combination suit; the front is sewn shut, but the neck is wide and finished with elastic.  The top part is pretty fitted, but becomes baggier around the crotch area and the legs are a comfortable width.

I suspect that the linen one will get worn a lot this summer (linen on the skin. nothing else needs to be said), while the cotton one will be stored away for winter. And then maybe I’ll make a couple more, if I find out that I’m using them enough.


  1. cue ominous music. But first I would need space to actually keep and use it :)↩︎

  2. other than me, my SO, any costuming friend I may happen to change in the presence of, and everybody on the internet in these pictures.↩︎

26 May, 2023 12:00AM

Correspondence Book

Posted on May 26, 2023

A Coptic bound book open to the first page with the title “Book of <space> Correspondence / Volume <space> Years <space>”

I write letters. The kind that are written on paper with a dip pen 1 and ink, stamped and sent through the post, spend a few days or weeks maturing like good wine in a depot somewhere2, and then get delivered to the recipient.

Some of them (mostly cards) are to people who will receive them and thank me via xmpp (that sounds odd, but actually works out nicely), but others are proper letters with long texts that I exchange with penpals.

Most of those are fountain pen frea^Wenthusiasts, so I usually use a different ink each time, and try to vary the paper, and I need to keep track of what I’ve used.

Some time ago, I read a Victorian book3 which recommended keeping a correspondence book to register all mail received and sent, the topics, and whether it had been replied to or otherwise acted upon. I don’t have the mail traffic of a Victorian lady (or even a middle class woman), but this looked like something fun to do, and if I added fields for the inks and paper used it would also have a useful side effect.

A page with writing lines with the title of the field below it: it has a number and then date, sender / recipient (at the ends of the same line), in reply to / replied, ink, paper, pen, topics / notes.

So I headed over to the obvious program anybody would use for these things (XeLaTeX, of course) and quickly designed a page with fields for the basic things I want to record; it was a bit hurried, and I may improve on it the next time I make one, but I expect this one to last me two or three years, and it is good enough.

I’ve decided to make it A6 sized, so that it doesn’t require a lot of space on my busy desktop, and it could be carried inside a portable desktop, if I ever decide to finish the one for which I’ve made a mockup years ago :)

Picture of book open to the correspondent pages: the fields are name, letters sent, letters received, address and notes.

I’ve also added a few pages for the addresses of my correspondents (and an index of the letters I’ve exchanged with them), and a few empty pages for other notes.

Then I’ve used my a6_book.py script to rearrange the A6 pages into signatures and impress them on A4; to reduce later effort I’ve added an option to order the pages in such a way that if I then cut four A4 sheets in half at a time (the limit of my rotary cutter) the signatures are ready to be folded. It’s not the default because it requires that the pages are a multiple of 32 rather than just 16 (and they are padded up with empty pages if they aren’t).

If you’re also interested in making one, here are the files:

the book open to the page of letter two, which is repeated twice.

After printing (an older version where some of the pages are repeated; whoops, but it only happened 4 times, and it’s not a big deal), it was time to bind this into a book.

I’ve opted for Coptic stitch, so that the book will open completely flat and writing in it will be easier. The covers are 2 mm cardboard covered in linen-look bookbinding paper (sadly I no longer have a source for bookbinding cloth made from actual cloth).

The grey cover of the book with the word correspondence, a stylised envelope and a border in blue.

I tried to screenprint a simple design on the cover: the first attempt was unusable (the paper was smaller than the screen, so I couldn’t keep it in the right place and it moved as I was screenprinting); on the second attempt I used some masking tape to keep the paper in place, and the results were a bit better, but I need more practice with the technique.

Finally, I decided that for such a Victorian thing I would use an iron-gall ink, but it’s Rohrer & Klingner Scabiosa, with a purple undertone, because life’s too short to use blue-black ink :D

And now, I’m off to write an actual letter, rather than writing online about things that are related to letter writing.


  1. not a quill! I’m a modern person who uses steel nibs!↩︎

  2. Milano Roserio, I’m looking at you. a month to deliver a postcard from Lombardy to Ticino? not even a letter, which could have hidden contraband, a postcard.↩︎

  3. I think. I’ve looked at some plausible candidates and couldn’t find the source.↩︎

26 May, 2023 12:00AM

May 25, 2023

Dirk Eddelbuettel

qlcal 0.0.6 on CRAN: More updates from QuantLib

The sixth release of the still new-ish qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more.

This release brings updates to a few calendars which happened since the QuantLib 1.30 release, and also updates several of the (few) non-calendaring functions.

Changes in version 0.0.6 (2023-05-24)

  • Several calendars (India, Singapore, South Africa, South Korea) updated with post-QuantLib 1.30 changes (Sebastian Schmidt in #6)

  • Three now-unused scheduled files were removed (Dirk in #7)

  • A number of non-calendaring files used were synchronised with the current QuantLib repo (Dirk in #8)

Last release, we also added a quick little demo using xts to column-bind calendars produced from each of the different US sub-calendars. This is a slightly updated version of the sketch we tooted a few days ago. The output now is

> print(Reduce(cbind, Map(makeHol, cals)))
           LiborImpact NYSE GovernmentBond NERC FederalReserve
2023-01-02        TRUE TRUE           TRUE TRUE           TRUE
2023-01-16        TRUE TRUE           TRUE   NA           TRUE
2023-02-20        TRUE TRUE           TRUE   NA           TRUE
2023-04-07          NA TRUE             NA   NA             NA
2023-05-29        TRUE TRUE           TRUE TRUE           TRUE
2023-06-19        TRUE TRUE           TRUE   NA           TRUE
2023-07-04        TRUE TRUE           TRUE TRUE           TRUE
2023-09-04        TRUE TRUE           TRUE TRUE           TRUE
2023-10-09        TRUE   NA           TRUE   NA           TRUE
2023-11-10        TRUE   NA             NA   NA             NA
2023-11-23        TRUE TRUE           TRUE TRUE           TRUE
2023-12-25        TRUE TRUE           TRUE TRUE           TRUE
> 

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 May, 2023 10:33PM

Jonathan Carter

Upgraded this host to Debian 12 (bookworm)

I upgraded the host running my blog to Debian 12 today. My website has existed in some form since 1997; it changed from pure HTML to a Python CGI script in the early 2000s, and when blogging became big around then, I migrated to WordPress around 2004.

This WordPress instance ran on Ubuntu up until 2010, and then on Debian ever since. Upgrades are just too easy. I did end up hitting one small bug with today’s upgrade though: I run the PHP FastCGI process manager (PHP-FPM) on the Apache MPM event server, and during the upgrade php8.2-fpm somehow wasn’t enabled (contrary to what I would expect). A simple 'a2enconf php8.2-fpm' saved my site again after a (very rare) few minutes of downtime.
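
For reference, re-enabling PHP-FPM on a stock Debian Apache setup boils down to something like this (a sketch assuming the configuration snippet shipped by the php8.2-fpm package keeps its default name):

# enable the PHP-FPM configuration snippet and reload Apache
$ sudo a2enconf php8.2-fpm
$ sudo systemctl reload apache2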

25 May, 2023 10:10AM by jonathan

Bits from Debian

New Debian Developers and Maintainers (March and April 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • James Lu (jlu)
  • Hugh McMaster (hmc)
  • Agathe Porte (gagath)

The following contributors were added as Debian Maintainers in the last two months:

  • Soren Stoutner
  • Matthijs Kooijman
  • Vinay Keshava
  • Jarrah Gosbell
  • Carlos Henrique Lima Melara
  • Cordell Bloor

Congratulations!

25 May, 2023 10:00AM by Jean-Pierre Giraud

May 24, 2023

Jonathan McDowell

RIP Brenda McDowell

My mother died earlier this month. She’d been diagnosed with cancer back in February 2022 and had been through major surgery and a couple of rounds of chemotherapy, so it wasn’t a complete surprise even if it was faster at the end than expected. That doesn’t make it easy, but I’m glad to be able to say that her immediate family were all with her at home at the end.

I was touched by the number of people who turned up, both to the wake and the subsequent funeral ceremony. Mum had done a lot throughout her life and was settled in Newry, and it was nice to see how many folk wanted to pay their respects. It was also lovely to hear from some old school friends who had fond memories of her.

There are many things I could say about her, but I don’t feel that here is the place to do so. My father and brother did excellent jobs at eulogies at the funeral. However, while I blog less about life things than I did in the past, I did not want it to go unmarked here. She was my Mum, I loved her, and I am sad she is gone.

24 May, 2023 07:02PM

Scarlett Gately Moore

KDE Gear 23.04.1 Snaps Released! Snapcraft updates and more.

Kweather Snap

I have completed the 23.04.1 KDE Gear applications release for snaps! With this release comes several new KDE Snaps!

  • Kweather
  • Krecorder
  • Kclock
  • Alligator
  • Ghostwriter
  • Kasts
  • Tokodon

Plus many long-outdated / broken snaps have been updated and/or fixed!

Check them all out here:

https://snapcraft.io/search?q=KDE

I have been busy triaging and squashing bugs regarding snaps on https://bugs.kde.org

Snapcraft:

Updated the kde-neon extension for the newest content pack.

Made a PR for a core22 qmake plugin with tests:

https://github.com/canonical/craft-parts/pull/463

Future work:

Top of my TO-DO list is still PIM. There are many parts, making it more complex. I am working on it though. Qt6/KF6 is making its way to the top of the list as well. KDE Neon has made significant progress here, so I am in the early stages of updating our build scripts to generate our qt6/kf6 content snap.

Thanks for stopping by!

https://gofund.me/2c7b1808 All proceeds go to improving my ability to work. Thanks for your consideration!

24 May, 2023 06:27PM by sgmoore

Jonathan Carter

Debian Reunion MiniDebConf 2022

It wouldn’t be inaccurate to say that I’ve had a lot on my plate in the last few years, and that I have a *huge* backlog of little tasks to finish. Just last week, I finally got to all my keysigning from DebConf22. This week, I’m at MiniDebConf Germany in Hamburg. It’s the second time I’m here! And it’s great already. Last year I drafted a blog entry, but never got around to publishing it. So, in order to mentally tick off yet another thing, here follows a somewhat imperfect (I had to delete a lot of short-hand because I didn’t know what it means anymore), but at least published post about my activities from a year ago.

This week (well, last year) I attended my first ever in-person MiniDebConf and MiniDebCamp in Hamburg, Germany. The last time I was in Germany was 7 years ago for DebConf15 (or at time of publishing, actually, last year… for this same event).

My focus for the week was to work on Debian live related stuff.

In preparation for the week I tried to fix/close as many Calamares bugs as I could, so before the event I worked through the following:

  • File calamares upstream issue #1944 ‘Calamares allows me to select a username of “root”‘ for Debian bug #976617.
  • File calamares upstream issue #1945 ‘Calamares needs support for high DPI’ for Debian bug #992162.
  • Comment on calamares bug #1005212 ‘Calamares installer fails at partitioning disks’ requesting further info.
  • Close calamares bug #1009876 ‘There is no /tmp item in the list during the partitioning step in the debian calamares installer’ – /tmp partitions can be created, not a bug, really just a small UI issue.
  • Close calamares bug #974998 ‘SegFault when clicked on “Create” in manual partitioning’, not reproducible in bullseye.
  • Close calamares bug #971266 ‘Debian fails to start when /home is encrypted during installation’ – this works fine since bullseye.
  • Close calamares bug #976617 ‘Calamares allows me to select a username of “root”‘ – has since been fixed upstream.

Monday to Friday we worked together on various issues, and the weekend was for talks.

On Monday morning, I had a nice discussion with Roland Clobus, who has been working on making Debian live images reproducible. He’s also been working on testing Debian Live on openqa.debian.net. We’re planning on integrating his work so that Debian 12 live images will be reproducible. Automated testing on openqa will be ongoing work; one blocker has been that snapshots.debian.org limits connections after a while, so builds on there start failing fast.

On Monday afternoon, I went ahead and uploaded the latest Calamares hotfix (Calamares 3.2.58.1-1) release that fixes a UI issue on the partitioning screen where it could get stuck. At 15:00 we had a stand-up meeting where we introduced ourselves and talked a bit about our plans. It was great to see how many people could benefit from each other being there. For example, someone wanting to learn packaging, another wanting to improve packaging documentation, another wanting help with packaging something written in Rust, another wanting to improve Rust packaging in general, and lots of overlap when it comes to reproducible builds! I also helped a few people with some of their packaging issues.

On Monday evening, someone in videoteam managed to convince me to put together a loopy loop for this MiniDebConf. There really wasn’t enough time to put together something elaborate, but I put something together based on the previous loopy with some experiments that I’ve been working on for the upcoming DC22 loopy, and we can use this loop to do a call for content for the DC22 loop.

On Tuesday morning I had some chats with urbec and Ilu; on Tuesday afternoon I talked to the MIA team about upcoming removals and did some admin on debian.ch payments for hosting. On Tuesday evening I worked on live image stuff (d-i downloader, download module for dmm).

On Wednesday morning I slept a bit late, then had to deal with some DPL admin/legal things. Wednesday afternoon, more chats with people.

On Thursday: talked to a bunch more people about a lot of issues, got loopy into a reasonable shape, and edited and published the group photo!

On Friday: prepared my talk slides and learned about Brave (https://github.com/bbc/brave). It initially looked like a great compositor for DebConf video stuff (and a possible replacement for OBS), but it turned out it wasn’t really maintained upstream. In the evening we had the Cheese and Wine party, where lots of deliciousness was experienced.

On Saturday, I learned from Felix’s talk that Tensorflow is now in experimental! (And now, in 2023, I checked again and that’s still the case, although it hasn’t made its way into unstable yet; hopefully that improves over the trixie cycle.)

I know most of the people who attended quite well, but it was really nice to also see a bunch of new Debianites that I’ve only seen online before and to properly put some faces to names. We also had a bunch of enthusiastic new contributors and we did some key signing.

24 May, 2023 01:32PM by jonathan

May 23, 2023

Craig Small

Devices with cgroup v2

Docker and other container systems by default restrict access to devices on the host. They used to do this with the device controller in the cgroup v1 system; however, the second version of cgroups removed this controller and the man page says:

Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst

That is just awesome, nothing to see here, go look at the BPF documents if you have cgroup v2.

With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/XX/devices.allow and you were done!

The kernel documentation is not very helpful: sure, it’s something in BPF and has something to do with cgroup BPF specifically, but what does that mean?

There doesn’t seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:

  1. Find what cgroup the programs running in the container belong to
  2. Find what is the eBPF program ID that is attached to our container cgroup
  3. Dump the eBPF program to a text file
  4. Try to interpret the eBPF syntax

The last step is by far the most difficult.

Finding a container’s cgroup

All containers have a short ID and a long ID. When you run the docker ps command, you get the short id. To get the long id you can either use the --no-trunc flag or just guess from the short ID. I usually do the second.

$ docker ps 
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon

So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ Notice that the last directory has “docker-” then the short ID.

If you’re not sure of the exact path, the “/sys/fs/cgroup” part is the cgroup v2 mount point (which can be found with mount -t cgroup2) and the rest is the actual cgroup name. If you know a process running in the container, then the cgroup column in ps will show you.

$ ps -o pid,comm,cgroup 140064
    PID COMMAND         CGROUP
 140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope

Either way, you will have your cgroup path.

eBPF programs and cgroups

Next we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use bpftool. One thing that threw me for a long time: when the tool talks about a program or a PROG ID, it is talking about eBPF programs, not your processes! With that out of the way, let’s find the prog id.

$ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
ID       AttachType      AttachFlags     Name
90       cgroup_device   multi

Our cgroup is attached to the eBPF prog with an ID of 90 and the type of program is cgroup_device.

Dumping the eBPF program

Next, we need to get the actual code that is run every time a process running in the cgroup tries to access a device. The program takes some parameters and returns either a 1 (access allowed) or a 0 (permission denied). Don’t use the file option, as it dumps the program in binary format; the text version is hard enough to understand.

sudo bpftool prog dump xlated id 90 > myebpf.txt

Congratulations! You now have the eBPF program in a human-readable (?) format.

Interpreting the eBPF program

The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You’ll see that the program splits the type (lower 16 bits) and the access (upper 16 bits) out of the first field and then does comparisons on those values. The eBPF has something similar:

   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)

What we find is that once we get past the first few lines filtering the given value that the comparison lines have:

  • r2 is the device type, 1 is block, 2 is character.
  • r3 is the device access; it’s used with r1 for comparisons after masking the relevant bits. mknod, read and write are 1, 2 and 4 respectively (they are bit flags, so they can be combined).
  • r4 is the major number
  • r5 is the minor number

Even for a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, the lines for the command options you added will often be near the end, which makes it easier. For example:

  63: (55) if r2 != 0x2 goto pc+4
  64: (55) if r4 != 0x64 goto pc+3
  65: (55) if r5 != 0x2a goto pc+2
  66: (b4) w0 = 1
  67: (95) exit

This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section, otherwise return 1 (permit). All access modes are permitted, so it doesn’t check them.
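
For reference, a container with roughly this filter could have been started with a command like the following (the image name is just the one used earlier in this post, and the exact eBPF output may vary between Docker versions):

$ docker run -it --device-cgroup-rule='c 100:42 rwm' debian:minicom /bin/bash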

The previous example has all permissions for our device with id 100:42, but what if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:

  63: (55) if r2 != 0x2 goto pc+7  
  64: (bc) w1 = w3
  65: (54) w1 &= 2
  66: (5d) if r1 != r3 goto pc+4
  67: (55) if r4 != 0x64 goto pc+3
  68: (55) if r5 != 0x2a goto pc+2
  69: (b4) w0 = 1
  70: (95) exit

The code is almost the same, but we are checking that w3 only has the second bit set, which is for reading, effectively checking for X==X&2. It’s a cautious approach: a request for no access still passes, but a request with any other bit set (e.g. read plus write) will fail.

The device option

docker run allows you to specify files you want to grant access to in your containers with the --device flag. This flag actually does two things. The first is to create the device file in the container’s /dev directory, effectively doing a mknod command. The second is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.
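
As an illustration (the device path here is hypothetical), the flag also accepts an optional host:container:permissions form, so both of these are valid:

# full access (rwm) to the device, which is the default
$ docker run -it --device /dev/ttyUSB0 debian:minicom /bin/bash
# read-only access using the host:container:permissions form
$ docker run -it --device /dev/ttyUSB0:/dev/ttyUSB0:r debian:minicom /bin/bash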

What about privileged?

So we have used the direct cgroup options here, but what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it creates the device files as well, but what does the filtering look like? We still have a cgroup, but the eBPF program is greatly simplified; here it is in full:

   0: (61) r2 = *(u32 *)(r1 +0)
   1: (54) w2 &= 65535
   2: (61) r3 = *(u32 *)(r1 +0)
   3: (74) w3 >>= 16
   4: (61) r4 = *(u32 *)(r1 +4)
   5: (61) r5 = *(u32 *)(r1 +8)
   6: (b4) w0 = 1
   7: (95) exit

There is the usual setup lines and then, return 1. Everyone is a winner for all devices and access types!

23 May, 2023 12:13PM by dropbear

Jonathan Dowland

neovim plugins and distributions

I've been watching the neovim community for a while, and what seems like a Cambrian explosion of plugins emerging from it. A few weeks back I decided to spend most of a "day of learning" on investigating some of the plugins and technologies that I'd read about: Language Server Protocol, TreeSitter, neorg (a grandiose organiser plugin), etc.

It didn't go so well. I spent most of my time fighting version incompatibilities or tracing through scant documentation or code to figure out what plugin was incompatible with which other.

There's definitely a line where crossing it means spending too much time playing with your tools instead of creating. On the other hand, there's definitely value in honing your tools and learning about new technologies. Everyone's line is probably in a different place. I've come to the conclusion that I don't have the time or inclination (or both) to approach exploring the neovim universe in this way. There exist a number of plugin "distributions" (such as LunarVim): collections of pre-configured and integrated plugins that you can try to use out-of-the-box. Next time I think I'll pick one up and give that a try, independently from my existing configuration, and see which ideas from it I might like to adopt.

shared vimrcs

Some folks upload their vim or neovim configurations in their entirety for others to see. I noticed Jess Frazelle had published hers so I took a look. I suppose one could evaluate a bunch of plugins and configuration in isolation using a shared vimrc like this, in the same way as a distribution.

bufferline

Amongst the plugins she uses was bufferline, a plugin to re-work neovim's tab bar to behave like tab bars from more conventional editors1. I don't make use of neovim's tabs at all2, so I would lose nothing having the (presently hidden) tab bar reworked, so I thought I'd give it a go.

I had to disable an existing plugin lightline, which I've had enabled for years but I wasn't sure I was getting much value from. Apparently it also messes with the tab bar! Disabling it, at least for now, at least means I'll find out if I miss it.

I am already using vim-buffergator as a means of seeing and managing open buffers: a hotkey opens a sidebar with a list of open buffers, to switch between or close. Bufferline gives me a more immediate, always-present view of open buffers, which is faintly useful: but not much. Perhaps I'd like it more if I was coming from an editor that had made it more of an expected feature. The two things I noticed about it that aren't especially useful for me: when browsing around vimwiki pages, I quickly open a lot of buffers. The horizontal line fills up very quickly. Even when I don't, I habitually have quite a lot of buffers open, and the horizontal line is quickly overwhelmed.

I have found myself closing open buffers with the mouse, which I didn't do before.

vert

Since I have brought up a neovim UI feature (tabs) I thought I'd briefly mention my new favourite neovim built-in command: vert.

Quite a few plugins and commands open up a new window (e.g. git-fugitive, Man, etc.) and they typically do so in a horizontal split. I'm increasingly preferring vertical splits. Prefixing any3 such command with vert forces the split to be vertical instead.


  1. in this case the direct influence was apparently DOOM Emacs
  2. (neo)vim's notion of tabs is completely different to what you might expect from other UI models.
  3. at least, I haven't found one that doesn't work yet

23 May, 2023 11:04AM

Bits from Debian

proxmox Platinum Sponsor of DebConf23

We are pleased to announce that Proxmox has committed to sponsor DebConf23 as a Platinum Sponsor.

Proxmox develops powerful, yet easy-to-use open-source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf23.

With this commitment as Platinum Sponsor, Proxmox is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Proxmox, for your support of DebConf23!

Become a sponsor too!

DebConf23 will take place from September 10th to 17th, 2023 in Kochi, India, and will be preceded by DebCamp, from September 3rd to 9th.

And DebConf23 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf23 website at https://debconf23.debconf.org/sponsors/become-a-sponsor/.

23 May, 2023 09:17AM by Sahil Dhiman

Sergio Durigan Junior

Using WireGuard to host services at home

It’s been a while since I had this idea to leverage the power of WireGuard to self-host stuff at home. Even though I pay for a proper server somewhere in the world, there are some services that I don’t consider critical to put there, or that I consider too critical to host outside my home.

It’s only NATural

With today’s ISP packages for end users, I find it very annoying how much trouble they create when you try to host anything at home. Dynamic IPs, NAT/CGNAT, port-blocking and traffic shaping are only a few examples of methods or limitations that prevent users from making local services reachable in a reliable way from outside.

WireGuard comes to help

If you already pay for a VPS or a dedicated server somewhere, why not use its existing infrastructure (and public availability) in your favour? That’s what I thought when I started this journey.

My initial idea was to use a reverse proxy to redirect external requests to the service running at my home. But how could I make sure that these requests reach my dynamic-IP-behind-a-NAT-behind-another-NAT? Well, let’s create a tunnel! WireGuard is the perfect tool for that because of many things: it’s stateless, very performant, secure, and requires very little configuration.

Setting up on the server

On the server side (i.e., VPS or dedicated server), you will create the first endpoint. Something like the following should do:

[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.1/32
ListenPort = 51821

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 10

A few interesting points to note:

  • The Peer section contains information about the home service that will be configured below.
  • I’m using PersistentKeepalive because I have a dynamic IP at my home. If you have a static IP, you could get rid of PersistentKeepalive and specify an Endpoint here (don’t forget to set a ListenPort below, in the Interface section).
  • Now you have an IP where you can forward requests to. If we’re talking about HTTP traffic, Apache and nginx are absolutely capable of doing it. If we’re talking about other kind of traffic, you might want to look into other utilities, like HAProxy, Traefik and others.

Setting up at your home

At your home, you will configure the peer:

[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.2/32

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIPs = 10.0.0.1/32
Endpoint = YOUR_SERVER:51821
PersistentKeepalive = 10
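
To round this out, here is a minimal sketch of generating the keys and bringing the tunnel up, assuming the configuration above is saved as /etc/wireguard/wg0.conf on each side:

# generate a key pair (run once on each end, paste the results into the configs)
$ wg genkey | tee privatekey | wg pubkey > publickey
# bring the tunnel up and check that the handshake happens
$ sudo wg-quick up wg0
$ sudo wg show wg0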

A few notes about security

I would be remiss if I didn’t say anything about security, especially because we’re talking about hosting services at home. So, here are a few recommendations:

  • Make sure to put your services in a separate local network. Using VLANs is also a good option.
  • Don’t run services on your personal (or work!) computer, even if they’ll be running inside a VM.
  • Run a firewall on the WireGuard interface and make sure that you only allow traffic over the required ports (see the sketch after this list).
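
For that last point, here is a minimal iptables sketch, assuming the WireGuard interface is wg0 and the only service being exposed is HTTP on port 80:

# allow only HTTP coming in over the tunnel, drop everything else on wg0
$ sudo iptables -A INPUT -i wg0 -p tcp --dport 80 -j ACCEPT
$ sudo iptables -A INPUT -i wg0 -j DROP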

Have fun!

23 May, 2023 04:56AM

Russ Allbery

Review: A Half-Built Garden

Review: A Half-Built Garden, by Ruthanna Emrys

Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-21097-6
Format: Kindle
Pages: 340

The climate apocalypse has happened. Humans woke up to the danger, but a little bit too late. Over one billion people died. But the world on the other side of that apocalypse is not entirely grim. The corporations responsible for so much of the damage have been pushed out of society and isolated on their independent "aislands," traded with only grudgingly for the few commodities the rest of the world has not yet learned how to manufacture without them. Traditional governments have largely collapsed, although they cling to increasingly irrelevant trappings of power. In their place arose the watershed networks: a new way of living with both nature and other humans, built around a mix of anarchic consensus and direct democracy, with conservation and stewardship of the natural environment at its core.

Therefore, when the aliens arrive near Bear Island on the Potomac River, they're not detected by powerful telescopes and met by military jets. Instead, their waste sets off water sensors, and they're met by the two women on call for alert duty, carrying a nursing infant and backed by the real-time discussion and consensus technology of the watershed's dandelion network. (Emrys is far from the first person to name something a "dandelion network," so be aware that the usage in this book seems unrelated to the charities or blockchain network.)

This is a first contact novel, but it's one that skips over the typical focus of the subgenre. The alien Ringers are completely fluent in English down to subtle nuance of emotion and connotation (supposedly due to observation of our radio and TV signals), have translation devices, and in some cases can make our speech sounds directly. Despite significantly different body shapes, they are immediately comprehensible; differences are limited mostly to family structure, reproduction, and social norms. This is Star Trek first contact, not the type more typical of written science fiction. That feels unrealistic, but it's also obviously an authorial choice to jump directly to the part of the story that Emrys wants to write.

The Ringers have come to save humanity. In their experience, technological civilization is inherently incompatible with planets. Technology will destroy the planet, and the planet will in turn destroy the species unless they can escape. They have reached other worlds multiple times before, only to discover that they were too late and everyone is already dead. This is the first time they've arrived in time, and they're eager to help humanity off its dying planet to join them in the Dyson sphere of space habitats they are constructing. Planets, to them, are a nest and a launching pad, something to eventually abandon and break down for spare parts.

The small, unexpected wrinkle is that Judy, Carol, and the rest of their watershed network are not interested in leaving Earth. They've finally figured out the most critical pieces of environmental balance. Earth is going to get hotter for a while, but the trend is slowing. What they're doing is working. Humanity would benefit greatly from Ringer technology and the expertise that comes from managing closed habitat ecosystems, but they don't need rescuing.

This goes over about as well as a toddler saying that playing in the road is perfectly safe.

This is a fantastic hook for a science fiction novel. It does exactly what a great science fiction premise should do: takes current concerns (environmentalism, space boosterism, the debatable primacy of humans as a species, the appropriate role of space colonization, the tension between hopefulness and doomcasting about climate change) and uses the freedom of science fiction to twist them around and come at them from an entirely different angle.

The design of the aliens is excellent for this purpose. The Ringers are not one alien species; they are two, evolved on different planets in the same system. The plains dwellers developed space flight first and went to meet the tree dwellers, and while their relationship is not entirely without hierarchy (the plains dwellers clearly lead on most matters), it's extensively symbiotic. They now form mixed families of both species, and have a rich cultural history of stories about first contact, interspecies conflicts and cooperation, and all the perils and misunderstandings that they successfully navigated. It makes their approach to humanity more believable to know that they have done first contact before and are building on a model. Their concern for humanity is credibly sincere. The joining of two species was wildly successful for them and they truly want to add a third.

The politics on the human side are satisfyingly complicated. The watershed network may have made first contact, but the US government (in the form of NASA) is close behind, attempting to lean on its widely ignored formal power. The corporations are farther away and therefore slower to arrive, but the alien visitors have a damaged ship and need space to construct a subspace beacon and Asterion is happy to offer a site on one of its New Zealand islands. The corporate representatives are salivating at the chance to escape Earth and its environmental regulation for uncontrolled space construction and a new market of trillions of Ringers. NASA's attitude is more measured, but their representative is easily persuaded that the true future of humanity is in space. The work the watershed networks are doing is difficult, uncertain, and involves a lot of sacrifice, particularly for corporate consumer lifestyles. With such an attractive alien offer on the table, why stay and work so hard for an uncertain future? Maybe the Ringers are right.

And then the dandelion networks that the watersheds use as the core of their governance and decision-making system all crash.

The setup was great; I was completely invested. The execution was more mixed. There are some things I really liked, some things that I thought were a bit too easy or predictable, and several places where I wish Emrys had dug deeper and provided more detail. I thought the last third of the book fizzled a little, although some of the secondary characters Emrys introduces are delightful and carry the momentum of the story when the politics feel a bit lacking.

If you tried to form a mental image of ecofeminist political science fiction with 1970s utopian sensibilities, but updated for the concerns of the 2020s, you would probably come very close to the politics of the watershed networks. There are considerably more breastfeedings and diaper changes than the average SF novel. Two of the primary characters are transgender, but with very different experiences with transition. Pronoun pins are an ubiquitous article of clothing. One of the characters has a prosthetic limb. Another character who becomes important later in the story codes as autistic. None of this felt gratuitous; the characters do come across as obsessed with gender, but in a way that I found believable. The human diversity is well-integrated with the story, shapes the characters, creates practical challenges, and has subtle (and sometimes not so subtle) political ramifications.

But, and I say this with love because while these are not quite my people they're closely adjacent to my people, the social politics of this book are a very specific type of white feminist collaborative utopianism. When religion makes an appearance, I was completely unsurprised to find that several of the characters are Jewish. Race never makes a significant appearance at all. It's the sort of book where the throw-away references to other important watershed networks includes African ones, and the characters would doubtless try to be sensitive to racial issues if they came up, but somehow they never do. (If you're wondering if there's polyamory in this book, yes, yes there is, and also I suspect you know exactly what culture I'm talking about.)

This is not intended as a criticism, just more of a calibration. All science fiction publishing houses could focus only on this specific political perspective for a year and the results would still be dwarfed by the towering accumulated pile of thoughtless paeans to capitalism. Ecofeminism has a long history in the genre but still doesn't show up in that many books, and we're far from exhausting the space of possibilities for what a consensus-based politics could look like with extensive computer support. But this book has a highly specific point of view, enough so that there won't be many thought-provoking surprises if you're already familiar with this school of political thought.

The politics are also very earnest in a way that I admit provoked a bit of eyerolling. Emrys pushes all of the political conflict into the contrasts between the human factions, but I would have liked more internal disagreement within the watershed networks over principles rather than tactics. The degree of ideological agreement within the watershed group felt a bit unrealistic. But, that said, at least politics truly matters and the characters wrestle directly with some tricky questions. I would have liked to see more specifics about the dandelion network and the exact mechanics of the consensus decision process, since that sort of thing is my jam, but we at least get more details than are typical in science fiction. I'll take this over cynical libertarianism any day.

Gender plays a huge role in this story, enough so that you should avoid this book if you're not interested in exploring gender conceptions. One of the two alien races is matriarchal and places immense social value on motherhood, and it's culturally expected to bring your children with you for any important negotiation. The watersheds actively embrace this, or at worst find it comfortable to use for their advantage, despite a few hints that the matriarchy of the plains aliens may have a very serious long-term demographic problem. In an interesting twist, it's the mostly-evil corporations that truly challenge gender roles, albeit by turning it into an opportunity to sell more clothing.

The Asterion corporate representatives are, as expected, mostly the villains of the plot: flashy, hierarchical, consumerist, greedy, and exploitative. But gender among the corporations is purely a matter of public performance, one of a set of roles that you can put on and off as you choose and signal with clothing. They mostly use neopronouns, change pronouns as frequently as their clothing, and treat any question of body plumbing as intensely private. By comparison, the very 2020 attitudes of the watersheds towards gender felt oddly conservative and essentialist, and the main characters get flustered and annoyed by the ever-fluid corporate gender presentation. I wish Emrys had done more with this.

As you can tell, I have a lot of thoughts and a lot of quibbles. Another example: computer security plays an important role in the plot and was sufficiently well-described that I have serious questions about the system architecture and security model of the dandelion networks. But, as with decision-making and gender, the more important takeaway is that Emrys takes enough risks and describes enough interesting ideas that there's a lot of meat here to argue with. That, more than getting everything right, is what a good science fiction novel should do.

A Half-Built Garden is written from a very specific political stance that may make it a bit predictable or off-putting, and I thought the tail end of the book had some plot and resolution problems, but arguing with it was one of the more intellectually satisfying science fiction reading experiences I've had recently. You have to be in the right mood, but recommended for when you are.

Rating: 7 out of 10

23 May, 2023 02:46AM

May 22, 2023

Adnan Hodzic

rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform

Considering I’ve created my own private cloud in my home as part of: wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology)....

The post rpi-microk8s-bootstrap: Automate RPI device conversion into Kubernetes cluster nodes with Terraform appeared first on FoolControl: Phear the penguin.

22 May, 2023 10:44AM by Adnan Hodzic

Russ Allbery

Review: Tsalmoth

Review: Tsalmoth, by Steven Brust

Series: Vlad Taltos #16
Publisher: Tor
Copyright: 2023
ISBN: 1-4668-8970-5
Format: Kindle
Pages: 277

Tsalmoth is the sixteenth book in the Vlad Taltos series and (some fans of the series groan) yet another flashback novel to earlier in Vlad's life. It takes place between Yendi and the interludes in Dragon (or, perhaps more straightforwardly, between Yendi and Jhereg). Most of the books of this series stand alone to some extent, so you could read this book out of order and probably not be horribly confused, but I suspect it would also feel weirdly pointless outside of the context of the larger series.

We're back to Vlad running a fairly small operation as a Jhereg, who are the Dragaeran version of organized crime. A Tsalmoth who owes Vlad eight hundred imperials has rudely gotten himself murdered, thoroughly enough that he can't be revived. That's a considerable amount of money, and Vlad would like it back, so he starts poking around. As you might expect if you've read any other book in this series, things then get a bit complicated. This time, they involve Jhereg politics, Tsalmoth house politics, and necromancy (which in this universe is more about dimensional travel than it is about resurrecting the dead).

The main story is... fine. Kragar is around being unnoticeable as always, Vlad is being cocky and stubborn and bantering with everyone, and what appears to be a straightforward illegal business relationship turns out to involve Dragaeran magic and thus Vlad's highly-placed friends. As usual, they're intellectually curious about the magic and largely ambivalent to the rest of Vlad's endeavors. The most enjoyable part of the story is Vlad's insistence on getting his money back while everyone else in the story cannot believe he would be this persistent over eight hundred imperials and is certain he has some other motive. It's otherwise a fairly forgettable little adventure.

The implications for the broader series, though, are significant, although essentially none of the payoff is here. Brust has been keeping a major secret about Vlad that's finally revealed here, one that has little impact on the plot of this book (although it causes Vlad a lot of angst) but which I suspect will become very important later in the series. That was intriguing but rather unsatisfying, since it stays only a future hook with an attached justification for why we're only finding out about it now.

If one has read the rest of the series, it's also nice to see Vlad and Cawti working together, bantering with each other and playing off of each other's strengths. It's reminiscent of the best parts of Yendi. As with many of the books of this series, the chapter introductions tell a parallel story; this time, it's Vlad and Cawti's wedding.

I think previous books already mentioned that Vlad is narrating this series into some sort of recording device, and a bit about why he's doing that, but this is made quite explicit here. We get as much of the surrounding frame as we've ever seen before. There are no obvious plot consequences from this — it's still all hints and guesswork — but I suspect this will also become important by the end of the series.

If you've read this much of the series, you'll obviously want to read this one as well, but unfortunately don't get your hopes up for significant plot advancement. This is another station-keeping book, which is a bit of a disappointment. We haven't gotten major plot advancement since Hawk in 2014, and I'm getting impatient. Thankfully, Lyorn has a release date already (April 9, 2024), and assuming all goes according to the grand plan, there are only two books left after Lyorn (Chreotha and The Last Contract). I'm getting hopeful that we're going to get to see the entire series.

Meanwhile, I am very tempted to do a complete re-read of the series to date, probably in series chronological order rather than in publication order (as much as that's possible given the fractured timelines of Dragon and Tiassa) so that I can see how the pieces fit together. The constant jumping back and forth and allusions to events that have already happened but that we haven't seen yet is hard to keep track of. I'm very glad the Lyorn Records exists.

Followed by Lyorn.

Rating: 7 out of 10

22 May, 2023 02:39AM

May 21, 2023

Bits from Debian

Infomaniak First Platinum Sponsor of DebConf23

We are pleased to announce that Infomaniak has committed to sponsor DebConf23 as a Platinum Sponsor.

Infomaniak is a key player in the European Cloud and the leading developer of Web technologies in Switzerland. It aims to be an independent European alternative to the web giants and is committed to an ethical and sustainable Web that respects privacy and creates local jobs. Infomaniak develops cloud solutions (IaaS, PaaS, VPS), productivity tools for online collaboration and video and radio streaming services.

The company uses only renewable electricity, offsets 200% of its CO2 emissions and extends the life of its servers up to 15 years. The company cools its infrastructure with filtered air, without air conditioning, and is building a new data centre that will fully recycle the energy it consumes to heat up to 6,000 homes.

With this commitment as Platinum Sponsor, Infomaniak is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Infomaniak, for your support of DebConf23!

Become a sponsor too!

DebConf23 will take place from September 10th to 17th, 2023 in Kochi, India, and will be preceded by DebCamp, from September 3rd to 9th.

And DebConf23 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf23 website at https://debconf23.debconf.org/sponsors/become-a-sponsor/.

21 May, 2023 12:08PM by Sahil Dhiman

May 19, 2023

Petter Reinholdtsen

wmbusmeters, parse data from your utility meter - nice free software

There is a European standard for reading utility meters like water, gas, electricity or heat distribution meters. The Meter-Bus standard (EN 13757-2, EN 13757-3 and EN 13757-4) provides a cross-vendor way to talk to meters and collect meter data. I ran into this standard when I wanted to monitor some heat distribution meters, and managed to find free software that could do the job. The meters in question broadcast encrypted messages with meter information via radio, and the hardest part was to track down the encryption keys from the vendor. With this in place I could set up an MQTT gateway to submit the meter data for graphing.

The free software systems in question, rtl-wmbus to read the messages from a software defined radio, and wmbusmeters to decrypt and decode the content of the messages, are working very well and allow me to get frequent updates from my meters. I got in touch with upstream last year to see if there was any interest in publishing the packages via Debian. I was very happy to learn that Fredrik Öhrström volunteered to maintain the packages, and I have since assisted him in getting Debian package build rules in place as well as sponsoring the packages into the Debian archive. Sadly we completed it too late for them to become part of the next stable Debian release (Bookworm). The wmbusmeters package just cleared the NEW queue. It will need some work to fix a build problem, but I expect Fredrik will find a solution soon.

If you have an infrastructure meter supporting the Meter-Bus standard, I strongly recommend having a look at these nice packages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

19 May, 2023 07:50PM

May 18, 2023

Antoine Beaupré

A terrible Pixel Tablet

In a strange twist of history, Google finally woke and thought "I know what we need to do! We need to make a TABLET!".

So some time soon in 2023, Google will release "The tablet that only Google could make", the Pixel Tablet.

Having owned a Samsung Galaxy Tab S5e for a few years, I was very curious to see how this would pan out and especially whether it would be easier to flash than the Samsung. As an aside, I figured I would give that a shot, and within a few days managed to completely brick the device. Awesome. See gts4lvwifi for the painful details of that.

In any case, Google made a tablet. I own a Pixel phone and I'm moderately happy with it. It's easy to flash with CalyxOS, maybe this is the promise land of tablets?

Compared with the Samsung

But it turns out that the Pixel Tablet pales in comparison with the Samsung tablet, produced 4 years ago, in 2019:

  • it's thicker (8.1mm vs 5.5mm)
  • it's heavier (493g vs 400g)
  • it's not AMOLED (IPS LCD)
  • it doesn't have an SD card reader
  • its camera is worse (8MP vs 13MP, 1080p video instead of 4k)
  • it's more expensive (670EUR vs 410EUR)

What the Pixel tablet has going for it:

  • a slightly more powerful CPU
  • a stylus
  • more storage (128GB or 256GB vs 64GB or 128GB)
  • more RAM (8GB vs 4GB or 6GB)
  • Wifi 6

I guess I should probably wait for the actual device to come out to see reviews and how it stacks up, but so far it's kind of impressive how underwhelming this is.

Also note that we're comparing against a very old Samsung tablet here, a fairer comparison might be against the Samsung Galaxy Tab S8. There the sizes are comparable, and the Samsung is more expensive than the Pixel, but then the Pixel has absolutely zero advantages and all the other disadvantages.

The Dock

The "Dock" is also worth a little aside.

See, the tablet comes with a dock that doubles as a speaker.

You can't buy the tablet without the dock. You have to have a dock.

I shit you not, actual quote: "Can I purchase a Pixel Tablet without the Charging Speaker Dock? No, you can only purchase the Pixel Tablet with the Charging Speaker Dock."

In case you really, really like the dock, "You may purchase additional Charging Speaker Docks separately (coming soon)." And no, they can't all play together, only the dock the tablet is docked into will play audio.

The dock is not a Bluetooth speaker, it can only play audio from that one tablet that Google made, this one time.

It's also not a battery pack. It's just a charger with speakers in it.

Promising e-waste.

Again, I hope I'm wrong and that this is going to be a fine tablet. But so far, it looks like it doesn't even come close to whatever Samsung threw over the fence before the apocalypse (remember 2019? were we even born yet?).

"The tablet that only Google could make." Amazing. Hopefully no one else gets any bright ideas like this.

18 May, 2023 03:59PM

May 17, 2023

Jamie McClelland

Cranky old timers should know perl

I act like an old timer (I’ve been around linux for 25 years and I’m cranky about new tech that is not easily maintained and upgraded) yet somehow I don’t know perl. How did that happen?

I discovered this state when I decided to move from the heroically packaged yet seemingly upstream un-maintained opendmarc package to authentication_milter.

It’s written in perl. And, alas, not in debian.

How hard could this be?

The instructions for installing seemed pretty straight forward: cpanm Mail::Milter::Authentication.

Wah. I’m glad I tried this out on a test virtual machine. It took forever! It ran tests! It compiled things! And, it installed a bunch of perl modules already packaged in Debian.

I don’t think I want to add this command to my ansible playbook.

Next I spent an inordinate amount of time trying to figure out how to list the dependencies of a given CPAN module. I was looking for something like cpanm --list-dependencies Mail::Milter::Authentication but eventually ended up writing a perl script that output perl code, inserting a “use ” before each dependency and a semicolon and line break after them. Then, I could execute that script on a clean debian installation and see which perl modules I needed. For each error, I checked for the module in Debian (and installed it) or kept a list of modules I would have to build (and commented out the line).

Once I had a list of modules to build, I used the handy cpan2deb command. It took some creative ordering but eventually I got it right. Since I will surely forget how to do this when it’s time to upgrade, I wrote a script.
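
For future reference, here is a rough sketch of the kind of helper I mean; it is not the exact script described above, and the modules.txt input file as well as the libfoo-bar-perl naming heuristic are assumptions for illustration:

#!/bin/bash
# For every CPAN module listed in modules.txt, install the Debian package if
# one exists (using the usual libfoo-bar-perl naming convention), otherwise
# build a .deb with cpan2deb (from the dh-make-perl package).
set -e
while read -r module; do
    # Map Foo::Bar -> libfoo-bar-perl
    pkg="lib$(echo "$module" | tr '[:upper:]' '[:lower:]' | sed 's/::/-/g')-perl"
    if apt-cache show "$pkg" >/dev/null 2>&1; then
        sudo apt-get install -y "$pkg"
    else
        cpan2deb "$module"    # build order may still need manual tweaking
    fi
done < modules.txt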

In total it took me several days to figure this all out, so I once again find myself very appreciative of all the debian packagers out there - particularly the perl ones!!

And also… if I did this all wrong and there is an easier way I would love to hear about it in the comments.

17 May, 2023 12:27PM

May 16, 2023

Freexian Collaborators

Monthly report about Debian Long Term Support, April 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In April, 18 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 6.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 18.0h (out of 16.5h assigned and 24.0h from previous period), thus carrying over 22.5h to the next month.
  • Anton Gladky did 8.0h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 7.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 16.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Dominik George did 0.0h (out of 0h assigned and 20.34h from previous period), thus carrying over 20.34h to the next month.
  • Emilio Pozuelo Monfort did 4.5h (out of 11.0h assigned and 9.5h from previous period), thus carrying over 16.0h to the next month.
  • Guilhem Moulin did 8.5h (out of 8.0h assigned and 12.0h from previous period), thus carrying over 11.5h to the next month.
  • Helmut Grohne did 5.0h (out of 2.5h assigned and 7.5h from previous period), thus carrying over 5.0h to the next month.
  • Lee Garrett did 0.0h (out of 31.5h assigned and 9.0h from previous period), thus carrying over 40.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 12.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 11.5h to the next month.
  • Roberto C. Sánchez did 8.5h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 11.5h to the next month.
  • Stefano Rivera did 1.0h (out of 0h assigned and 28.0h from previous period), thus carrying over 27.0h to the next month.
  • Sylvain Beucler did 35.0h (out of 40.5h assigned), thus carrying over 5.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 15.0h (out of 15.0h assigned and 1.0h from previous period), thus carrying over 1.0h to the next month.
  • Utkarsh Gupta did 3.5h (out of 11.0h assigned and 18.5h from previous period), thus carrying over 26.0h to the next month.

Evolution of the situation

In April, we have released 35 DLAs.

The LTS team would like to welcome our newest sponsor, Institut Camille Jordan, a French research lab. Thanks to the support of the many LTS sponsors, the entire Debian community benefits from direct security updates, as well as indirect improvements and collaboration with other members of the Debian community.

As part of improving the efficiency of our work and the quality of the security updates we produce, the LTS team has continued improving our workflow. Improvements include more consistent tagging of release versions in Git and broader use of continuous integration (CI) to ensure packages are tested thoroughly and consistently. Sponsors and users can rest assured that we work continuously to maintain and improve the already high quality of the work that we do.

Thanks to our sponsors

Sponsors that joined recently are in bold.

16 May, 2023 12:00AM by Roberto C. Sánchez

May 15, 2023

Sven Hoexter

GCP: Private Service Connect Forwarding Rules can not be Updated

PSA for those foolish enough to use Google Cloud and try to use private service connect: If you want to change the serviceAttachment your private service connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:

Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target':
'<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/
k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid

Worked around that with the help of terraform_data and lifecycle:

resource "terraform_data" "replacement" {
    input = var.gcp_psc_data["target"]
}

resource "google_compute_forwarding_rule" "this" {
    count   = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
    name    = "${var.gcp_psc_name}-psc"
    region  = var.gcp_region
    project = var.gcp_project

    target                = var.gcp_psc_data["target"]
    load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
    network               = var.gcp_network
    ip_address            = google_compute_address.this.id

    lifecycle {
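        # Destroy and recreate this forwarding rule whenever terraform_data.replacement
        # changes (i.e. the target service attachment changed), since the GCP API
        # rejects in-place updates of the target.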
        replace_triggered_by = [
            terraform_data.replacement
        ]
    }
}

See also terraform data for replace_triggered_by.

15 May, 2023 07:21AM

Dirk Eddelbuettel

RcppSimdJson 0.1.10 on CRAN: New Upstream

We are happy to share that the RcppSimdJson package has been updated to release 0.1.10.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it results in gigabytes of JSON parsed per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release updates the underlying simdjson library to version 3.1.8 (also made today). Otherwise we only made a minor edit to the README and adjusted one tweak for code coverage.

The (very short) NEWS entry for this release follows.

Changes in version 0.1.10 (2023-05-14)

  • simdjson was upgraded to version 3.1.8 (Dirk in #85).

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 May, 2023 12:41AM

May 14, 2023

Steinar H. Gunderson

Joining files with FFmpeg

Joining video files (back-to-back) losslessly with FFmpeg is a surprisingly cumbersome operation. You can't just, like, write all the inputs on the command line or something; you need to use a special demuxer and then write all the names in a text file and override the security for that file, which is pretty crazy.

But there's one issue I crashed into which random searching around didn't help with, namely this happening sometimes on switching files (and the resulting files just having no video in that area):

[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290238, current: 86263699; changing to 162290239. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290239, current: 86264723; changing to 162290240. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290240, current: 86265747; changing to 162290241. This may result in incorrect timestamps in the output file.

There are lots of hits about this online, most of them around different codecs and such, but the problem was surprisingly mundane: Some of the segments had video in stream 0 and audio in stream 1, and some the other way round, and the concat demuxer doesn't account for this.

Simplest workaround: just remux the files first. FFmpeg will put the streams in a consistent order. (Inspired by a Stack Overflow answer that suggested remuxing to MPEG-TS in order to use the concat protocol instead of the concat demuxer.)
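
For reference, here is a minimal sketch of the whole dance (the file names are placeholders); the remux step explicitly maps video before audio so that every segment ends up with the same stream order:

# Remux each segment so stream 0 is always video and stream 1 is always audio.
for f in seg1.mp4 seg2.mp4; do
  ffmpeg -i "$f" -map 0:v -map 0:a -c copy "remuxed-$f"
done

# List the remuxed segments for the concat demuxer...
cat > list.txt <<EOF
file 'remuxed-seg1.mp4'
file 'remuxed-seg2.mp4'
EOF

# ...and join them losslessly (-safe 0 is the security override mentioned above).
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4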

14 May, 2023 09:54PM

Petter Reinholdtsen

The 2023 LinuxCNC Norwegian developer gathering

The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project github pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

The Norwegian developer gathering takes place the weekend of June 16th to 18th this year, and is open for everyone interested in contributing to LinuxCNC. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and NUUG Foundation, we have enough sponsor funds to pay for food and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14 May, 2023 06:30PM

Holger Levsen

20230514-fwupd

How-To use fwupd

As one cannot use fwupd on Qubes OS to update firmware, this is a quick How-To for using fwupd on Grml for future me; a consolidated sketch follows the list below. (Qubes 4.2 will bring qubes-fwupd.)

  • boot into Grml.
  • mkdir /efi ; mount the EFI system partition (normally /boot/efi) on /efi, or set OverrideESPMountPoint=/boot/efi/EFI if you mount it at the usual path.
  • apt update ; apt install fwupd fwupd-amd64-signed udisks2 policykit-1
  • fwupdmgr get-devices
  • fwupdmgr refresh
  • fwupdmgr get-updates
  • fwupdmgr update
  • reboot into Qubes OS.
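
The same steps as one consolidated sketch, assuming a Grml live system; /dev/sda1 is a placeholder for the real ESP device:

# Mount the EFI system partition where fwupd can find it.
mkdir -p /efi
mount /dev/sda1 /efi
# Install fwupd and its helpers.
apt update
apt install fwupd fwupd-amd64-signed udisks2 policykit-1
# Inspect devices, refresh metadata and apply firmware updates.
fwupdmgr get-devices
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
# Done, reboot back into Qubes OS.
reboot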

14 May, 2023 03:20PM

C.J. Collier

Early Access: Inserting JSON data to BigQuery from Spark on Dataproc

Hello folks!

We recently received a case letting us know that Dataproc 2.1.1 was unable to write to a BigQuery table with a column of type JSON. Although the BigQuery connector for Spark has had support for JSON columns since 0.28.0, the Dataproc images on the 2.1 line still cannot create tables with JSON columns or write to existing tables with JSON columns.

The customer has graciously granted permission to share the code we developed to allow this operation. So if you are interested in working with JSON column tables on Dataproc 2.1 please continue reading!

Use the following gcloud command to create your single-node dataproc cluster:

IMAGE_VERSION=2.1.1-debian11
REGION=us-west1
ZONE=${REGION}-a
CLUSTER_NAME=pick-a-cluster-name
gcloud dataproc clusters create ${CLUSTER_NAME} \
    --region ${REGION} \
    --zone ${ZONE} \
    --single-node \
    --master-machine-type n1-standard-4 \
    --master-boot-disk-type pd-ssd \
    --master-boot-disk-size 50 \
    --image-version ${IMAGE_VERSION} \
    --max-idle=90m \
    --enable-component-gateway \
    --scopes 'https://www.googleapis.com/auth/cloud-platform'

The following file is the Scala code used to write JSON structured data to a BigQuery table using Spark. The file following this one can be executed from your single-node Dataproc cluster.

Main.scala

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{Metadata, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql.avro
import org.apache.avro.specific

  val env = "x"
  val my_bucket = "cjac-docker-on-yarn"
  val my_table = "dataset.testavro2"
    val spark = env match {
      case "local" =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .master("local")
          .appName("isssue_115574")
          .getOrCreate()
      case _ =>
        SparkSession
          .builder()
          .config("temporaryGcsBucket", my_bucket)
          .appName("isssue_115574")
          .getOrCreate()
    }

  // create DF with some data
  val someData = Seq(
    Row("""{"name":"name1", "age": 10 }""", "id1"),
    Row("""{"name":"name2", "age": 20 }""", "id2")
  )
  val schema = StructType(
    Seq(
      StructField("user_age", StringType, true),
      StructField("id", StringType, true)
    )
  )

  val avroFileName = s"gs://${my_bucket}/issue_115574/someData.avro"
  
  val someDF = spark.createDataFrame(spark.sparkContext.parallelize(someData), schema)
  someDF.write.format("avro").mode("overwrite").save(avroFileName)

  val avroDF = spark.read.format("avro").load(avroFileName)
  // set metadata
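  // The "sqlType":"JSON" metadata set below is what makes the Spark BigQuery
  // connector treat user_age as a BigQuery JSON column rather than a STRING.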
  val dfJSON = avroDF
    .withColumn("user_age_no_metadata", col("user_age"))
    .withMetadata("user_age", Metadata.fromJson("""{"sqlType":"JSON"}"""))

  dfJSON.show()
  dfJSON.printSchema

  // write to BigQuery
  dfJSON.write.format("bigquery")
    .mode(SaveMode.Overwrite)
    .option("writeMethod", "indirect")
    .option("intermediateFormat", "avro")
    .option("useAvroLogicalTypes", "true")
    .option("table", my_table)
    .save()


repro.sh:

#!/bin/bash

PROJECT_ID=set-yours-here
DATASET_NAME=dataset
TABLE_NAME=testavro2

# We have to remove all of the existing spark bigquery jars from the local
# filesystem, as we will be using the symbols from the
# spark-3.3-bigquery-0.30.0.jar below.  Having existing jar files on the
# local filesystem will result in those symbols having higher precedence
# than the one loaded with the spark-shell.
sudo find /usr -name 'spark*bigquery*jar' -delete

# Remove the table from the bigquery dataset if it exists
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Create the table with a JSON type column
bq mk --table $PROJECT_ID:$DATASET_NAME.$TABLE_NAME \
  user_age:JSON,id:STRING,user_age_no_metadata:STRING

# Load the example Main.scala 
spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show the table schema when we use `bq mk --table` and then load the avro
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

# Remove the table so that we can see that the table is created should it not exist
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Dynamically generate a DataFrame, store it to avro, load that avro,
# and write the avro to BigQuery, creating the table if it does not already exist

spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show that the table schema does not differ from one created with a bq mk --table
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

Google BigQuery has supported JSON data since October of 2022, but until now, it has not been possible, on generally available Dataproc clusters, to interact with these columns using the Spark BigQuery Connector.

JSON column type support was introduced in spark-bigquery-connector release 0.28.0.

14 May, 2023 03:52AM by C.J. Collier

May 13, 2023

Sergio Durigan Junior

Ubuntu debuginfod and source code indexing

You might remember that in my last post about the Ubuntu debuginfod service I talked about wanting to extend it and make it index and serve source code from packages. I’m excited to announce that this is now a reality since the Ubuntu Lunar (23.04) release.

The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.

The problem

While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:

  • apt-get source requires dpkg-dev to be installed, which ends up pulling in a lot of other dependencies.
  • GDB needs to be taught how to find the source code for the package being debugged. This can usually be done by using the dir command, but finding the proper path to use is usually not trivial, and you end up having to use more “complex” commands like set substitute-path, for example.
  • You have to make sure that the version of the source package is the same as the version of the binary package(s) you want to debug.
  • If you want to debug the libraries that the package links against, you will face the same problems described above for each library.

So yeah, not a trivial/pleasant task after all.

The solution…

Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary’s Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.

While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.

Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
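
As a minimal usage sketch (on recent Ubuntu releases the variable may already be set for you, and some-program is of course a placeholder):

# Point GDB (and other debuginfod-aware tools) at the Ubuntu service.
export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
# Debug symbols and the matching source files are now fetched on demand;
# no apt-get source, no dir or set substitute-path fiddling.
gdb -p "$(pidof some-program)"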

… but not a perfect one

In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use -fdebug-prefix-map when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like /build/ayusd-ASDSEA/something/here). So we need to make sure the path prefix (the /build/ayusd-ASDSEA part) is uniform across all packages, and that’s where -fdebug-prefix-map helps.
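
As a purely illustrative example (the paths and package name are made up, not the exact mapping the patched dpkg emits), the effect of the option looks roughly like this:

# The DWARF will record the sources under /usr/src/hello-1.0 instead of the
# throwaway chroot build directory.
gcc -g -O2 -fdebug-prefix-map=/build/hello-AbC123/hello-1.0=/usr/src/hello-1.0 -c hello.c

# The remapped compilation directory can be inspected in the debug info.
readelf --debug-dump=info hello.o | grep -m1 comp_dir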

This means that the package must honour dpkg-buildflags during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour dpkg-buildflags, and those who don’t should be fixed anyway.

… especially if you’re using LTO

Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn't affect all packages, but if you see GDB having trouble finding a source file whose full path does not start with /usr/src/..., that is a good indication that you're being affected by this bug. Hopefully we should see some progress in the following weeks.

Your feedback is important to us

If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.

13 May, 2023 08:43PM

Petter Reinholdtsen

OpenSnitch in Debian ready for prime time

A bit delayed, the interactive application firewall OpenSnitch package in Debian now got the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.

Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend testing apt install opensnitch in Bookworm and see what you think.

The package is still not able to build its eBPF module within Debian. Not sure how much work it would be to get it working, but I suspect some kernel related packages need to be extended with more header files.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 May, 2023 10:10AM

May 12, 2023

Holger Levsen

20230512-Debian-Reunion-Hamburg-2023

Small reminder for the Debian Reunion Hamburg 2023 from May 23 to 30

As in previous years there will be a rather small Debian Reunion Hamburg 2023 event taking place from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

We'll have days of hacking (inside and outside), a day trip and a small cheese & wine party, as well as daily standup meetings to learn what others are doing, and there shall also be talks and workshops. At the moment there are even still some beds on site available and the CfP is still open!

For more information on all of this: please check the above wiki page!

May the force be with you.

12 May, 2023 02:28PM

Dirk Eddelbuettel

crc32c 0.0.2 on CRAN: Build Fixes

A first follow-up to the initial announcement just days ago of the new crc32c package. The package offers cyclical checksum with parity in hardware-accelerated form on (recent enough) intel cpus as well as on arm64.

This follow-up was needed because I missed, when switching to a default static library build, that newest compilers would complain if -fPIC was not set. gcc-12 on my box was happy, gcc-13 on recent Fedora as used at CRAN was not. A second error was assuming that saying SystemRequirements: cmake would suffice. But hold on whippersnapper: macOS always has a surprise for you! As described at the end of the appropriate section in Writing R Extensions, on that OS you have to go to the basement, open four cupboards, rearrange three shelves and then you get to use it. And then in doing so (from an added configure script) I failed to realize Windows needed a fallback. Gee.

The NEWS entry for this (as well the initial release) follows.

Changes in version 0.0.2 (2023-05-11)

  • Explicitly set cmake property for position-independent code

  • Help macOS find its cmake binary as detailed also in WRE

  • Help Windows with a non-conditional Makevars.win pointing at cmake

  • Add more badges to README.md

Changes in version 0.0.1 (2023-05-07)

  • Initial release version and CRAN upload

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 May, 2023 12:37AM

Valhalla's Things

I hate proprietary software

Posted on May 12, 2023

Even when it’s \m/.

Years ago I watched my SO play Brütal Legend and of course loved it, but I’ve been only using used computers for a long time, and none of them was really able to run modern games.

Admittedly, he told me that I could use his computer to play the game while he wasn’t home (and I do have an account on that computer, that I’ve sporadically used to do computationally intensive stuff, but always remotely), but it was a hassle, and I never did.

This year, however, he gifted me a shiny new CPU and motherboard, and among other things that meant games from this century!

The first thing I’ve spent time on was 0ad (which admittedly already worked on one of the old computers, as long as the map wasn’t too big), but now it was time to play basically the one recent proprietary game I had been wanting to play.

So, this afternoon I started by trying to copy the installer (it was bought from a humble bundle, I don’t have steam) from the home server to my PC, and the home server froze. Ok, I could copy it through something other than git annex (or from the offline hard disk backup, as I did).

Then I tried to run the installer, which resulted in the really helpful error message:

bash: ./BrutalLegend-Linux-2013-05-07-setup.bin: cannot execute: required file not found

ok, then surely ldd can help:

not a dynamic executable

maybe it doesn’t like being a symlink (remember, git annex), but no, that wasn’t the problem. ah! maybe file can help, and indeed:

BrutalLegend-Linux-2013-05-07-setup.bin: ELF 32-bit LSB executable

argh. Why does proprietary software hate us?

Oh, well, https://wiki.debian.org/Multiarch/HOWTO , dpkg --add-architecture i386 followed by apt update and apt install libc6-i386 and the installer started.

Of course this didn’t mean that the game could run, but at least it was spitting out the right error messages, and I could quickly see what the other missing packages were:

apt install lib32z1 libbz2-1.0:i386 libgl1:i386 libglu1-mesa:i386

and the game started!

and…

no. audio.

I often play games with no audio, because I can’t wear headphones, but here the soundtrack is basically 50% of the reason one would play this game.

Back when my SO had played the game, audio was still through pulseaudio, while now I’m using pipewire (and I wasn’t sure that the game wasn’t old enough to be wanting to use alsa), so I started to worry a bit.

And this time, there was no error message to help, but some googling (on searx) and trial and error gave me this list of packages:

apt install pipewire-audio libpipewire-0.3-0:i386 libpulse0:i386 pipewire-alsa:i386

and that was it! the game started AND I could hear music!

And then it was time for dinner, and I couldn’t play.

(You may notice that this post has been posted quite some time after dinner. Most of this time wasn’t spent writing the post.)

Anyway, as soon as I’ve defeated and crushed Doviculus I’m going back to 0ad. or maybe wesnoth. or some other Free Software and frustration-free game.

12 May, 2023 12:00AM

May 11, 2023

Simon Josefsson

Streamlined NTRU Prime sntrup761 goes to IETF

The OpenSSH project added support for a hybrid Streamlined NTRU Prime post-quantum key encapsulation method sntrup761 to strengthen their X25519-based default in their version 8.5 released on 2021-03-03. While there has been a lot of talk about post-quantum crypto generally, my impression has been that there has been a slowdown in implementing and deploying them in the past two years. Why is that? Regardless of the answer, we can try to collaboratively change things, and one effort that appears strangely missing is IETF documents for these algorithms.

Building on some earlier work that added X25519/X448 to SSH, writing a similar document was relatively straight-forward once I had spent a day reading OpenSSH and TinySSH source code to understand how it worked. While I am not perfectly happy with how the final key is derived from the sntrup761/X25519 secrets – it is a SHA512 call on the concatenated secrets – I think the construct deserves to be better documented, to pave the road for increased confidence or better designs. Also, reusing the RFC5656§4 structs makes for a worse specification (one unnecessary normative reference), but probably a simpler implementation. I have published draft-josefsson-ntruprime-ssh-00 here. Credit here goes to Jan Mojžíš of TinySSH that designed the earlier sntrup4591761x25519-sha512@tinyssh.org in 2018, Markus Friedl who added it to OpenSSH in 2019, and Damien Miller that changed it to sntrup761 in 2020. Does anyone have more to add to the history of this work?

Once I had sharpened my xml2rfc skills, preparing a document describing the hybrid construct between the sntrup761 key-encapsulation mechanism and the X25519 key agreement method in a non-SSH fashion was easy. I do not know if this work is useful, but it may serve as a reference for further study. I published draft-josefsson-ntruprime-hybrid-00 here.

Finally, how about an IETF document on the base Streamlined NTRU Prime? Explaining all the details, and especially the math behind it, would be a significant effort. I started doing that, but realized it is a subjective call when to stop explaining things. If we can’t assume that the reader knows about lattice math, is a document like this the best place to teach it? I settled for the most minimal approach instead, merely giving an introduction to the algorithm and including SageMath and C reference implementations together with test vectors. The IETF audience rarely understands math, so I think it is better to focus on the bits on the wire and the algorithm interfaces. Everything here was created by the Streamlined NTRU Prime team, I merely modified it a bit hoping I didn’t break too much. I have now published draft-josefsson-ntruprime-streamlined-00 here.

I maintain the IETF documents on my ietf-ntruprime GitLab page, feel free to open merge requests or raise issues to help improve them.

To have confidence that the code was working properly, I ended up preparing a branch with sntrup761 for the GNU-project Nettle and have submitted it upstream for review. I had the misfortune of having to understand and implement NIST’s DRBG-CTR to compute the sntrup761 known-answer tests, and what a mess it is. Why does a deterministic random generator support re-seeding? Why does it support non-full entropy derivation? What’s with the key size vs block size confusion? What’s with the optional parameters? What’s with having multiple algorithm descriptions? Luckily I was able to extract a minimal but working implementation that is easy to read. I can’t locate DRBG-CTR test vectors, anyone? Does anyone have sntrup761 test vectors that don’t use DRBG-CTR? One final reflection on publishing known-answer tests for an algorithm that uses random data: are the test vectors stable over different ways to implement the algorithm? Just consider if some optimization moved one randomness-extraction call before another: wouldn’t the output be different? Are there other ways to verify correctness of implementations?

As always, happy hacking!

11 May, 2023 10:03PM by simon

Shirish Agarwal

India Press freedom, Profiteering, AMD issues in the wild.

India Press Freedom

Just about a week back, India again slipped in the Freedom index, this time falling to 161 out of 180 countries. The RW again made a lot of noise as they cannot fathom why this has been happening. A recent news story gives some idea. Every year NCRB (National Crime Records Bureau) puts out its statistics of crimes happening across the country. The report is in the public domain. Now, according to the report shared, around 40k women from Gujarat alone disappeared in the last five years. This is a state where the BJP has been ruling for the last 30 odd years. When this report went viral, in almost all national newspapers the news was censored/blacked out. For e.g. check out newindianexpress.com, likewise TOI and other newspapers, the news has been 404. The only place that you can get that news is in minority papers like siasat. But the story didn’t end there. While the NCW (National Commission of Women) pointed out similar stuff happening in J&K, Gujarat Police claimed they got almost 39k women back. Now ideally, it should have been in the NCRB data as an addendum, as the report can be challenged. But as this news went viral, nobody knows what in the above is true or false. What the BJP has been doing is, whenever they get questioned, they try to muddy the waters like that. And most of the time, such news doesn’t make it to court, so the party gets a freebie of sorts as they are not legally challenged. Even if somebody asks why the Gujarat Police didn’t do it, as the NCRB report is jointly made with the help of all states, and especially with the BJP both in the Center and the States, they cannot give any excuse. The only excuse you see or hear is whataboutism, unfortunately 😦

Profiteering on I.T. Hardware

I was chatting with a friend yesterday who is an enthusiast like me but has been more alert about what has been happening in the CPU, motherboard and RAM world. I was simply shocked to hear the prices of motherboards which are three years old, even middling ones. For e.g. the last time I bought a mobo, I spent about 6k but that was for an ATX motherboard. Most ITX motherboards usually sold for around INR 4k/- or even lower. I remember Via especially as their mobos were even cheaper, around INR 1.5-2k/-. Even before the pandemic, many motherboard manufacturers had closed down shop, leaving only a few in the market. As only a few remained, prices started going higher. The pandemic turned it into a seller’s market overnight as most people were stuck at home and needed good rigs for either work or leisure or both. The manufacturers of CPUs, motherboards, GPUs and power supplies (SMPS) named their prices and people bought them. So in 2023, high prices remained while warranty periods started coming down. Governments also upped customs and various other duties. So all of them are hand in glove in this situation. So as shared before, what I have been offered is a 4 year old motherboard with a CPU of that time. I haven’t bought it, nor do I intend to in the short-term future, but I am extremely disappointed with the state of affairs 😦

AMD Issues

It’s just been a couple of hard weeks for AMD, apparently. The first has been the TPM (Trusted Platform Module) issue that was shown by a couple of security researchers. From what is known, apparently with $200 worth of tools and with some time you can hack into somebody’s machine if you have physical access. Ironically, MS made a huge show about TPM and also made it sort of a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding and again AMD issuing a somewhat wishy-washy response. 😦 Asus though made some changes, but whether it is for Zen4 or only 5 parts is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever 😦

11 May, 2023 06:17AM by shirishag75

May 10, 2023

Charles Plessy

Upvote to patch Firefox to render Markdown

I previously wrote that when Firefox receives a file whose media type is text/markdown, it prompts the user to download it, whereas other browsers display rendered results.

Now it is possible to upvote a proposal on connect.mozilla.org asking that Firefox renders Markdown by default.

10 May, 2023 11:43PM