March 02, 2024

Ravi Dwivedi

Malaysia Trip

Last month, I took a trip to Malaysia and Thailand, staying six days in each country. I selected these countries because both were granting visa-free entry to Indian tourists for a limited time window. This post covers the Malaysia part; the Thailand part will be covered in the next post. If you want to travel to either of these countries during the visa-free window, I have written up all the questions asked during immigration and at airports during this trip here, which might be of help.

I mostly stayed in Kuala Lumpur and visited places around it. Before the trip, I had planned to visit Ipoh and the Cameron Highlands too, but could not fit them in. I found planning a trip to Malaysia a little difficult. The country consists of two main landmasses - Peninsular Malaysia and Malaysian Borneo. Then there are more islands - Langkawi, Penang, and the Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan, and I wish to visit more places on my next Malaysia trip.

My first night’s hostel was booked in the Chinatown area of Kuala Lumpur, near the Pasar Seni LRT station. As soon as I checked in and entered my room, I met another Indian named Fletcher, and we accompanied each other for the rest of the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia can be quite cheap. The Malaysian currency is the Malaysian Ringgit (MYR); 1 MYR is roughly equal to 18 INR. For 2 MYR, you can get a good masala tea in Little India, and a masala dosa costs around 4-5 MYR. Vegetarian food is widely available in Kuala Lumpur, thanks to the Tamil community. I also tried Mee Goreng, which was vegetarian, and I found it fine in terms of taste. When I looked up Mee Goreng on Wikipedia, I found out that it is unique to Indian immigrants in Malaysia (and neighboring countries), but you don’t get it in India!

Mee Goreng, a noodle dish popular in Malaysia.

For the next day, Fletcher had planned a trip to Genting Highlands and pre-booked everything. I had planned to join him, but when we went to KL Sentral to take the bus, tickets for his bus were sold out. I could have taken a bus at a different time, but decided to visit some other place for the day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi who also wanted to go to Genting Highlands; since they could not get bus tickets for that day, they bought tickets for the next day and instead planned for Batu Caves. I joined them and went to Batu Caves.

After returning from Batu Caves, we went our separate ways. I went back to my hostel to rest and later went to the Petronas Towers at night. The Petronas Towers are the icon of Kuala Lumpur, so having a photo there was a must. I was at the towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands, and we planned to meet at KL Sentral and head for dinner.

Me at Petronas Towers.

We went back to the same place as the day before, where I had eaten Mee Goreng. This time we had dosa and masala tea. Their masala tea from the previous day had been tasty, which is why I sought them out in the first place. We also met a Malaysian family of Indian ancestry dining there and had a nice conversation. Then we went to a place in the Pasar Seni market to eat roti canai. Roti canai is a popular dish in Malaysia, usually non-vegetarian, but I took the vegetarian version.

Photo with Malaysians.

The next day, we went to Berjaya Time Square, a shopping complex which sells pretty cheap items for daily use, and souvenirs too. However, I bought my souvenirs from Petaling Street, which is in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and is famous for its nightlife.

After that, Fletcher left for Bangkok, while I stayed in Malaysia for two more days. The next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by night. On the remaining day, I just roamed around Bukit Bintang. Then I took a flight to Bangkok on 7th Feb, which I will cover in the next post.

In Malaysia, I met so many people from different countries - apart from people from the Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me.

My expenses for food + accommodation + travel added up to 10,000 INR for a week in Malaysia, while the flights cost: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi).

For OpenStreetMap users, the good news is that Kuala Lumpur is fairly well-mapped on OpenStreetMap.

Tips

  • I bought a local SIM from a shop in the KL Sentral station complex which had “news” in its name (I forgot the exact name, and there are two shops with “news” in their names); it was the cheapest option I could find. The SIM cost 10 MYR for 5 GB of data for a week. If you want to make calls too, you need to spend an extra 5 MYR.

  • 7-Eleven and KK Mart convenience stores are everywhere in the city, and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese there to eat.

  • A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.

  • For shopping on a budget, you can go to Petaling Street, Berjaya Time Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang, just near the metro/monorail station, which had very good prices. Check out the location of the shop on OpenStreetMap.

02 March, 2024 01:59PM

March 01, 2024

Guido Günther

Free Software Activities February 2024

A short status update on what happened last month. Work in progress is marked as WiP:

GNOME Calls

  • Landed support to pick emergency call numbers based on location (until now, Calls picked the numbers from the SIM card only): Merge Request
  • Bugfix: Fix dial back - the action mistakenly got disabled in some circumstances: Merge Request, Issue.

Phosh and Phoc

As these two often overlap, I've put them in a common section:

Phosh Tour

Phosh Mobile Settings

Phosh OSK Stub

Livi Video Player

Phosh.mobi Website

  • Directly link to tarballs from the release page, e.g. here

If you want to support my work see donations.

01 March, 2024 05:07PM

Scarlett Gately Moore

Kubuntu: Week 4, Feature Freeze and what comes next.

First, I would like to give a big congratulations to KDE for a superb KDE 6 mega release 🙂 While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again: I firmly stand by the Kubuntu Council in the decision to stay with the rock-solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare ( KDE 4 anyone? ). So without further ado, my weekly wrap-up.

Kubuntu:

Continuing my efforts from last week ( Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads. ), it has been another wild and crazy week getting everything in before yesterday’s feature freeze. We will still be uploading the upcoming Plasma 5.27.11, as it is a bug fix release 🙂 and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are:

  • Kept a close eye on Excuses and fixed tests as needed. It seems riscv64 tests were turned off by default, which broke several of our builds.
  • I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following KDE packaging recommendations. Unfortunately, we cannot ship maliit-keyboard, as we get hit by LP 2039721, which makes for an unpleasant experience.
  • I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! https://kubuntu.org/news/kubuntu-graphic-design-contest/
  • Bug triage! Oh so many bugs! Some date back to when I worked on Kubuntu 10 years ago and Plasma 5 was new. I am triaging and reducing this list to more recent bugs ( which is a much smaller list ). This reaffirms our decision to go with a rock-solid, stable Plasma 5 for this LTS release.
  • I spent some time debugging kio-gdrive, which no longer works ( it works in Jammy ), so I am tracking down what is broken. I thought it was 2FA, but my non-2FA account doesn’t work either; it just repeatedly throws up the Google auth dialog. So this is still a WIP. It was suggested to me to disable online accounts altogether, but I would prefer to give users the full experience.
  • Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon 🙂
  • Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.

What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in Launchpad to easily track our bugs, as right now they are all over the place and hard to track down.

Snaps:

I have started the MRs to fix our latest 23.08.5 snaps, and I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSoC ideas that I really like and will mentor; hopefully we are not too late.

Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate

Thank you!

01 March, 2024 04:38PM by sgmoore

Junichi Uekawa

March.

March. Busy days.

01 March, 2024 01:05PM by Junichi Uekawa

Ravi Dwivedi

Fixing Mobile Data issue on Lineage OS

I have used Lineage OS on a couple of phones, but I noticed that the internet over mobile data was not working well on it. I am not sure why. This was the case on both a Xiaomi Mi A2 and a OnePlus 9 Pro. One day I met contrapunctus, and they looked at the settings on their phone and applied the same ones on mine, and it worked. So, I am going to write down here what worked for me.

The trick is to add an access point.

Go to Settings -> Network Settings -> Your SIM settings -> Access Point Names, then click on the ‘+’ symbol.

In the Name field, you can write anything; I wrote test. In the APN field, write www, then save. Below is a screenshot demonstrating the settings you have to change.

APN settings screenshot. Notice the circled entries.

This APN will now show up in the list of APNs, and you need to select it.

After this, my mobile data started working well, and I started getting speeds in line with my data plan. This is what worked for me on Lineage OS. Hopefully, it was of help to you :D

I will meet you in the next post.

01 March, 2024 09:04AM

Reproducible Builds (diffoscope)

diffoscope 259 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 259. This version includes the following changes:

[ Chris Lamb ]
* Don't error-out with a traceback if we encounter "struct.unpack"-related
  errors when parsing .pyc files. (Closes: #1064973)
* Fix compatibility with PyTest 8.0. (Closes: reproducible-builds/diffoscope#365)
* Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting
  can vary. (Re: reproducible-builds/diffoscope#364)

You can find out more by visiting the project homepage.

01 March, 2024 12:00AM

February 28, 2024

Dirk Eddelbuettel

RcppEigen 0.3.4.0.0 on CRAN: New Upstream, At Last

We are thrilled to share that RcppEigen has now upgraded to Eigen release 3.4.0! The new release 0.3.4.0.0 arrived on CRAN earlier today, and has been shipped to Debian as well. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

This update has been in the works for a full two and a half years! It all started with PR #102 by Yixuan, bringing the package-local changes for R integration forward to upstream release 3.4.0. We opened issue #103 to steer possible changes from reverse-dependency checking through. Lo and behold, this just … stalled, because a few substantial changes were needed and not coming. But after a long wait, and like a bolt out of a perfectly blue sky, Andrew revived it in January with a reverse-depends run of his own along with a set of PRs. That was the push that was needed, and I steered it along with a number of reverse-dependency checks and occasional emails to maintainers. We managed to bring it down to only three packages having a hiccup, and all three had received PRs thanks to Andrew - and had even merged them. So the plan became to release today following a final fourteen-day window. And CRAN was convinced by our arguments that we followed due process. So there it is! Big, big thanks to all who helped it along, especially Yixuan and Andrew, but also Mikael, who updated another patch set he had prepared for the previous release series.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.4.0.0 (2024-02-28)

  • The Eigen version has been upgraded to release 3.4.0 (Yixuan)

  • Extensive reverse-dependency checks ensure only three out of over 400 packages at CRAN are affected; PRs and patches helped other packages

  • The long-running branch also contains substantial contributions from Mikael Jagan (for the lme4 interface) and Andrew Johnson (revdep PRs)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 February, 2024 10:58PM

Daniel Lange

Opencollective shutting down

Update 28.02.2024 19:45 CET: There is now a blog entry at https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/ trying to discern the legal entities in the Open Collective ecosystem and recommending potential ways forward.


Gee, there is nothing on their blog yet, but I just [28.02.2024 00:07 CET] received this email from Mike Strode, Program Officer at the Open Collective Foundation:

Dear Daniel Lange,

It is with a heavy heart that I'm writing today to inform you that the Board of Directors of the Open Collective Foundation (OCF) has made the difficult decision to dissolve OCF, effective December 31, 2024.

We are proud of the work we have been able to do together. We have been honored to build community with you and the hundreds of other collectives hosted at the Open Collective Foundation.

What you need to know:

We are beginning a staged dissolution process that will allow our over 600 collectives the time to close or transition their work. Dissolving OCF will take many months, and involves settling all liabilities while spending down all funds in a legally compliant manner.

Our priority is to support our collectives in navigating this change. We want to provide collectives the longest possible runway to wind down or transition their operations while we focus on the many legal and financial tasks associated with dissolving a nonprofit.

March 15 is the last day to accept donations. You will have until September 30 to work with us to develop and implement a plan to spend down the money in your fund. Key dates are included at the bottom of this email.

We know this is going to be difficult, and we will do everything we can to ease the transition for you.

How we will support collectives:

It remains our fiduciary responsibility to safeguard each collective's charitable assets and ensure funds are used solely for specified charitable purposes.

We will be providing assistance and support to you, whether you choose to spend out and close down your collective or continue your work through another 501(c)(3) organization or fiscal sponsor.

Unfortunately, we had to say goodbye to several of our colleagues today as we pare down our core staff to reduce costs. I will be staying on staff to support collectives through this transition, along with Wayne Kleppe, our Finance Administrator.

What led to this decision:

From day one, OCF was committed to experimentation and innovation. We were dedicated to finding new ways to open up the nonprofit space, making it easier for people to raise and access funding so they can do good in their communities.

OCF was created by Open Collective Inc. (OCI), a company formed in 2015 with the goal of "enabling groups to quickly set up a collective, raise funds and manage them transparently." Soon after being founded by OCI, OCF went through a period of rapid growth. We responded to increased demand arising from the COVID-19 pandemic without taking the time to establish the appropriate systems and infrastructure to sustain that growth.

Unfortunately, over the past year, we have learned that Open Collective Foundation's business model is not sustainable with the number of complex services we have offered and the fees we pay to the Open Collective Inc. tech platform.

In late 2023, we made the decision to pause accepting new collectives in order to create space for us to address the issues. Unfortunately, it became clear that it would not be financially feasible to make the necessary corrections, and we determined that OCF is not viable.

What's next:

We know this news will raise questions for many of our collectives. We will be making space for questions and reactions in the coming weeks.

In the meantime, we have developed this FAQ which we will keep updated as more questions come in.

What you need to do next:

  • Review the FAQ
  • Discuss your options within your collective. Your options are:
    • spend down and close out your collective
    • spend down and transfer your collective to another fiscal sponsor, or
    • transfer your collective and funds to another charitable organization.
  • Reply-all to this email with any questions, requests, or to set up a time to talk. Please make sure generalinquiries@opencollective.org is copied on your email.

Dates to know:

  • Last day to accept funds/receive donations: March 15, 2024
  • Last day collectives can have employees: June 30, 2024
  • Last day to spend or transfer funds: September 30, 2024

In Care & Accompaniment,
Mike Strode
Program Officer
Open Collective Foundation

Our mailing address has changed! We are now located at 440 N. Barranca Avenue #3717, Covina, CA 91723, USA

28 February, 2024 07:45AM by Daniel Lange

February 26, 2024

Adnan Hodzic

App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run

The blog post you’re reading is hosted on a private Kubernetes cluster that runs inside my home. Another workload that’s running on the same cluster is...

The post App architecture with reliability in mind: From Kubernetes to Serverless with GCP Cloud Build & Cloud Run appeared first on FoolControl: Phear the penguin.

26 February, 2024 08:00PM by Adnan Hodzic

Sergio Durigan Junior

Planning to orphan Pagure on Debian

I have been thinking more and more about orphaning the Pagure Debian package. I don’t have the time to maintain it properly anymore, and I have also lost interest in doing so.

What’s Pagure

Pagure is a git forge written entirely in Python using pygit2. It was almost entirely developed by one person, Pierre-Yves Chibon. He is (was?) a Red Hat employee and started working on this new git forge almost 10 years ago because the company wanted to develop something in-house for Fedora. The software is amazing and I admire Pierre-Yves quite a lot for what he was able to achieve basically alone. Unfortunately, a few years ago Fedora decided to move to Gitlab and the Pagure development pretty much stalled.

Pagure in Debian

Packaging Pagure for Debian was hard, but it was also very fun. I learned quite a bit about many things (packaging and non-packaging related), interacted with the upstream community, decided to dogfood my own work and run my Pagure instance for a while, and tried to get newcomers to help me with the package (without much success, unfortunately).

I remember that when I had started to package Pagure, Debian was also moving away from Alioth and discussing options. For a brief moment Pagure was a contender, but in the end the community decided to self-host Gitlab, and that’s why we have Salsa now. I feel like I could have tipped the scales in favour of Pagure had I finished packaging it for Debian before the decision was made, but then again, to the best of my knowledge Salsa doesn’t use our Gitlab package anyway…

Are you interested in maintaining it?

If you’re interested in maintaining the package, please get in touch with me. I will happily pass the torch to someone else who is still using the software and wants to keep it healthy in Debian. If there is nobody interested, then I will just orphan it.

26 February, 2024 03:23AM

Freexian Collaborators

Long term support for Samba 4.17

Freexian is pleased to announce a partnership with Catalyst to extend the security support of Samba 4.17, which is the version packaged in Debian 12 Bookworm. Samba 4.17 will reach upstream’s end-of-support this upcoming March (2024), and the goal of this partnership is to extend it until June 2028 (i.e. the end of Debian 12’s regular security support).

One of the main aspects of this project is that it will also include support for Samba as Active Directory Domain Controller (AD-DC). Unfortunately, support for Samba as AD-DC in Debian 11 Bullseye, Debian 10 Buster and older releases has been discontinued before the end of the life cycle of those Debian releases. So we really expect to improve the situation of Samba in Debian 12 Bookworm, ensuring full support during the 5 years of regular security support.

We would like to mention that this is an experiment, and we will do our best to make it a success, and to try to continue it for Samba versions included in future Debian releases.

Our long term goal is to bring confidence to Samba’s upstream development community that they can mark some releases as being supported for 5 years (or more) and that the corresponding work will be funded by companies that benefit from this work (because we would have already built that community).

If your company relies on Samba and wants to help sustain LTS versions of Samba, please reach out to us. For companies using Debian, the simplest way is to subscribe to our Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Samba LTS when you send your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to contribute.

In the meantime, this project has been possible thanks to the current LTS sponsors and ELTS customers. We hope the whole community of Debian and Samba users will benefit from it.

For any question, don’t hesitate to contact us.

26 February, 2024 12:00AM

February 25, 2024

Ben Hutchings

Converted from Pyblosxom to Jekyll

I’ve been using Pyblosxom here for nearly 17 years, but have become increasingly dissatisfied with having to write HTML instead of Markdown.

Today I looked at upgrading my web server and discovered that Pyblosxom was removed from Debian after Debian 10, presumably because it wasn’t updated for Python 3.

I keep hearing about Jekyll as a static site generator for blogs, so I finally investigated how to use that and how to convert my existing entries. Fortunately it supports both HTML and Markdown (and probably other) input formats, so this was mostly a matter of converting metadata.
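
For the curious, the conversion boils down to something like the sketch below. This is a hypothetical illustration rather than the script I actually used; it assumes Pyblosxom’s entry format (title on the first line, then optional “#key value” metadata lines) and emits a Jekyll post with YAML front matter and a date-prefixed filename.

    #!/usr/bin/python3
    # Hypothetical sketch, not the actual conversion script.
    import sys
    from pathlib import Path

    def convert(entry: Path, date: str, outdir: Path) -> None:
        title, *rest = entry.read_text().splitlines()
        meta = {"layout": "post", "title": title}
        # Pyblosxom metadata lines look like "#key value".
        while rest and rest[0].startswith("#"):
            key, _, value = rest.pop(0)[1:].partition(" ")
            meta[key] = value
        front_matter = "\n".join(f"{k}: {v}" for k, v in meta.items())
        # Jekyll expects posts named YYYY-MM-DD-slug.EXT; HTML input is fine.
        out = outdir / f"{date}-{entry.stem}.html"
        out.write_text(f"---\n{front_matter}\n---\n" + "\n".join(rest) + "\n")

    if __name__ == "__main__":
        convert(Path(sys.argv[1]), sys.argv[2], Path(sys.argv[3]))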

I have my own crappy script for drafting, publishing, and listing blog entries, which also needed a bit of work to update, but that is now done.

If all has gone to plan, you should be seeing just one new entry in the feed but all permalinks to older entries still working.

25 February, 2024 08:55PM by Ben Hutchings

Russ Allbery

Review: The Fund

Review: The Fund, by Rob Copeland

Publisher: St. Martin's Press
Copyright: 2023
ISBN: 1-250-27694-2
Format: Kindle
Pages: 310

I first became aware of Ray Dalio when either he or his publisher plastered advertisements for The Principles all over the San Francisco 4th and King Caltrain station. If I recall correctly, there were also constant radio commercials; it was a whole thing in 2017. My brain is very good at tuning out advertisements, so my only thought at the time was "some business guy wrote a self-help book." I think I vaguely assumed he was a CEO of some traditional business, since that's usually who writes heavily marketed books like this. I did not connect him with hedge funds or Bridgewater, which I have a bad habit of confusing with Blackwater.

The Principles turns out to be more of a laundered cult manual than a self-help book. And therein lies a story.

Rob Copeland is currently with The New York Times, but for many years he was the hedge fund reporter for The Wall Street Journal. He covered, among other things, Bridgewater Associates, the enormous hedge fund founded by Ray Dalio. The Fund is a biography of Ray Dalio and a history of Bridgewater from its founding as a vehicle for Dalio's advising business until 2022 when Dalio, after multiple false starts and title shuffles, finally retired from running the company. (Maybe. Based on the history recounted here, it wouldn't surprise me if he was back at the helm by the time you read this.)

It is one of the wildest, creepiest, and most abusive business histories that I have ever read.

It's probably worth mentioning, as Copeland does explicitly, that Ray Dalio and Bridgewater hate this book and claim it's a pack of lies. Copeland includes some of their denials (and many non-denials that sound as good as confirmations to me) in footnotes that I found increasingly amusing.

A lawyer for Dalio said he "treated all employees equally, giving people at all levels the same respect and extending them the same perks."

Uh-huh.

Anyway, I personally know nothing about Bridgewater other than what I learned here and the occasional mention in Matt Levine's newsletter (which is where I got the recommendation for this book). I have no independent information whether anything Copeland describes here is true, but Copeland provides the typical extensive list of notes and sourcing one expects in a book like this, and Levine's comments indicated it's generally consistent with Bridgewater's industry reputation. I think this book is true, but since the clear implication is that the world's largest hedge fund was primarily a deranged cult whose employees mostly spied on and rated each other rather than doing any real investment work, I also have questions, not all of which Copeland answers to my satisfaction. But more on that later.

The center of this book are the Principles. These were an ever-changing list of rules and maxims for how people should conduct themselves within Bridgewater. Per Copeland, although Dalio later published a book by that name, the version of the Principles that made it into the book was sanitized and significantly edited down from the version used inside the company. Dalio was constantly adding new ones and sometimes changing them, but the common theme was radical, confrontational "honesty": never being silent about problems, confronting people directly about anything that they did wrong, and telling people all of their faults so that they could "know themselves better."

If this sounds like textbook abusive behavior, you have the right idea. This part Dalio admits to openly, describing Bridgewater as a firm that isn't for everyone but that achieves great results because of this culture. But the uncomfortably confrontational vibes are only the tip of the iceberg of dysfunction. Here are just a few of the ways this played out according to Copeland:

  • Dalio decided that everyone's opinions should be weighted by the accuracy of their previous decisions, to create a "meritocracy," and therefore hired people to build a social credit system in which people could use an app to constantly rate all of their co-workers. This almost immediately devolved into out-group bullying worthy of a high school, with employees hurriedly down-rating and ostracizing any co-worker that Dalio down-rated.

  • When an early version of the system uncovered two employees at Bridgewater with more credibility than Dalio, Dalio had the system rigged to ensure that he always had the highest ratings and was not affected by other people's ratings.

  • Dalio became so obsessed with the principle of confronting problems that he created a centralized log of problems at Bridgewater and required employees to find and report a quota of ten or twenty new issues every week or have their bonus docked. He would then regularly pick some issue out of the issue log, no matter how petty, and treat it like a referendum on the worth of the person responsible for the issue.

  • Dalio's favorite way of dealing with a problem was to put someone on trial. This involved extensive investigations followed by a meeting where Dalio would berate the person and harshly catalog their flaws, often reducing them to tears or panic attacks, while smugly insisting that having an emotional reaction to criticism was a personality flaw. These meetings were then filmed and added to a library available to all Bridgewater employees, often edited to remove Dalio's personal abuse and to make the emotional reaction of the target look disproportionate. The ones Dalio liked the best were shown to all new employees as part of their training in the Principles.

  • One of the best ways to gain institutional power in Bridgewater was to become sycophantically obsessed with the Principles and to be an eager participant in Dalio's trials. The highest levels of Bridgewater featured constant jockeying for power, often by trying to catch rivals in violations of the Principles so that they would be put on trial.

In one of the common and all-too-disturbing connections between Wall Street finance and the United States' dysfunctional government, James Comey (yes, that James Comey) ran internal security for Bridgewater for three years, meaning that he was the one who pulled evidence from surveillance cameras for Dalio to use to confront employees during his trials.

In case the cult vibes weren't strong enough already, Bridgewater developed its own idiosyncratic language worthy of Scientology. The trials were called "probings," firing someone was called "sorting" them, and rating them was called "dotting," among many other Bridgewater-specific terms. Needless to say, no one ever probed Dalio himself. You will also be completely unsurprised to learn that Copeland documents instances of sexual harassment and discrimination at Bridgewater, including some by Dalio himself, although that seems to be a relatively small part of the overall dysfunction. Dalio was happy to publicly humiliate anyone regardless of gender.

If you're like me, at this point you're probably wondering how Bridgewater continued operating for so long in this environment. (Per Copeland, since Dalio's retirement in 2022, Bridgewater has drastically reduced the cult-like behaviors, deleted its archive of probings, and de-emphasized the Principles.) It was not actually a religious cult; it was a hedge fund that has to provide investment services to huge, sophisticated clients, and by all accounts it's a very successful one. Why did this bizarre nightmare of a workplace not interfere with Bridgewater's business?

This, I think, is the weakest part of this book. Copeland makes a few gestures at answering this question, but none of them are very satisfying.

First, it's clear from Copeland's account that almost none of the employees of Bridgewater had any control over Bridgewater's investments. Nearly everyone was working on other parts of the business (sales, investor relations) or on cult-related obsessions. Investment decisions (largely incorporated into algorithms) were made by a tiny core of people and often by Dalio himself. Bridgewater also appears to not trade frequently, unlike some other hedge funds, meaning that they probably stay clear of the more labor-intensive high-frequency parts of the business.

Second, Bridgewater took off as a hedge fund just before the hedge fund boom in the 1990s. It transformed from Dalio's personal consulting business and investment newsletter to a hedge fund in 1990 (with an earlier investment from the World Bank in 1987), and the 1990s were a very good decade for hedge funds. Bridgewater, in part due to Dalio's connections and effective marketing via his newsletter, became one of the largest hedge funds in the world, which gave it a sort of institutional momentum. No one was questioned for putting money into Bridgewater even in years when it did poorly compared to its rivals.

Third, Dalio used the tried and true method of getting free publicity from the financial press: constantly predict an upcoming downturn, and aggressively take credit whenever you were right. From nearly the start of his career, Dalio predicted economic downturns year after year. Bridgewater did very well in the 2000 to 2003 downturn, and again during the 2008 financial crisis. Dalio aggressively takes credit for predicting both of those downturns and positioning Bridgewater correctly going into them. This is correct; what he avoids mentioning is that he also predicted downturns in every other year, the majority of which never happened.

These points together create a bit of an answer, but they don't feel like the whole picture and Copeland doesn't connect the pieces. It seems possible that Dalio may simply be good at investing; he reads obsessively and clearly enjoys thinking about markets, and being an abusive cult leader doesn't take up all of his time. It's also true that to some extent hedge funds are semi-free money machines, in that once you have a sufficient quantity of money and political connections you gain access to investment opportunities and mechanisms that are very likely to make money and that the typical investor simply cannot access. Dalio is clearly good at making personal connections, and invested a lot of effort into forming close ties with tricky clients such as pools of Chinese money.

Perhaps the most compelling explanation isn't mentioned directly in this book but instead comes from Matt Levine. Bridgewater touts its algorithmic trading over humans making individual trades, and there is some reason to believe that consistently applying an algorithm without regard to human emotion is a solid trading strategy in at least some investment areas. Levine has asked in his newsletter, tongue firmly in cheek, whether the bizarre cult-like behavior and constant infighting is a strategy to distract all the humans and keep them from messing with the algorithm and thus making bad decisions.

Copeland leaves this question unsettled. Instead, one comes away from this book with a clear vision of the most dysfunctional workplace I have ever heard of, and an endless litany of bizarre events each more astonishing than the last. If you like watching train wrecks, this is the book for you. The only drawback is that, unlike other entries in this genre such as Bad Blood or Billion Dollar Loser, Bridgewater is a wildly successful company, so you don't get the schadenfreude of seeing a house of cards collapse. You do, however, get a helpful mental model to apply to the next person who tries to talk to you about "radical honesty" and "idea meritocracy."

The flaw in this book is that the existence of an organization like Bridgewater is pointing to systematic flaws in how our society works, which Copeland is largely uninterested in interrogating. "How could this have happened?" is a rather large question to leave unanswered. The sheer outrageousness of Dalio's behavior also gets a bit tiring by the end of the book, when you've seen the patterns and are hearing about the fourth variation. But this is still an astonishing book, and a worthy entry in the genre of capitalism disasters.

Rating: 7 out of 10

25 February, 2024 03:46AM

Jacob Adams

AAC and Debian

Currently, in a default installation of Debian with the GNOME desktop, Bluetooth headphones that require the AAC codec1 cannot be used. As the Debian wiki outlines, using the AAC codec over Bluetooth, while technically supported by PipeWire, is explicitly disabled in Debian at this time. This is because the fdk-aac library needed to enable this support is currently in the non-free component of the repository, meaning that PipeWire, which is in the main component, cannot depend on it.

How to Fix it Yourself

If what you, like me, need is simply for Bluetooth Audio to work with AAC in Debian’s default desktop environment2, then you’ll need to rebuild the pipewire package to include the AAC codec. While the current version in Debian main has been built with AAC deliberately disabled, it is trivial to enable if you can install a version of the fdk-aac library.

I preface this with the usual caveats when it comes to patent and licensing controversies. I am not a lawyer, building this package and/or using it could get you into legal trouble.

These instructions have only been tested on an up-to-date copy of Debian 12.

  1. Install pipewire’s build dependencies
    sudo apt install build-essential devscripts
    sudo apt build-dep pipewire
    
  2. Install libfdk-aac-dev
    sudo apt install libfdk-aac-dev
    

    If the above doesn’t work you’ll likely need to enable non-free and try again

    sudo sed -i 's/main/main non-free/g' /etc/apt/sources.list
    sudo apt update
    

    Alternatively, if you wish to ensure you are maximally license-compliant and patent un-infringing3, you can instead build fdk-aac-free which includes only those components of AAC that are known to be patent-free3. This is what should eventually end up in Debian to resolve this problem (see below).

    sudo apt install git-buildpackage
    mkdir fdk-aac-source
    cd fdk-aac-source
    git clone https://salsa.debian.org/multimedia-team/fdk-aac
    cd fdk-aac
    gbp buildpackage
    sudo dpkg -i ../libfdk-aac2_*deb ../libfdk-aac-dev_*deb
    
  3. Get the pipewire source code
    mkdir pipewire-source
    cd pipewire-source
    apt source pipewire
    

    This will create a bunch of files within the pipewire-source directory, but you’ll only need the pipewire-<version> folder; this contains all the files you’ll need to build the package, with all the Debian-specific patches already applied. Note that you don’t want to run the apt source command as root, as it will then create files that your regular user cannot edit.

  4. Fix the dependencies and build options. To fix up the build scripts to use the fdk-aac library, you need to save the following as pipewire-source/aac.patch
    --- debian/control.orig
    +++ debian/control
    @@ -40,8 +40,8 @@
                 modemmanager-dev,
                 pkg-config,
                 python3-docutils,
    -               systemd [linux-any]
    -Build-Conflicts: libfdk-aac-dev
    +               systemd [linux-any],
    +               libfdk-aac-dev
     Standards-Version: 4.6.2
     Vcs-Browser: https://salsa.debian.org/utopia-team/pipewire
     Vcs-Git: https://salsa.debian.org/utopia-team/pipewire.git
    --- debian/rules.orig
    +++ debian/rules
    @@ -37,7 +37,7 @@
     		-Dauto_features=enabled \
     		-Davahi=enabled \
     		-Dbluez5-backend-native-mm=enabled \
    -		-Dbluez5-codec-aac=disabled \
    +		-Dbluez5-codec-aac=enabled \
     		-Dbluez5-codec-aptx=enabled \
     		-Dbluez5-codec-lc3=enabled \
     		-Dbluez5-codec-lc3plus=disabled \
    

    Then you’ll need to run patch from within the pipewire-<version> folder created by apt source:

    patch -p0 < ../aac.patch
    
  5. Build pipewire
    cd pipewire-*
    debuild
    

    Note that you will likely see an error from debsign at the end of this process; this is harmless, you simply don’t have a GPG key set up to sign your newly-built package4. Packages don’t need to be signed to be installed, and debsign uses a somewhat non-standard signing process that dpkg does not check anyway.

  6. Install libspa-0.2-bluetooth
    sudo dpkg -i libspa-0.2-bluetooth_*.deb
    
  7. Restart PipeWire and/or Reboot
    sudo reboot
    

    Theoretically there’s a set of services to restart here that would get PipeWire to pick up the new library, probably just pipewire itself. But it’s just as easy to reboot and ensure everything is using the correct library.

Why

This is a slightly unusual situation, as the fdk-aac library is licensed under what even the GNU project acknowledges is a free software license. However, this license explicitly informs the user that they need to acquire a patent license to use this software5:

3. NO PATENT LICENSE

NO EXPRESS OR IMPLIED LICENSES TO ANY PATENT CLAIMS, including without limitation the patents of Fraunhofer, ARE GRANTED BY THIS SOFTWARE LICENSE. Fraunhofer provides no warranty of patent non-infringement with respect to this software. You may use this FDK AAC Codec software or modifications thereto only for purposes that are authorized by appropriate patent licenses.

To quote the GNU project:

Because of this, and because the license author is a known patent aggressor, we encourage you to be careful about using or redistributing software under this license: you should first consider whether the licensor might aim to lure you into patent infringement.

AAC is covered by a number of patents, which expire at some point in the 2030s6. As such the current version of the library is potentially legally dubious to ship with any other software, as it could be considered patent-infringing3.

Fedora’s solution

Since 2017, Fedora has included a modified version of the library as fdk-aac-free, see the announcement and the bugzilla bug requesting review.

This version of the library includes only the AAC LC profile, which is believed to be entirely patent-free3.

Based on this, there is an open bug report in Debian requesting that the fdk-aac package be moved to the main component and that the pipewire package be updated to build against it.

The Debian NEW queue

To resolve these bugs, a version of fdk-aac-free has been uploaded to Debian by Jeremy Bicha. However, to make it into Debian proper, it must first pass through the ftpmaster’s NEW queue. The current version of fdk-aac-free has been in the NEW queue since July 2023.

Based on conversations in some of the bugs above, it’s been there since at least 20227.

I hope this helps anyone stuck with AAC to get their hardware working for them while we wait for the package to eventually make it through the NEW queue.

Discuss on Hacker News

  1. Such as, for example, any Apple AirPods, which only support AAC AFAICT. 

  2. Which, as of Debian 12 is GNOME 3 under Wayland with PipeWire. 

  3. I’m not a lawyer, I don’t know what kinds of infringement might or might not be possible here, do your own research, etc. 

  4. And if you DO have a key setup with debsign you almost certainly don’t need these instructions. 

  5. This was originally phrased as “explicitly does not grant any patent rights.” It was pointed out on Hacker News that this is not exactly what it says, as it also includes a specific note that you’ll need to acquire your own patent license. I’ve now quoted the relevant section of the license for clarity. 

  6. Wikipedia claims the “base” patents expire in 2031, with the extensions expiring in 2038, but its source for these claims is some guy’s spreadsheet in a forum. The same discussion also brings up Wikipedia’s claim and casts some doubt on it, so I’m not entirely sure what’s correct here, but I didn’t feel like doing a patent deep-dive today. If someone can provide a clear answer that would be much appreciated. 

  7. According to Jeremy Bícha: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1021370#17 

25 February, 2024 12:00AM

February 24, 2024

Niels Thykier

Language Server for Debian: Spellchecking

This is my third update on writing a language server for Debian packaging files, which aims at providing a better developer experience for Debian packagers.

Let's go over what I have done since the last report.

Semantic token support

I have added support for what the Language Server Protocol (LSP) calls semantic tokens. These are used to provide the editor with insights into tokens of interest for users. Allegedly, this is what editors would use for syntax highlighting as well.

Unfortunately, eglot (emacs) does not support semantic tokens, so I was not able to test this. There is a 3-year-old PR for supporting it, with the last update being ~3 months ago and basically saying "Please sign the Copyright Assignment". I pinged the GitHub issue in the hope that it will get unstuck.

For good measure, I also checked whether I could try it via neovim. Before installing, I read the neovim docs, which helpfully listed the supported features. Sadly, I did not spot semantic tokens among those and parked it there.

That was a bit of a bummer, but I left the feature in for now. If you have an LSP capable editor that supports semantic tokens, let me know how it works for you! :)

Spellchecking

Finally, I implemented something Otto was missing! :)

This started with Paul Wise reminding me that there are Python bindings for the hunspell spellchecker. This enabled me to get started on a quick prototype that spellchecked the Description fields in debian/control. I also added spellchecking of comments while I was at it.

The spellchecker runs with the standard en_US dictionary from hunspell-en-us, which does not have a lot of technical terms in it, much less any of the Debian-specific slang. I spent considerable time providing a "built-in" wordlist of technical and Debian-specific terms to overcome this. I also made a "wordlist" of known Debian people that the spellchecker did not recognise. Said wordlist is fairly short as a proof of concept, and I fully expect it to be community-maintained if the language server becomes a success.
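
To give an idea of what the prototype does (this is a heavily simplified sketch rather than the actual code, and the wordlist content here is invented), the hunspell bindings boil down to something like this:

    # Simplified sketch; assumes python3-hunspell plus the en_US
    # dictionaries shipped in hunspell-en-us.
    import re

    import hunspell

    checker = hunspell.HunSpell("/usr/share/hunspell/en_US.dic",
                                "/usr/share/hunspell/en_US.aff")

    # "Built-in" wordlist of technical/Debian slang (invented sample);
    # add() only affects this session, not the dictionary files on disk.
    for known_word in ("debputy", "dpkg", "deb822"):
        checker.add(known_word)

    def spellcheck(text: str):
        """Yield (line number, word, suggestions) for unknown words."""
        for lineno, line in enumerate(text.splitlines()):
            for word in re.findall(r"[A-Za-z']+", line):
                if not checker.spell(word):
                    yield lineno, word, checker.suggest(word)

    for lineno, word, hints in spellcheck("A tool for maintaning packages"):
        print(f"line {lineno}: {word!r} -> {hints}")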

My second problem was performance. I had suspected that spellchecking was not the fastest thing in the world, so I added a very small language server for debian/changelog, which only supports spellchecking the textual part. Even for a small changelog of 1000 lines, the spellchecking takes about 5 seconds, which confirmed my suspicion. With every change you make, the existing diagnostics hang around for 5 seconds before being updated. Notably, in emacs, it seems that diagnostics get translated into an absolute character offset, so all diagnostics after the change get misplaced for every character you type.

Now, there is little I can do to speed up hunspell. But I can, as always, cheat. The way diagnostics work in the LSP is that the server listens to a set of notifications like "document opened" or "document changed". In response to these, the server can start its diagnostics scan of the document and eventually publish all the diagnostics to the editor. The spec is quite clear that the server owns the diagnostics and that they are sent as a "notification" (that is, fire-and-forget). Accordingly, there is nothing that prevents the server from publishing diagnostics multiple times for a single trigger. The only requirement is that the server publishes the accumulated diagnostics in every publish (that is, no delta updating).

Leveraging this, I had the language server for debian/changelog scan the document and publish once for approximately every 25 typos (diagnostics) spotted, as sketched below. This means you quickly get your first results, and those publishes clear the obsolete diagnostics. Thereafter, you get frequent updates to the remainder of the document if you do not perform any further changes. That is, up to a predefined maximum number of typos, so we do not overload the client for longer changelogs. If you make any changes, it resets and starts over.
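
In sketch form, the publishing loop looks roughly like this (assuming pygls; spellcheck_diagnostics() and the exact numbers are stand-ins, not the real code):

    # Sketch of the incremental publishing trick. publish_diagnostics()
    # is a fire-and-forget notification, and each call replaces the
    # previously published set, so partial results can be flushed early.
    BATCH_SIZE = 25
    MAX_DIAGNOSTICS = 1000  # invented cap to avoid flooding the client

    def scan_changelog(ls, uri, lines):
        diagnostics = []
        for diagnostic in spellcheck_diagnostics(lines):  # stand-in helper
            diagnostics.append(diagnostic)
            if len(diagnostics) >= MAX_DIAGNOSTICS:
                break
            if len(diagnostics) % BATCH_SIZE == 0:
                # Publish the accumulated set (not a delta); this also
                # clears any obsolete markers from the previous run.
                ls.publish_diagnostics(uri, list(diagnostics))
        ls.publish_diagnostics(uri, diagnostics)  # final, complete set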

The only bit missing was dealing with concurrency. By default, a pygls language server is single-threaded, and it is not great if the language server hangs for 5 seconds every time you type anything. Fortunately, pygls has built-in support for asyncio and threaded handlers. For now, I wrote an async handler that awaits after each line and set up some manual detection to stop an obsolete diagnostics run, as sketched below. This means the server will fairly quickly abandon an obsolete run.
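
A minimal sketch of such a handler, assuming a recent pygls with lsprotocol (the staleness counter and the spellcheck_line() helper are stand-ins; the real code is more involved):

    import asyncio

    from lsprotocol import types
    from pygls.server import LanguageServer

    server = LanguageServer("debian-changelog-ls", "v0")
    _doc_versions: dict[str, int] = {}  # stand-in staleness tracking

    @server.feature(types.TEXT_DOCUMENT_DID_CHANGE)
    async def _on_change(ls: LanguageServer,
                         params: types.DidChangeTextDocumentParams) -> None:
        uri = params.text_document.uri
        version = _doc_versions[uri] = _doc_versions.get(uri, 0) + 1
        doc = ls.workspace.get_text_document(uri)
        diagnostics = []
        for lineno, line in enumerate(doc.lines):
            if _doc_versions[uri] != version:
                return  # a newer edit arrived; abandon this obsolete run
            diagnostics.extend(spellcheck_line(lineno, line))  # stand-in
            await asyncio.sleep(0)  # yield to the event loop per line
        ls.publish_diagnostics(uri, diagnostics)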

Also, as a side-effect of working on the spellchecking, I fixed multiple typos in the changelog of debputy. :)

Follow up on the "What next?" from my previous update

In my previous update, I mentioned I had to finish up my python-debian changes to support getting the location of a token in a deb822 file. That was done; the MR is now filed and pending review. Hopefully, it will be merged and uploaded soon. :)

I also submitted my proposal for a different way of handling relationship substvars to debian-devel. So far, it seems to have received only positive feedback. I hope it stays that way and that we will have this feature soon. Guillem proposed moving some of this into dpkg, which might delay my plans a bit. However, it might be for the better in the long run, so I will wait a bit to see what happens on that front. :)

As noted above, I managed to add debian/changelog as a supported format for the language server. Even if it only does spellchecking and trimming of trailing newlines on save, it technically is a new format, and therefore I can cross that item off my list. :D

Unfortunately, I did not manage to write a linter variant that does not involve using an LSP-capable editor, so that is still pending. Instead, I submitted an MR against elpa-dpkg-dev-el to have it recognize all the fields that the debian/control LSP knows about at this time, to offset the lack of semantic token support in eglot.

From here...

My sprinting on this topic will soon come to an end, so I have to be a bit more careful now with what tasks I open!

I think I will narrow my focus to providing a batch linting interface. Ideally with an auto-fix for some of the more mechanical issues, where there is little doubt about the answer.

Additionally, I think the spellchecking will need a bit more maturing. My current code still trips on naming patterns that are "clearly" verbatim or code references, like things written in CamelCase or SCREAMING_SNAKE_CASE. That gets annoying really quickly. It also trips on a lot of commands like dpkg-gencontrol, but that is harder to fix, since such a token could have been a real word. I think those will have to be fixed by people putting quotes around the commands. Maybe the most popular ones will end up in the wordlist.

Beyond that, I will play it by ear if I have any time left. :)

24 February, 2024 08:45AM by Niels Thykier

February 23, 2024

Scarlett Gately Moore

Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads.

Witch Wells AZ Sunset

It has been a very busy 3 weeks here in Kubuntu!

Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/

Work done for the upcoming 24.04 LTS release:

  • Frameworks 5.115 is in proposed waiting for the Qt transition to complete.
  • Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.
  • Applications 23.08.5 is being worked on right now.
  • Added support for riscv64 hardware.
  • Bug triaging and several fixes!
  • I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!
  • Aaron and the Kfocus team have been doing some amazing work getting Calamares perfected for release! Thank you!
  • Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!
  • I have added several more apparmor profiles for packages affected by https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844
  • I have aligned our meta package to adhere to https://community.kde.org/Distributions/Packaging_Recommendations and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!

We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/

Debian:

I have uploaded to NEW the following packages:

  • kde-inotify-survey
  • plank-player
  • aura-browser

I am currently working on:

  • alligator
  • xwaylandvideobridge

KDE Snaps:

KDE applications 23.08.5 have been uploaded to the Candidate channel; testing help is welcome. https://snapcraft.io/search?q=KDE I have also been working on bug fixes, time allowing.

My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/

Thank you for stopping by!

~Scarlett

23 February, 2024 11:42AM by sgmoore

Gunnar Wolf

10 things software developers should learn about learning

This post is a review for Computing Reviews for 10 things software developers should learn about learning , a article published in Communications of the ACM

As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like commonsense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

23 February, 2024 01:56AM

Reproducible Builds (diffoscope)

diffoscope 258 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 258. This version includes the following changes:

[ Chris Lamb ]
* Use the 7zip package (over p7zip-full) after package transition.
  (Closes: #1063559)
* Update debian/tests/control.

[ Vagrant Cascadian ]
* Fix a typo in the package name field (!) within debian/changelog.

You can find out more by visiting the project homepage.

23 February, 2024 12:00AM

February 21, 2024

Niels Thykier

Expanding on the Language Server (LSP) support for debian/control

I have spent some more time on improving my language server for debian/control. Today, I managed to provide the following features:

  • The X- style prefixes for field names are now understood and handled. This means the language server now considers XC-Package-Type the same as Package-Type.

  • More diagnostics:

    • Fields without values now trigger an error marker
    • Duplicated fields now trigger an error marker
    • Fields used in the wrong paragraph now trigger an error marker
    • Typos in field names or values now trigger a warning marker. For field names, X- style prefixes are stripped before typo detection is done.
    • The value of the Section field is now validated against a dataset of known sections and triggers a warning marker if it is not known.
  • The "on-save trim end of line whitespace" now works. I had a logic bug in the server side code that made it submit "no change" edits to the editor.

  • The language server now provides "hover" documentation for field names. There is a small screenshot of this below. Sadly, emacs does not support markdown or, if it does, it does not announce that support. For now, the documentation is written in markdown, and the language server tags it as either markdown or plaintext depending on the support the editor announces.

  • The language server now provides quick fixes for some of the more trivial problems such as deprecated fields or typos of fields and values.

  • Added more known fields including the XS-Autobuild field for non-free packages along with a link to the relevant devref section in its hover doc.

This covers basically all my known omissions from the last update except spellchecking of the Description field.

An image of emacs showing documentation for the Provides field from the language server.

Spellchecking

Personally, I feel spellchecking would be a very welcome addition to the current feature set. However, reviewing my options, it seems that most of the spellchecking python libraries out there are not packaged for Debian, or at least not under the names I assumed they would be.

The alternative is to pipe the spellchecking to another program like aspell list. I did not test this fully, but aspell list does seem to do some input buffering that I cannot easily disable (at least not from the shell). Though, either way, the logic for this will not be trivial and aspell list does not seem to include the corrections either. So best case, you would get typo markers but no suggestions for what you should have typed. Not ideal.
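To sketch what that interface looks like (a rough example, assuming aspell and an English dictionary are installed): aspell list reads text on stdin and prints the misspelled words, one per line, with no suggested corrections.

printf 'A packge with a typo\n' | aspell list

The above should print just "packge". Mapping that output back to buffer positions, plus generating actual suggestions, is where the non-trivial logic would come in.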

Additionally, I am also concerned with the performance for this feature. For d/control, it will be a trivial matter in practice. However, I would be reusing this for d/changelog which is 99% free text with plenty of room for typos. For a regular linter, some slowness is acceptable as it is basically a batch tool. However, for a language server, this potentially translates into latency for your edits and that gets annoying.

While it is definitely on my long term todo list, I am a bit afraid that it can easily become a time sink. Admittedly, this does annoy me, because I wanted to cross off at least one of Otto's requested features soon.

On wrap-and-sort support

The other obvious request from Otto would be to automate wrap-and-sort formatting. Here, the problem is that "we" in Debian do not agree on the one true formatting of debian/control. In fact, I am fairly certain we do not even agree on whether we should all use wrap-and-sort. This implies we need a style configuration.

However, if we have a style configuration per person, then you get style "ping-pong" for packages where the co-maintainers do not all have the same style configuration. Additionally, it is very likely that you are a member of multiple packaging teams or groups that all have their own unique style. Ergo, only having a personal config file is doomed to fail.

The only "sane" option here that I can think of is to have or support "per package" style configuration. Something that would be committed to git, so the tooling would automatically pick up the configuration. Obviously, that is not fun for large packaging teams where you have to maintain one file per package if you want a consistent style across all packages. But it beats "style ping-pong" any day of the week.

Note that I am perfectly open to having a personal configuration file as a fallback for when the "per package" configuration file is absent.

The second problem is the question of which format to use and what to name this file. Since file formats and naming have never been controversial at all, this will obviously be the easy part of this problem. But the file should be parsable by both wrap-and-sort and the language server, so you get the same result regardless of which tool you use. If we do not ensure this, then we still have the style ping-pong problem as people use different tools.

This also seems like a time sink with no end. So, what next then...?

What next?

On the language server front, I will have a look at its support for providing semantic hints to the editors that might be used for syntax highlighting. While I think most common Debian editors have built-in syntax highlighting already, I would like this language server to stand on its own. I would like us to be in a situation where we do not have to implement yet another editor extension for Debian packaging files. At least not for editors that support the LSP spec.

On a different front, I have an idea for how we go about relationship related substvars. It is not directly related to this language server, except I got triggered by the language server "missing" a diagnostic for reminding people to add the magic Depends: ${misc:Depends}[, ${shlibs:Depends}] boilerplate. The magic boilerplate that you have to write even though we really should just fix this at a tooling level instead. Energy permitting, I will formulate a proposal for that and send it to debian-devel.
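For readers who do not have it memorized, the boilerplate in question is the Depends line of a binary package paragraph in debian/control, along these lines (foo is a placeholder package name):

Package: foo
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}

The substvars are then filled in at build time by the helper tooling and dpkg-gencontrol.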

Beyond that, I think I might start adding support for another file. I also need to wrap up my python-debian branch, so I can get the position support into Debian soon, which would remove one papercut for using this language server.

Finally, it might be interesting to see if I can extract a "batch-linter" version of the diagnostics and related quickfix features. If nothing else, the "linter" variant would enable many of you to get a "mini-Lintian" without having to do a package build first.

21 February, 2024 10:00PM by Niels Thykier

February 20, 2024

Language Server (LSP) support for debian/control

About a month ago, Otto Kekäläinen asked for editor extensions for debian related files on the debian-devel mailing list. In that thread, I concluded that what we were missing was a "Language Server" (LSP) for our packaging files.

Last week, I started a prototype for such a LSP for the debian/control file as a starting point based on the pygls library. The initial prototype worked and I could do very basic diagnostics plus completion suggestion for field names.

Current features

I got 4 basic features implemented, though I have only been able to test two of them in emacs.

  • Diagnostics or linting of basic issues.
  • Completion suggestions for all known field names that I could think of and values for some fields.
  • Folding ranges (untested). This feature enables the editor to "fold" multiple lines. It is often used with multi-line comments and that is the feature currently supported.
  • On save, trim trailing whitespace at the end of lines (untested). Might not be registered correctly on the server end.

Despite its very limited feature set, I feel editing debian/control in emacs is now a much more pleasant experience.

Coming back to the features that Otto requested, the above covers a grand total of zero. Sorry, Otto. It is not you, it is me.

Completion suggestions

For completion, all known fields are completed. Place the cursor at the start of the line or in a partially written out field name and trigger the completion in your editor. In my case, I can type R-R-R and trigger the completion and the editor will automatically replace it with Rules-Requires-Root as the only applicable match. Your mileage may vary since I delegate most of the filtering to the editor, meaning the editor has the final say about whether your input matches anything.

The only filtering done on the server side is that the server prunes out fields already used in the paragraph, so you are not presented with the option to repeat an already used field, which would be an error. Admittedly, not an error the language server detects at the moment, but other tools will.

When completing a field, if the field only has one non-default value, such as Essential, which can be either no (the default, but you should not use it) or yes, then the completion suggestion will complete the field along with its value.

This is mostly only applicable for "yes/no" fields such as Essential and Protected. But it does also trigger for Package-Type at the moment.

As for completing values, here the language server can complete the value for simple fields such as "yes/no" fields, Multi-Arch, Package-Type and Priority. I intend to add support for Section as well - maybe also Architecture.

Diagnostics

On the diagnostic front, I have added multiple diagnostics:

  • An error marker for syntax errors.
  • An error marker for missing a mandatory field like Package or Architecture. This also includes Standards-Version, which is admittedly mandatory by policy rather than because the tooling falls apart without it.
  • An error marker for adding Multi-Arch: same to an Architecture: all package (see the example after this list).
  • Error marker for providing an unknown value to a field with a set of known values. As an example, writing foo in Multi-Arch would trigger this one.
  • Warning marker for using deprecated fields such as DM-Upload-Allowed, or when setting a field to its default value for fields like Essential. The latter rule only applies to selected fields and notably Multi-Arch: no does not trigger a warning.
  • Info level marker if a field like Priority duplicates the value of the Source paragraph.
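As a concrete illustration (foo is a placeholder name), a paragraph like the following would trip the Multi-Arch marker above, since an Architecture: all package cannot be Multi-Arch: same:

Package: foo
Architecture: all
Multi-Arch: same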

Notable omissions at this time:

  • No errors are raised if a field does not have a value.
  • No errors are raised if a field is duplicated inside a paragraph.
  • No errors are raised if a field is used in the wrong paragraph.
  • No spellchecking of the Description field.
  • No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • Quick fixes to solve these problems... :)

Trying it out

If you want to try, it is sadly a bit more involved due to things not being uploaded or merged yet. Also, be advised that I will regularly rebase my git branches as I revise the code.

The setup:

  • Build and install the deb of the main branch of pygls from https://salsa.debian.org/debian/pygls (the package is in NEW and hopefully this step will soon just be a regular apt install).
  • Build and install the deb of the rts-locatable branch of my python-debian fork from https://salsa.debian.org/nthykier/python-debian (there is a draft MR of it as well on the main repo).
  • Build and install the deb of the lsp-support branch of debputy from https://salsa.debian.org/debian/debputy
  • Configure your editor to run debputy lsp debian/control as the language server for debian/control. This depends on your editor. I figured out how to do it for emacs (see below). I also found a guide for neovim at https://neovim.io/doc/user/lsp. Note that debputy can be run from any directory here. The debian/control is a reference to the file format and not a concrete file in this case.

Obviously, the setup should get easier over time. The first three bullet points should eventually get resolved by merges and uploads, meaning you end up with an apt install command instead.

For the editor part, I would obviously love it if we could add snippets for editors to make them automatically pick up the language server when the relevant file is installed.

Using the debputy LSP in emacs

The guide I found so far relies on eglot. The guide below assumes you have the elpa-dpkg-dev-el package installed for debian-control-mode. Though it should be a trivial matter to replace debian-control-mode with a different mode if you use a different mode for your debian/control file.

In your emacs init file (such as ~/.emacs or ~/.emacs.d/init.el), you add the following blob:

(with-eval-after-load 'eglot
    (add-to-list 'eglot-server-programs
        '(debian-control-mode . ("debputy" "lsp" "debian/control"))))

Once you open the debian/control file in emacs, you can type M-x eglot to activate the language server. I am not sure why that manual step is needed; if someone knows how to automate it such that eglot activates automatically on opening debian/control, please let me know.
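For what it is worth, the usual eglot idiom for this is a major mode hook; a minimal sketch (untested with debputy, so consider it an assumption) would be to add the following next to the blob above:

(add-hook 'debian-control-mode-hook #'eglot-ensure)

eglot-ensure is shipped by eglot precisely for starting the configured language server whenever a buffer enters the given major mode.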

For testing completions, I often have to manually activate them (with C-M-i or M-x complete-symbol). Though, it is a bit unclear to me whether this is an emacs setting that I have not toggled or something I need to do on the language server side.

From here

As next steps, I will probably look into fixing some of the "known missing" items under diagnostics. Quick fix support would be a considerable improvement in assisting users.

In the not so distant future, I will probably start to look at supporting other files such as debian/changelog or look into supporting configuration, so I can cover formatting features like wrap-and-sort.

I am also very much open to how we can provide integrations for this feature into editors by default. I will probably create a separate binary package specifically for this feature, one that pulls in all relevant dependencies and could provide the editor integrations as well.

20 February, 2024 03:45PM by Niels Thykier

Language Server (LSP) support for debian/control

Work done:

  • [X] No errors are raised if a field does not have a value.
  • [X] No errors are raised if a field is duplicated inside a paragraph.
  • [X] No errors are raised if a field is used in the wrong paragraph.
  • [ ] No spellchecking of the Description field.
  • [X] No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • [X] Fixed the on-save trim end of line whitespace. Bug in the server end.
  • [X] Hover text for field names

20 February, 2024 03:45PM by Niels Thykier

Jonathan Dowland

Propaganda — A Secret Wish

How can I not have done one of these for Propaganda already?

Propaganda: A Secret Wish, and 12"s of Duel and p:Machinery

Propaganda/A Secret Wish is criminally underrated. There seem to be a zillion variants of each track, which keeps completionists busy. Of the variants of Jewel/Duel/etc., I'm fond of the 03:10, almost instrumental mix of Jewel; preferring the lyrics to be exclusive to the more radio-friendly Duel (04:42); I don't need them conflated (Jewel 06:21); but there are further depths I've yet to explore (Do Well cassette mix, the 20:07 The First Cut / Duel / Jewel (Cut Rough) / Wonder / Bejewelled mega-mix...)

I recently watched The Fall of the House of Usher, which I think has Poe lodged in my brain; that is how this album popped back into my consciousness this morning, with the opening lines of Dream within a Dream.

But are they Goth?

20 February, 2024 09:26AM

February 19, 2024

Matthew Garrett

Debugging an odd inability to stream video

We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link whose MTU it exceeds, the router will send back an ICMP packet telling the sender that the packet is too big and what the actual MTU is, and the sender will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.
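As an aside, you can watch this mechanism in action from the command line. With Linux iputils ping, -M do sets the Don't Fragment bit and -s sets the payload size; 1472 bytes of payload plus 28 bytes of ICMP and IP headers comes to exactly 1500, so on a 1500 MTU path the first command below should succeed and the second should fail with a "message too long" error rather than being fragmented (example.com is just a placeholder):

ping -M do -s 1472 example.com
ping -M do -s 1473 example.com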

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
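To illustrate the kind of rewrite that means (a sketch, not necessarily the exact rule used here; wg0 is an assumed interface name, and 1380 is the 1420 byte Wireguard MTU minus 40 bytes of TCP/IP headers):

iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380

iptables also offers --clamp-mss-to-pmtu, which derives the value from the path MTU rather than hardcoding it.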

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)

19 February, 2024 10:30PM

Valhalla's Things

Jeans, step one

Posted on February 19, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

CW for body size change mentions

A woman wearing a pair of tight jeans.

Just like the corset, I also needed a new pair of jeans.

Back when my body size changed drastically, my jeans of course no longer fit. While I was waiting for my size to stabilize I kept wearing them with a somewhat tight belt, but it was ugly and somewhat uncomfortable.

When I had stopped changing a lot I tried to buy new ones in the same model, and found out that I was too thin for the menswear jeans of that shop. I could have gone back to wearing women’s jeans, but I didn’t want to have to deal with the crappy fabric and short pockets, so I basically spent a few years wearing mostly skirts, and oversized jeans when I really needed trousers.

Meanwhile, I had drafted a jeans pattern for my SO, which we had planned to make in technical fabric, but which ended up being made in a cotton-wool mystery mix for winter and in linen-cotton for summer, so the technical fabric version was no longer needed (yay for natural fibres!)

It was clear what the solution to my jeans problems would be: I just had to stop getting distracted by other projects and draft a new pattern using a womenswear block instead of a menswear one.

Which, in January 2024 I finally did, and I believe it took a bit less time than the previous one, even if it had all of the same fiddly pieces.

I already had a cut of the same cotton-linen I had used for my SO, except in black, and used it to make the pair this post is about.

The parametric pattern is of course online, as #FreeSoftWear, at the usual place. This time it was faster, since I didn’t have to write step-by-step instructions, as they are exactly the same as the other pattern.

Same as above, from the back, with the crotch seam pulling a bit. A faint decoration can be seen on the pockets, with the line art version of the logo seen on this blog.

Making also went smoothly, and the result was fitting. Very fitting. A bit too fitting, and the standard bum adjustment of the back was just enough for what apparently still qualifies as a big bum, so I adjusted the pattern to be able to add a custom amount of ease in a few places.

But at least I had a pair of jeans-shaped trousers that fit!

Except, at 200 g/m² I can’t say that fabric is the proper weight for a pair of trousers, and I may have looked around online1 for some denim, and, well, it’s 2024, so my no-fabric-buy 2023 has not been broken, right?

Let us just say that there may be other jeans-related posts in the near future.


  1. I had already asked years ago for denim at my local fabric shops, but they don’t have the proper, sturdy, type I was looking for.↩︎

19 February, 2024 12:00AM

February 18, 2024

Iustin Pop

New skis ⛷️ , new fun!

As I wrote a while back, I had a really, really bad fourth quarter in 2023. As the new year approached, and we were getting ready to go on a ski trip, I wasn't even sure if, and how much, I'd be able to ski.

And I felt so out of it that I didn't even buy a ski pass for the whole week, just bought one day to see if a) I still like it, and b) my knee can deal with it. And, of course, it was good.

It was good enough that I ended up skiing the entire week, and my knee got better during the week. WTH?! I don’t understand this anymore, but it was good. Good enough that this early year trip put me back on track and I started doing sports again.

But the main point is that during this ski week, talking to the instructor, I realised that my ski equipment was getting a bit old. I bought everything roughly ten years ago, and while it still holds up OK, my ski skills have improved since then. I said to myself: 10 years is a good run; I'll replace the skis this year, the boots & helmet next year, etc.

I didn’t expect much from new skis - I mean, yes, better skis, but what does “better” mean? Well, once I’ve read enough forum posts, apparently the skis I selected are “that good”, which to me meant they’re not bad.

Oh my, how wrong I was! Double, triple wrong! Rather than fighting with the skis, it's enough to think about what I want to do, and the skis do it. I felt OK-ish before, maybe 10% limited by my previous skis, but the new skis are really good, and I know I'm only at 30% or so of what they can do - so there's room to grow. For now, I am able to ski faster and longer, and I feel less tired than before. I've actually compared and I can do twice the distance in a day and feel slightly less tired at the end. I've moved from "this black is cool but a bit difficult, I'll do another run later in the day when I've recovered" to "how cool, this black is quite empty of people, let's stay here for 2-3 more rounds".

The skis are new, and I haven’t used them on all the places I’m familiar with - but the upgrade is huge. The people on the ski forum were actually not exaggerating, I realise now. Stöckli++, and they’re also made in Switzerland. Can’t wait to get back to Saas Fee and to the couple of slopes that were killing me before, to see how they feel now.

So, very happy with this choice. I’d be even happier if my legs were less damaged, but well, you can’t win them all.

And not last, the skis are also very cool looking 😉

18 February, 2024 06:00AM

Russell Coker

Release Years

In 2008 I wrote about the idea of having a scheduled release for Debian and other distributions as Mark Shuttleworth had proposed [1]. I still believe that Mark’s original idea for synchronised release dates of Linux distributions (or at least synchronised feature sets) is a good one but unfortunately it didn’t take off.

Having been using Ubuntu a bit recently I've found the version numbering system to be really good. Ubuntu version 16.04 was released in April 2016, and its support ended 5 years later in about April 2021, so any commonly available computers from 2020 should run it well and versions of applications released in about 2017 should run on it. If I have to support a Debian 10 (Buster) system I need to start with a web search to discover when it was released (July 2019). That suggests that applications packaged for Ubuntu 18.04 are likely to run on it.

If we had the same numbering system for Debian and Ubuntu then it would be easier to compare versions. Debian 19.06 would be obviously similar to Ubuntu 18.04, or we could plan for the future and call it Debian 2019.

Then it would be ideal if hardware vendors did the same thing (as car manufacturers have been doing for a long time). Which versions of Ubuntu and Debian would run well on a Dell PowerEdge R750? It takes a little searching to discover that the R750 was released in 2021, but if they called it a PowerEdge 2021R70 then it would be quite obvious that Ubuntu 2022.04 would run well on it and that Ubuntu 2020.04 probably has a kernel update with all the hardware supported.

One of the benefits for the car industry in naming model years is that it drives the purchase of a new car. A 2015 car probably isn’t going to impress anyone and you will know that it is missing some of the features in more recent models. It would be easier to get management to sign off on replacing old computers if they had 2015 on the front, trying to estimate hidden costs of support and lost productivity of using old computers is hard but saying “it’s a 2015 model and way out of date” is easy.

There is a history of using dates as software versions. The “Reference Policy” for SE Linux [2] (which is used for Debian) has releases based on date. During the Debian development process I upload policy to Debian based on the upstream Git and use the same version numbering scheme which is more convenient than the “append git date to last full release” system that some maintainers are forced to use. The users can get an idea of how much the Debian/Unstable policy has diverged from the last full release by looking at the dates. Also an observer might see the short difference between release dates of SE Linux policy and Debian release freeze dates as an indication that I beg the upstream maintainers to make a new release just before each Debian freeze – which is exactly what I do.

When I took over the Portslave [3] program I made releases based on date because there were several forks with different version numbering schemes so my options were to just choose a higher number (which is OK initially but doesn’t scale if there are more forks) or use a date and have people know that the recent date is the most recent version. The fact that the latest release of Portslave is version 2010.04.19 shows that I have not been maintaining it in recent years (due to lack of hardware as well as lack of interest), so if someone wants to take over the project I would be happy to help them do so!

I don’t expect people at Dell and other hardware vendors to take much notice of my ideas (I have tweeted them photographic evidence of a problem with no good response). But hopefully this will start a discussion in the free software community.

18 February, 2024 04:00AM by etbe

February 17, 2024

James Valleroy

snac2: a minimalist ActivityPub server in Debian

snac2, currently available in Debian testing and unstable, is described by its upstream as “A simple, minimalistic ActivityPub instance written in portable C.” It provides an ActivityPub server with a bare-bones web interface. It does not use JavaScript or require a database.

Basic forms for creating a new post, or following someone

ActivityPub is the protocol for federated social networks that is implemented by Mastodon, Pleroma, and other similar server software. Federated social networks are most often used for “micro-blogging”, or making many small posts. You can decide to follow another user (or bot) to see their posts, even if they happen to be on a different server (as long as the server software is compatible with the ActivityPub standard).

The timeline shows posts from accounts that you follow

In addition, snac2 has preliminary support for the Mastodon Client API. This allows basic support for mobile apps that support Mastodon, but you should expect that many features are not available yet.

If you are interested in running a minimalist ActivityPub server on Debian, please try out snac2, and report any bugs that you find.

17 February, 2024 12:58PM by James Valleroy

February 16, 2024

Melissa Wen

Keep an eye out: We are preparing the 2024 Linux Display Next Hackfest!

Igalia is preparing the 2024 Linux Display Next Hackfest and we are thrilled to announce that this year’s hackfest will take place from May 14th to 16th at our HQ in A Coruña, Spain.

This unconference-style event aims to bring together the most relevant players in the Linux display community to tackle current challenges and chart the future of the display stack.

Key goals for the hackfest include:

  • Releasing the power of collaboration: We’ll work to remove bottlenecks and pave the way for smoother, more performant displays.
  • Problem-solving powerhouse: Brainstorming sessions and collaborative coding will target issues like HDR, color management, variable refresh rates, and more.
  • Building on past commitments: Let’s solidify the progress made in recent years and push the boundaries even further.

The hackfest fosters an intimate and focused environment to brainstorm, hack, and design solutions alongside fellow display experts. Participants will dive into discussions, tinker with code, and contribute to shaping the future of the Linux display stack.

More details are available on the official website.

Stay tuned! Keep an eye out for more information, mark your calendars and start prepping your hacking gear.

16 February, 2024 05:25PM

David Bremner

Generating ikiwiki markdown from org

My web pages are (still) in ikiwiki, but lately I have started authoring things like assignments and lectures in org-mode so that I can have some literate programming facilities. There is org-mode export built in, but it just exports source blocks as examples (i.e. unhighlighted verbatim). I added a custom exporter to mark up source blocks in a way ikiwiki can understand. Luckily this is not too hard the second time.

(with-eval-after-load "ox-md"
  (org-export-define-derived-backend 'ik 'md
    :translate-alist '((src-block . ik-src-block))
    :menu-entry '(?m 1 ((?i "ikiwiki" ik-export-to-ikiwiki)))))

(defun ik-normalize-language  (str)
  (cond
   ((string-equal str "plait") "racket")
   ((string-equal str "smol") "racket")
   (t str)))

(defun ik-src-block (src-block contents info)
  "Transcode a SRC-BLOCK element from Org to ikiwiki markdown.
CONTENTS is nil.  INFO is a plist used as a communication
channel."
  (let* ((body  (org-element-property :value src-block))
         (lang  (ik-normalize-language (org-element-property :language src-block))))
    ;; Emit an ikiwiki format directive so ikiwiki highlights the block.
    (format "[[!format %s \"\"\"\n%s\"\"\"]]" lang body)))

(defun ik-export-to-ikiwiki
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as an ikiwiki markdown file.
    See org-md-export-to-markdown for full docs"
  (require 'ox)
  (interactive)
  (let ((file (org-export-output-file-name ".mdwn" subtreep)))
    (org-export-to-file 'ik file
      async subtreep visible-only body-only ext-plist)))

16 February, 2024 04:01PM

Mike Gabriel

Debian Edu 12 - Call for Testing

This is a call for testing of Debian Edu based on Debian bookworm. With the Debian 12.5 point release all required packages have landed in the Debian Edu ISO images that allow you to install a Debian Edu system based on Debian 12.

ISO Image Downloads

You can find the Blueray Disc ISO image (use for main server installation) at: http://cdimage.debian.org/cdimage/release/current/amd64/iso-bd/debian-ed...

For standalone workstation installations or installations on an already up-and-running Debian Edu site, please use the netinst ISO image: http://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-ed...

Quick Start HowTo

For testing Debian Edu 12, set up e.g. LXD or libVirt and install (at least) three virtual machines. In your virtualization software prepare an internal network where the VMs can reach one another without needing access to your local network.

The three VMs:

  • set up a gateway VM (no DHCP service) at 10.0.0.1/8 (e.g. OPNsense, pfSense, Debian Edu Router, etc.), two NICs: one uplink, one NIC in the internal network
  • install the Debian Edu mainserver from the ISO image on another VM, one NIC in the internal network
  • then boot a 3rd VM via PXE and install your first workstation, one NIC in the internal network

Happy testing!

Further Readings

Overall installation profile concept of Debian Edu:
https://wiki.debian.org/DebianEdu/BeforeGettingStarted

Debian Edu 12 manual:
https://jenkins.debian.net/userContent/debian-edu-doc/

Debian Edu 12 status page:
https://wiki.debian.org/DebianEdu/Status/Bookworm

16 February, 2024 07:09AM by sunweaver

February 13, 2024

Steinar H. Gunderson

Random Postgres wishlist

In no particular order, things it would be neat if Postgres had (some of these I've implemented myself earlier, though not in Postgres):

  • A JIT that is not based on LLVM (using an AOT compiler framework for JIT is just not a salvageable choice; witness the absolute dearth of really successful LLVM-based JITs out there).
  • Interesting orders that understand functional dependencies.
  • Cross-join correlation statistics.
  • Better combination of multi-column statistics (the standard way in academia seems to be maximum entropy).
  • Longer-term, some built-in really solid column store or at least multi-row handling, for larger OLAP jobs.
  • Ability to hold an in-memory sample, also for OLAP stuff.
  • Ditching GEQO for something more modern (there are tons of choices depending on how you want the optimizer to evolve, pick your poison).

I'm sure everyone would want something like multimaster clustering, but I honestly don't care for myself. :-) I only have one server anyway.

13 February, 2024 10:16PM

Arturo Borrero González

Back to the Wikimedia Foundation!

Wikimedia Foundation logo

In October 2023, I departed from the Wikimedia Foundation, the non-profit organization behind well-known projects like Wikipedia and others, to join Spryker.

However, in January 2024 Spryker conducted a round of layoffs reportedly due to budget and business reasons. I was among those affected, being let go just three months after joining the company.

Fortunately, the Wikimedia Cloud Services team, where I previously worked, was still seeking to backfill my position, so I reached out to them. They graciously welcomed me back as a Senior Site Reliability Engineer, in the same team and position as before.

Although this three-month career “detour” wasn’t the outcome I initially envisioned, I found it to be a valuable experience. During this time, I gained knowledge in a new tech stack, based on AWS, and discovered new engineering methodologies. Additionally, I had the opportunity to meet some wonderful individuals. I believe I have emerged stronger from this experience.

Returning to the Wikimedia Foundation is truly motivating. I feel privileged to be part of this mature organization, its community, and movement, with its inspiring mission and values.

In addition, I’m hoping that this also means I can once again dedicate a bit more attention to my FLOSS activities, such as my duties within the Debian project.

My email address is back online: aborrero@wikimedia.org. You can find me again on the libera.chat IRC server, in the usual wikimedia channels, nick arturo.

13 February, 2024 09:30AM

Matthew Palmer

Not all TLDs are Created Equal

In light of the recent cancellation of the queer.af domain registration by the Taliban, the fragile and difficult nature of country-code top-level domains (ccTLDs) has once again been comprehensively demonstrated. Since many people may not be aware of the risks, I thought I’d give a solid explainer of the whole situation, and explain why you should, in general, not have anything to do with domains which are registered under ccTLDs.

Top-level What-Now?

A top-level domain (TLD) is the last part of a domain name (the collection of words, separated by periods, after the https:// in your web browser’s location bar). It’s the “com” in example.com, or the “af” in queer.af.

There are two kinds of TLDs: country-code TLDs (ccTLDs) and generic TLDs (gTLDs). Despite all being TLDs, they’re very different beasts under the hood.

What’s the Difference?

Generic TLDs are what most organisations and individuals register their domains under: old-school technobabble like “com”, “net”, or “org”, historical oddities like “gov”, and the new-fangled world of words like “tech”, “social”, and “bank”. These gTLDs are all regulated under a set of rules created and administered by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which try to ensure that things aren’t a complete wild-west, limiting things like price hikes (well, sometimes, anyway), and providing means for disputes over names1.

Country-code TLDs, in contrast, are all two letters long2, and are given out to countries to do with as they please. While ICANN kinda-sorta has something to do with ccTLDs (in the sense that it makes them exist on the Internet), it has no authority to control how a ccTLD is managed. If a country decides to raise prices by 100x, or cancel all registrations that were made on the 12th of the month, there’s nothing anyone can do about it.

If that sounds bad, that’s because it is. Also, it’s not a theoretical problem – the Taliban deciding to assert its bigotry over the little corner of the Internet namespace it has taken control of is far from the first time that ccTLDs have caused grief.

Shifting Sands

The queer.af cancellation is interesting because, at the time the domain was reportedly registered, 2018, Afghanistan had what one might describe as, at least, a different political climate. Since then, of course, things have changed, and the new bosses have decided to get a bit more active.

Those running queer.af seem to have seen the writing on the wall, and were planning on moving to another, less fraught, domain, but hadn’t completed that move when the Taliban came knocking.

The Curious Case of Brexit

When the United Kingdom decided to leave the European Union, it fell foul of the EU’s rules for the registration of domains under the “eu” ccTLD3. To register (and maintain) a domain name ending in .eu, you have to be a resident of the EU. When the UK ceased to be part of the EU, residents of the UK were no longer EU residents.

Cue much unhappiness, wailing, and gnashing of teeth when this was pointed out to Britons. Some decided to give up their domains, and move to other parts of the Internet, while others managed to hold onto them by various legal sleight-of-hand (like having an EU company maintain the registration on their behalf).

In any event, all very unpleasant for everyone involved.

Geopolitics… on the Internet?!?

After Russia invaded Ukraine in February 2022, the Ukrainian Vice Prime Minister asked ICANN to suspend ccTLDs associated with Russia. While ICANN said that it wasn’t going to do that, because it wouldn’t do anything useful, some domain registrars (the companies you pay to register domain names) ceased to deal in Russian ccTLDs, and some websites restricted links to domains with Russian ccTLDs.

Whether or not you agree with the sort of activism implied by these actions, the fact remains that even the actions of a government that aren’t directly related to the Internet can have grave consequences for your domain name if it’s registered under a ccTLD. I don’t think any gTLD operator will be invading a neighbouring country any time soon.

Money, Money, Money, Must Be Funny

When you register a domain name, you pay a registration fee to a registrar, who does administrative gubbins and causes you to be able to control the domain name in the DNS. However, you don’t “own” that domain name4 – you’re only renting it. When the registration period comes to an end, you have to renew the domain name, or you’ll cease to be able to control it.

Given that a domain name is typically your “brand” or “identity” online, the chances are you’d prefer to keep it over time, because moving to a new domain name is a massive pain, having to tell all your customers or users that now you’re somewhere else, plus having to accept the risk of someone registering the domain name you used to have and capturing your traffic… it’s all a gigantic hassle.

For gTLDs, ICANN has various rules around price increases and bait-and-switch pricing that tries to keep a lid on the worst excesses of registries. While there are any number of reasonable criticisms of the rules, and the Internet community has to stay on their toes to keep ICANN from totally succumbing to regulatory capture, at least in the gTLD space there’s some degree of control over price gouging.

On the other hand, ccTLDs have no effective controls over their pricing. For example, in 2008 the Seychelles increased the price of .sc domain names from US$25 to US$75. No reason, no warning, just “pay up”.

Who Is Even Getting That Money?

A closely related concern about ccTLDs is that some of the “cool” ones are assigned to countries that are… not great.

The poster child for this is almost certainly Libya, which has the ccTLD “ly”. While Libya was being run by a terrorist-supporting extremist, companies thought it was a great idea to have domain names that ended in .ly. These domain registrations weren’t (and aren’t) cheap, and it’s hard to imagine that at least some of that money wasn’t going to benefit the Gaddafi regime.

Similarly, the British Indian Ocean Territory, which has the “io” ccTLD, was created in a colonialist piece of chicanery that expelled thousands of native Chagossians from Diego Garcia. Money from the registration of .io domains doesn’t go to the (former) residents of the Chagos islands, instead it gets paid to the UK government.

Again, I’m not trying to suggest that all gTLD operators are wonderful people, but it’s not particularly likely that the direct beneficiaries of the operation of a gTLD stole an island chain and evicted the residents.

Are ccTLDs Ever Useful?

The answer to that question is an unqualified “maybe”. I certainly don’t think it’s a good idea to register a domain under a ccTLD for “vanity” purposes: because it makes a word, is the same as a file extension you like, or because it looks cool.

Those ccTLDs that clearly represent and are associated with a particular country are more likely to be OK, because there is less impetus for the registry to try a naked cash grab. Unfortunately, ccTLD registries have a disconcerting habit of changing their minds on whether they serve their geographic locality, such as when auDA decided to declare an open season in the .au namespace some years ago. Essentially, while a ccTLD may have geographic connotations now, there’s not a lot of guarantee that they won’t fall victim to scope creep in the future.

Finally, it might be somewhat safer to register under a ccTLD if you live in the location involved. At least then you might have a better idea of whether your domain is likely to get pulled out from underneath you. Unfortunately, as the .eu example shows, living somewhere today is no guarantee you’ll still be living there tomorrow, even if you don’t move house.

In short, I’d suggest sticking to gTLDs. They’re at least lower risk than ccTLDs.

“+1, Helpful”

If you’ve found this post informative, why not buy me a refreshing beverage? My typing fingers (both of them) thank you in advance for your generosity.


Footnotes

  1. don’t make the mistake of thinking that I approve of ICANN or how it operates; it’s an omnishambles of poor governance and incomprehensible decision-making. 

  2. corresponding roughly, though not precisely (because everything has to be complicated, because humans are complicated), to the entries in the ISO standard for “Codes for the representation of names of countries and their subdivisions”, ISO 3166. 

  3. yes, the EU is not a country; it’s part of the “roughly, though not precisely” caveat mentioned previously. 

  4. despite what domain registrars try very hard to imply, without falling foul of deceptive advertising regulations. 

13 February, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

February 12, 2024

Andrew Cater

Lessons from (and for) colleagues - and, implicitly, how NOT to get on

I have had excellent colleagues both at my day job and, especially, in Debian over the last thirty-odd years. Several have attempted to give me good advice - others have been exemplars. People retire: sadly, people die. What impression do you want to leave behind when you leave here?

Belatedly, I've come to realise that obduracy, sheer bloody mindedness, force of will and obstinacy will only get you so far. The following began very much as a tongue in cheek private memo to myself a good few years ago. I showed it to a colleague who suggested at the time that I should share it to a wider audience.

SOME ADVICE YOU MAY BENEFIT FROM

Personal conduct

  • Never argue with someone you believe to be arguing idiotically - a dispassionate bystander may have difficulty telling who's who.
  • You can't make yourself seem reasonable by behaving unreasonably
  • It does not matter how correct your point of view is if you get people's backs up
  • They may all be #####, @@@@@, %%%% and ******* - saying so out loud doesn't help improve matters and may make you seem intemperate.

 Working with others

  • Be the change you want to be and behave the way you want others to behave in order to achieve the desired outcome.
  • You can demolish someone's argument constructively and add weight to good points rather than tearing down their ideas and hard work and being ultra-critical and negative - no-one likes to be told "You know - you've got a REALLY ugly baby there"
  • It's easier to work with someone than to work against them and have to apologise repeatedly.
  • Even when you're outstanding and superlative, even you had to learn it all once. Be generous to help others learn: you shouldn't have to teach too many times if you teach correctly once and take time in doing so.

Getting the message across

  • Stop: think: write: review: (peer review if necessary): publish.
  • Clarity is all: just because you understand it doesn't mean anyone else will.
  • It does not matter how correct your point of view is if you put it across badly.
  • If you're giving advice: make sure it is:
  • Considered
  • Constructive
  • Correct as far as you can (and)
  • Refers to other people who may be able to help
  • Say thank you promptly if someone helps you and be prepared to give full credit where credit's due.

Work is like that

  • You may not know all the answers or even have the whole picture - consult, take advice - LISTEN TO THE ADVICE
  • Sometimes the right answer is not the immediately correct answer
  • Corollary: Sometimes the right answer for the business is not your suggested/preferred outcome
  • Corollary: Just because you can do it like that in the real world doesn't mean that you can do it that way inside the business. [This realisation is INTENSELY frustrating but you have to learn to deal with it]
  • DON'T ALWAYS DO IT YOURSELF - Attempt to fix the system, sometimes allow the corporate monster to fail - then do it yourself and fix it. It is always easier and tempting to work round the system and Just Flaming Do It but it doesn't solve problems in the longer term and may create more problems and ill-feeling than it solves.

    [Worked out for Andy Cater for himself after many years of fighting the system as a misguided missile - though he will freely admit that he doesn't always follow them as often as he should :) ]

12 February, 2024 10:13PM by Andrew Cater (noreply@blogger.com)

Emanuele Rocca

Enabling Kernel Settings in Debian

This time it’s about enabling new kernel config options in the official Debian kernel packages. A few dependencies are needed to run the various scripts used by the Debian kernel folks, as well as to build the kernel itself:

apt install git gpg python3-debian python3-dacite
apt build-dep linux

With that in place, fetch the linux and kernel-team repos:

git clone --depth 1 https://salsa.debian.org/kernel-team/linux
git clone --depth 1 https://salsa.debian.org/kernel-team/kernel-team

So far you’ve only got the Debian-specific bits. Fetch the actual kernel sources now. In the likely case that you’re building a stable kernel, run the following from within the linux directory:

debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

Use the torvalds repo if you’re building an RC version instead:

debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Now generate the upstream tarball as well as debian/control. The first command will take a bit, and the second command will fail: but that’s success — just as the output says.

debian/rules orig
debian/rules debian/control

Now generate patched sources with:

debian/rules source

Time to edit the Kconfig and enable/disable whatever setting you wanted to change. Take a look around the files under debian/config/ to see where your changes should go. If it’s a setting shared among multiple architectures, that may be debian/config/config. For x86-specific things, the file is debian/config/amd64/config. On aarch64 it is debian/config/arm64/config. If in doubt, you could try asking #debian-kernel on IRC.
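As a purely hypothetical example (CONFIG_FOO stands in for whatever option you are changing), enabling a boolean option in debian/config/amd64/config is a one line addition:

CONFIG_FOO=y

Disabling it uses the kernel's usual comment convention:

# CONFIG_FOO is not set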

It may look like you need to figure out where exactly in the file the setting should be placed. That is not the case. There’s a helpful script fixing things up for you:

../kernel-team/utils/kconfigeditor2/process.py .

The above will fail if you forgot to run debian/rules source. The debian/build/source_rt/Kconfig file is needed by the script:

Traceback (most recent call last):
  File "/tmp/linux/../kernel-team/utils/kconfigeditor2/process.py", line 19, in __init__
    menu = fs_menu[featureset or 'none']
           ~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'rt'

During handling of the above exception, another exception occurred:
[...]
FileNotFoundError: [Errno 2] No such file or directory: './debian/build/source_rt/Kconfig'

If that happens, run:

debian/rules source

Now process.py should work fine and fix your config file.

Excellent, now the config is updated and we’re ready to build the kernel. Off we go:

export MAKEFLAGS=-j$(nproc)
export DEB_BUILD_PROFILES='pkg.linux.nokerneldbg pkg.linux.nokerneldbginfo pkg.linux.notools nodoc'
dpkg-buildpackage -b -nc -uc

12 February, 2024 07:52AM by Emanuele Rocca (ema@linux.it)

Gunnar Wolf

Heads up! A miniDebConf is approaching in Santa Fe, Argentina

I realize it’s a bit late to start publicly organizing this, but… better late than never 😉 I’m happy some Debian people I have directly contacted have already expressed interest. So, lets make this public!

For all interested people who are reasonably close to central Argentina, or can be persuaded to come here in a month’s time… You are all welcome!

It seems I managed to convince my good friend Martín Bayo (some Debian people will remember him, as he was present in DebConf19 in Curitiba, Brazil) to get some facilities for us to have a nice Debian get-together in Central Argentina.

Where?

We will meet at APUL — Asociación de Personal no-docente de la Universidad Nacional del Litoral, in downtown Santa Fe, Argentina.

When?

Saturday, 2024.03.09. It is quite likely we can get some spaces for continuing over Sunday if there is demand.

What are we planning?

We have little time for planning… but we want to have a space for Debian-related outreach (so, please think about a topic or two you’d like to share with a general free-software-interested, not too technical, audience). Please tell me by mail (gwolf@debian.org) about any ideas you might have.

We also want to have a general hacklab-style area to hang out, work a bit in our projects, and spend a good time together.

Logistics

I have briefly commented about this with our dear and always mighty DPL, and Debian will support Debian-related people interested in attending; please check personally with me for specifics on how to handle this, case by case. My intention is to cover costs for travel, accommodation (one or two nights) and food for whoever is interested in coming over.

More information

As I don’t want to direct people to keep an eye on my blog post for updates, I’ll copy this information (and keep it updated!) at the Debian Wiki / DebianEvents / ar / 2024 / MiniDebConf / Santa Fe — please refer to that page!

Contact

For any questions, please mail gwolf@debian.org.

Codes of Conduct

DebConf and Debian Code of Conduct apply.

See the DebConf Code of Conduct and the Debian Code of Conduct.

Registration

Registration is free, but needed. See the separate Registration page.

Talks

Please, send your proposal to gwolf@debian.org

12 February, 2024 12:03AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, January 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In January, 16 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 14.0h (out of 7.0h assigned and 7.0h from previous period).
  • Bastien Roucariès did 22.0h (out of 16.0h assigned and 6.0h from previous period).
  • Ben Hutchings did 14.5h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 9.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 10.0h (out of 14.75h assigned and 27.0h from previous period), thus carrying over 31.75h to the next month.
  • Guilhem Moulin did 9.75h (out of 25.0h assigned), thus carrying over 15.25h to the next month.
  • Holger Levsen did 3.5h (out of 12.0h assigned), thus carrying over 8.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 8.75h (out of 9.5h assigned and 2.5h from previous period), thus carrying over 3.25h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 8.25h assigned and 7.75h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 0.5h (out of 0.25h assigned and 5.75h from previous period), thus carrying over 5.5h to the next month.
  • Sylvain Beucler did 9.5h (out of 23.25h assigned and 18.5h from previous period), thus carrying over 32.25h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 10.25h assigned and 1.75h from previous period).
  • Utkarsh Gupta did 8.5h (out of 35.75h assigned), thus carrying over 24.75h to the next month.

Evolution of the situation

In January, we released 25 DLAs.

A variety of particularly notable packages were updated during January. Among those updates were the Linux kernel (both versions 5.10 and 4.19), mariadb-10.3, openjdk-11, firefox-esr, and thunderbird.

In addition to the many other LTS package updates which were released in January, LTS contributors continue their efforts to make impactful contributions both within and beyond the Debian community.

Thanks to our sponsors

Sponsors that joined recently are in bold.

12 February, 2024 12:00AM by Roberto C. Sánchez

Reproducible Builds (diffoscope)

diffoscope 257 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 257. This version includes the following changes:

[ James Addison ]
* Parse the header and hunksize of diffs strictly before parsing the context
  below. (Closes: reproducible-builds/diffoscope#363)
* Reformat code to comply with the latest version of Black (24.1.1).

[ Chris Lamb ]
* Expand the previous changelog entry to include the CVE number that was
  subsequently assigned.
* Bump the minimum Black requirement to run the "Black clean" test and make
  test_zip.py Black clean.

You can find out more by visiting the project homepage.

12 February, 2024 12:00AM

February 11, 2024

hackergotchi for Marco d'Itri

Marco d'Itri

Extending access to the systemd RuntimeDirectory with a POSIX ACL

inn2 uses ephemeral UNIX domain sockets in /run/news/ to communicate with the ctlinnd program. Since the directory is only writeable by the "news" user, other unprivileged users are not able to use the command.

I solved this by extending the inn2.service systemd unit with a drop-in file which uses setfacl to give access to my user "md" to the RuntimeDirectory created by systemd. This is the content of /etc/systemd/system/inn2.service.d/md-ctlinnd.conf:

[Service]
# innd will change the permissions of /run/news/ when started: without
# creating it now with mode 0775 then that will change the ACL mask.
RuntimeDirectoryMode=0775
# allow user md to run ctlinnd(8), which creates sockets in /run/news/
ExecStartPost=/usr/bin/setfacl --modify user:md:rwx $RUNTIME_DIRECTORY

The non-obvious issue here is that the innd daemon on startup will change the directory permissions in a way which sets a more restrictive (non group-writeable) ACL mask, and this would make the newly created user ACL ineffective. The solution is to create the directory group-writeable from the start.
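To apply the drop-in and verify the resulting ACL, something along these lines should work:

systemctl daemon-reload
systemctl restart inn2.service
getfacl /run/news/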

(Beware: this creates a trivial privilege escalation from md to news.)

11 February, 2024 04:12PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upcoming Improvements to Salsa CI, by Santiago Ruano Rincón

Santiago started picking up the work done by Outreachy intern Enock Kashada (a big thanks to him!) to solve some long-standing issues in Salsa CI. Currently, the first job in a Salsa CI pipeline is the extract-source job, used to produce a debianized source tree of the project. This job was introduced to make it possible to build the projects on different architectures in the subsequent build jobs. However, the extract-source approach is sub-optimal: not only does it increase the execution time of the pipeline by some minutes, but projects whose source tree is too large are also unable to use the pipeline. The debianized source tree is passed as an artifact to the build jobs, and for those large projects the size of the source tree exceeds Salsa’s limits. This specific issue is documented as issue #195, and the proposed solution is to get rid of the extract-source job, relying on sbuild in the build job itself (see issue #296).

Switching to sbuild would also help to improve the build source job, solving issues such as #187 and #298.

The current work-in-progress is very preliminary, but it has already been possible to run the build (amd64), build-i386 and build-source jobs using sbuild with the unshare mode. The image on the right shows a pipeline that builds grep. All the test jobs use the artifacts of the new build job. There is a lot of remaining work, mainly making the integration with ccache work. Since this change could break some things, it will also be important to test how the new pipeline works with complex projects.

Also, thanks to Emmanuel Arias, we are proposing a Google Summer of Code 2024 project to improve Salsa CI. As part of the ongoing work in preparation for the GSoC 2024 project, Santiago has proposed a merge request to make it more efficient for contributors to test their changes on the Salsa CI pipeline.

/usr-move, by Helmut Grohne

In January, we sent most of the moving patches for the set of packages involved with debootstrap. Notably missing is glibc, which turns out to be harder than anticipated to verify via dumat, because it has Conflicts between different architectures, which dumat does not analyze.

Patches for diversion mitigations have been updated so that they no longer exhibit any loss.

The main change here is that packages which are being diverted now support the diverting packages in transitioning their diversions. We also supported a few packages with non-trivial changes such as netplan.io. dumat has been enhanced to better support derivatives such as Ubuntu.

Miscellaneous contributions

  • Python 3.12 migration trundles on. Stefano Rivera helped port several new packages to support 3.12.
  • Stefano updated the Sphinx configuration of DebConf Video Team’s documentation, which was broken by Sphinx 7.
  • Stefano published the videos from the Cambridge MiniDebConf to YouTube and PeerTube.
  • DebConf 24 planning has begun, and Stefano & Utkarsh have started work on this.
  • Utkarsh re-sponsored the upload of golang-github-prometheus-community-pgbouncer-exporter for Lena.
  • Colin Watson added Incus support to autopkgtest.
  • Colin discovered Perl::Critic and used it to tidy up some poor practices in several of his packages, including debconf.
  • Colin did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • Colin figured out how to update the mirror size documentation in debmirror, last updated in 2010. It should now be much easier to keep it up to date regularly.
  • Colin issued a man-db buster update to clean up some irritations due to strict sandboxing.
  • Thorsten Alteholz adopted two more packages, magicfilter and ifhp, for the debian-printing team. Those packages are the last ones of the latest round of adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don’t hesitate to contact Thorsten.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Helmut sent patches for 16 cross build failures.
  • Helmut supported Matthias Klose (not affiliated with Freexian) with adding -for-host support to gcc-defaults.
  • Helmut uploaded dput-ng enabling dcut migrate and merging two MRs of Ben Hutchings.
  • Santiago took part in the discussions relating to the EU Cyber Resilience Act (CRA) and the Debian public statement that was published last year. He participated in a meeting with Members of the European Parliament (MEPs) Marcel Kolaja and Karen Melchior, and their teams, to clarify some points about the impact of the CRA on Debian and downstream projects, and the improvements in the last version of the proposed regulation.

11 February, 2024 12:00AM by Utkarsh Gupta

February 10, 2024

Andrew Cater

Debian point releases - updated media for Bullseye (11.9) and Bookworm (12.5) - 2024-02-10

It's been a LONG day: two point releases in a day take on the order of twelve or thirteen hours of fairly solid work on the part of those doing the releases and testing.

Thanks firstly to the main Debian release team for all the initial work.

Thanks to Isy, RattusRattus, Sledge and egw in Cottenham, smcv and Helen closer to the centre of Cambridge, cacin and others who have dropped in and out of IRC and helped testing.

I've been at home but active on IRC - missing the team (and the food) and drinking far too much coffee/eating too many biscuits.

We've found relatively few bugs that we haven't previously noted: it's been a good day. Back again at some point, a couple of months from now, to do this all over again.

With luck, I can embed a picture of the Cottenham folk below: it's fun to know _exactly_ where people are because you've been there yourself.



10 February, 2024 10:54PM by Andrew Cater (noreply@blogger.com)

February 09, 2024

Thorsten Alteholz

My Debian Activities in January 2024

FTP master

This month I accepted 333 and rejected 31 packages. The overall number of packages that got accepted was 342.

Hooray, I already accepted package number 30000.

The statistics I take my numbers from go back to February 2002. Up to now, 81694 packages have been accepted. Given that I accepted package number 20000 in October 2020, will I be able to accept half of all the packages that ever made it through NEW?

Debian LTS

This was my hundred and fifteenth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3726-1] bind9 security update for one CVE to fix stack exhaustion
  • [#1060186] Bookworm PU-bug for libde265; yes, this is a new one.
  • [#1056935] Bullseye PU-bug for libde265; yes, this is a new one as well.

This month I was finally able to really run the test suite of bind9. I had already wanted to give up on this package, but Santiago encouraged me to proceed. So, here you are: a fixed Buster version. Jessie and Stretch have to wait a bit until the dust has settled.

Last but not least I also did a few days of frontdesk duties.

Debian ELTS

This month was the sixty-sixth ELTS month. During my allocated time I uploaded:

  • [ELA-1031-1] xerces-c security update for one CVE to fix an out-of-bounds access in Jessie and Stretch
  • [ELA-1036-1] jasper security update for one CVE to fix an invalid memory write

This month I also worked on the Jessie and Stretch updates for bind9. The uploads should happen soon. I also started to work on an update for exim4. Last but not least I did a few days of frontdesk duties.

Debian Printing

This month I adopted:

  • magicfilter
  • ifhp

At the moment these packages are the last adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don’t hesitate to ask me. But don’t wait too long, I have fun processing RM-bugs :-).

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

  • pyicloud to remove the deprecated dependency python3-future

Other stuff

This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one issue or another:

09 February, 2024 03:23PM by alteholz

Abhijith PA

A new(kind of) phone

I was using a refurbished Xiaomi Redmi 6 Pro (codename: sakura). I remember buying this phone to run LineageOS, as I had found that this model was supported. But by the time I bought it (there was quite a gap between my research and the actual purchase), LineageOS had ended support for this device. The maintainer might have ended this port; I don’t blame them. It’s non-rewarding work.

Later I found there was a DotOS custom ROM available. I did a week of test runs. There seemed to be a lot of bugs, but they were all minor and I could live with them. Since there was no other official port for the Redmi 6 Pro at the time, I settled on this buggy DotOS nightly build.

The phone was doing fine all these (3+) years, but recently the camera started showing greyish dots in some areas, to the point that I couldn’t even properly take a photo. The outer body of the phone wasn’t original, as it was refurbished, so the colors started to peel off and it wasn’t aesthetically pleasing. Then there was the location-detection issue: the phone took a while to get a GPS fix, and the battery drained fast. I recently started mapping public transportation routes, and the GPS issue was a problem for me. With all these problems combined, I decided to buy a new phone.

But choosing a phone wasn’t easy. There are far too many options in the market. However, one thing I was quite sure of was that I wouldn’t be buying a brand new phone, but a refurbished one. It is my protest against all these phone manufacturers who have convinced the general public that a mobile phone’s lifespan is only 2 years. Of course, there are a few exceptions, but the majority of players in the Indian market are not among them.

Now that I think about it, I haven’t bought any brand new computing device, be it phones or laptops, since 2015. All are refurbished devices, except for an Orange Pi. My Samsung R439 laptop, which I have already blogged about, is still going strong.

Back to picking a phone. I began by comparing the LineageOS website against an online refurbished store to pick a phone with official LineageOS support, then searched Reddit for any reported hardware failures.

Google Pixel phones are well-known in the custom ROM community for ease of installation, good hardware and Android security releases. My above claims are from privacyguides.org and XDA Developers. I was convinced to go with this choice. But I have someone at home who is at the age of throwing things, so I didn’t want to risk buying an expensive Pixel phone only to have it end up with a broken screen and edges. Perhaps I will consider buying a Pixel next time. After doing some more research and cross-comparison, I landed on the Redmi Note 9. Codename: merlinx.

Based on my previous experience, I knew it was going to be a pain to unlock the bootloader of Xiaomi phones, and I was prepared for that; but this time there was an extra hoop. The phone came with MIUI version 13. A small search on the XDA forums and Reddit told me that unless I downgraded to MIUI 12 or so, it would be difficult to unlock the bootloader on this device. And performing the downgrade wasn’t exactly easy, as it needs tools like SP Flash etc.

It took some time, but I completed the downgrade process. From there, the rest was a cakewalk, as everything was perfectly documented in the LineageOS wiki. And ta-da, I have a phone running LineageOS. I have been using it for some time now, and honestly I haven’t come across any bugs in my usage.

One piece of advice if you are going to flash a custom ROM: there are plenty of videos on YouTube about unlocking bootloaders and installing ROMs. Please REFRAIN from watching them and experimenting on your phone unless you have first figured out a way to un-brick your device.

09 February, 2024 10:53AM

Reproducible Builds (diffoscope)

diffoscope 256 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 256. This version includes the following changes:

* CVE-2024-25711: Use a deterministic name when extracting content from GPG
  artifacts instead of trusting the value of gpg's --use-embedded-filenames.

  This prevents a potential information disclosure vulnerability that could
  have been exploited by providing a specially-crafted GPG file with an
  embedded filename of, say, "../../.ssh/id_rsa".

  Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this
  issue and providing feedback.

  (Closes: reproducible-builds/diffoscope#361)

* Temporarily fix support for Python 3.11.8 re. a potential regression
  with the handling of ZIP files. (See reproducible-builds/diffoscope#362)

You can find out more by visiting the project homepage.

09 February, 2024 12:00AM

February 08, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.8.0.0 on CRAN: New Upstream, Interface Polish

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1119 other packages on CRAN, downloaded 32.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 575 times according to Google Scholar.

This release brings a new (stable) upstream (minor) release Armadillo 12.8.0 prepared by Conrad two days ago. We, as usual, prepared a release candidate which we tested against the over 1100 CRAN packages using RcppArmadillo. This found no issues, which was confirmed by CRAN once we uploaded and so it arrived as a new release today in a fully automated fashion.

We also made a small change that had been prepared in GitHub issue #400: a few internal header files that were cluttering the top-level of the include directory have been moved to internal directories. The standard header is of course unaffected, and the set of ‘full / light / lighter / lightest’ headers (matching what we did a while back in Rcpp) also continues to work as one expects. This change was also tested in a full reverse-dependency check in January but had not been released to CRAN yet.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.0.0 (2024-02-06)

  • Upgraded to Armadillo release 12.8.0 (Cortisol Injector)

    • Faster detection of symmetric expressions by pinv() and rank()

    • Expanded shift() to handle sparse matrices

    • Expanded conv_to for more flexible conversions between sparse and dense matrices

    • Added cbrt()

    • More compact representation of integers when saving matrices in CSV format

  • Five non-user facing top-level include files have been removed (#432 closing #400 and building on #395 and #396)

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 February, 2024 11:33PM

Sven Hoexter

Use GitHub CLI to List all Repository Secrets

Write it down before I forget about it again:

# list all non-archived repositories in the organization via the GraphQL API
for x in $(gh api graphql --paginate -f query='query($endCursor:String) { organization(login:"myorg") {
    repositories(first: 100, after: $endCursor, isArchived:false) {
        pageInfo {
            hasNextPage
            endCursor
        }
        nodes {
            name
        }
    }
    }' --jq '.data.organization.repositories.nodes[].name'); do

    # fetch the secret names of each repository and join them into one comma-separated string
    secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | tr '\n' ',')
    if [ -n "${secrets}" ]; then
        echo "${x},${secrets}"
    fi
done

This requests a list of all non-archived repositories in a GitHub org and queries each repository's secrets. If we find some, we output the repo name and the secrets in a comma-separated list. Not real CSV, but good enough for further processing. I have to admit it's kind of beautiful what you can do with the gh CLI by now. Sadly it seems the secrets are not yet available via GraphQL (or I missed it in the docs), so I just use the gh CLI to do the REST calls.
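For illustration only (repository and secret names here are made up), the output looks something like this; note the trailing comma that tr leaves behind:

repo-one,DEPLOY_TOKEN,NPM_TOKEN,
repo-two,AWS_ACCESS_KEY_ID,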

08 February, 2024 10:12AM

hackergotchi for Bits from Debian

Bits from Debian

DebConf24 Logo Contest Results

Earlier this month the DebConf team announced the DebConf24 Logo Contest asking aspiring artists, designers, and contributors to submit an image that would represent the host city of Busan, the host nation of South Korea, and promote the next Debian Developer Conference.

The logo contest for DebConf24 received 10 submissions and garnered 354 responses with 3 proposals in particular getting very close to first place. The winning logo received 88 votes, the 2nd favored logo received 87 votes, and the 3rd most favored received 86 votes.

Thank you to Woohee Yang and Junsang Moon for sharing their artistic visions.

A very special Thank You to everyone who took the time to vote for our beautiful new logo!

The DebConf24 Team is proud to share for preview only the winning logo for the 24th Debian Developer Conference:

[DebConf24 Logo Contest Winner]

'sun-seagull-sea' by Woohee Yang

This is a preview copy; other revisions will occur for sizing, print, and media… but we had to share it with you all now. :)

Looking forward to seeing you all at #debconf24 in #Busan, South Korea 2024!

08 February, 2024 05:00AM by Donald Norwood

Reproducible Builds

Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented in the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium, with a talk titled Reproducible Builds: The First Ten Years. The abstract reads:

In this talk Holger ‘h01ger’ Levsen will give an overview about Reproducible Builds: How it started with a small BoF at DebConf13 (and before), then grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an Executive Order of the President of the United States. And of course, the talk will not end there, but rather outline where we are today and where we still need to be going, until Debian stable (and other distros!) will be 100% reproducible, verified by many.

h01ger has been involved in reproducible builds since 2014 and so far has set up automated reproducibility testing for Debian, Fedora, Arch Linux, FreeBSD, NetBSD and coreboot.

More information can be found on FOSDEM’s own page for the talk, including a video recording and slides.


Separate from Holger’s talk, however, there were a number of other talks about reproducible builds at FOSDEM this year:

… and there was even an entire track on Software Bill of Materials.

08 February, 2024 12:00AM

February 07, 2024

Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.


“How we executed a critical supply chain attack on PyTorch”

John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform “used by titans like Google, Meta, Boeing, and Lockheed Martin”:

Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies – the list goes on. In short, it was bad. Quite bad.

The attack pivoted on PyTorch’s use of “self-hosted runners” as well as submitting a pull request to address a trivial typo in the project’s README file to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.


New Arch Linux forensic filesystem tool

On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images.

Called archlinux-userland-fs-cmp, the tool is “supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt.” Crucially, however, “at no point is any file from the mounted filesystem eval’d or otherwise executed. Parsers are written in a memory safe language.”

More information about the tool can be found on their announcement message, as well as on the tool’s homepage. A GIF of the tool in action is also available.


Issues with our SOURCE_DATE_EPOCH code?

Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:

I’m not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now.

Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.

Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute with further info.
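For context, the intent of the specification itself is simple. A minimal shell sketch of a build honouring the variable (assuming GNU date, and entirely separate from the C snippet under discussion) might look like:

# use SOURCE_DATE_EPOCH for embedded timestamps; fall back to the current time if unset
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d')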


Distribution updates

In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)”. Additionally 7 of the 8 bookworm images from the official download link build reproducibly at any later time.

In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month adding to our knowledge about identified issues.

Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.


Community updates

A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term ‘nondeterministic’ [] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds [].


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, such as uploading versions 254 and 255 to Debian, but focused on triaging and/or merging code from other contributors. This included adding support for comparing ‘eXtensible ARchive’ (.XAR/.PKG) files courtesy of Seth Michael Larson [][], as well as considerable work from Vekhir to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [][][][]. Thanks!


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Reduce the number of arm64 architecture workers from 24 to 16. []
    • Use diffoscope from the Debian release being tested again. []
    • Improve the handling when killing unwanted processes [][][] and be more verbose about it, too [].
    • Don’t mark a job as ‘failed’ if process marked as ‘to-be-killed’ is already gone. []
    • Display the architecture of builds that have been running for more than 48 hours. []
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. []
  • Package rescheduling changes:

    • Reduce IRC notifications to ‘1’ when rescheduling due to package status changes. []
    • Correctly set SUDO_USER when rescheduling packages. []
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). []
  • OpenWrt-related changes:

    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [][]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. []
  • Misc:

    • As it’s January, set the real_year variable to 2024 [] and bump various copyright years as well [].
    • Fix a large (!) number of spelling mistakes in various scripts. [][][]
    • Prevent Squid and Systemd processes from being killed by the kernel’s OOM killer. []
    • Install the iptables tool everywhere, else our custom rc.local script fails. []
    • Cleanup the /srv/workspace/pbuilder directory on boot. []
    • Automatically restart Squid if it fails. []
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [][]

Significant amounts of node maintenance were performed by Holger Levsen (eg. [][][][][][][] etc.) and Vagrant Cascadian (eg. [][][][][][][][]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. []

Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. []


Upstream patches

The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian — this was quickly fixed, however. []



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

07 February, 2024 10:16PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

carbon

I got a new work laptop this year: A Thinkpad X1 Carbon (Gen 11). It wasn't the one I wanted: I'd ordered an X1 Nano, which had a footprint very reminiscent to me of my beloved x40.

Never mind! The Carbon is lovely. Despite being ostensibly the same size as the T470s it's replacing, it's significantly more portable, and more capable. The two USB-A ports, as well as the full-size HDMI port, are welcome and useful (over the Nano).

I used to keep notes on setting up Linux on different types of hardware, but I haven't really bothered now for years. Things Just Work. That's good!

My old machine naming schemes are stretched beyond breaking point (and I've re-used my favourite hostname, qusp, one too many times), so I went for something new this time: riffing on Carbon, I settled (for now) on carbyne, a carbon allotrope which is of interest to nanotechnologists (seems appropriate).

07 February, 2024 07:24PM

February 06, 2024

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - February 2024

New Year, Same Great People! Our Debian User Group met for the first of our 2024 bi-monthly meetings on February 4th and it was loads of fun. Around twelve different people made it this time to Koumbit, where the meeting happened.

As a reminder, our meetings are called "Debian & Stuff" because we want to be as open as possible and welcome people that want to work on "other stuff" than Debian.

Here is what we did:

pollo:

  • tested a laptop that had a defective battery with a known good one (the problem was indeed with the battery :D)
  • renewed his expiring OpenPGP key
  • worked on removing trapperkeeper-scheduler-clojure from the DebCI reject_list
  • helped anarcat with packaging sigal
  • managed to feed most of the people present :)

LeLutin:

  • worked with lavamind to upload new upstream release of smokeping

mjeanson:

  • migrated from lxd to incus on his servers
  • helped anarcat with flashing his AirGradient

lavamind:

  • submitted a bug report on r-cran-rserve (promptly fixed/uploaded by maintainer!)
  • reviewed and uploaded smokeping
  • bug triaged the facter package
  • worked on puppet-agent new upstream version 8.4.0

viashimo:

  • updated puppet-strings from 2.9.0 to 4.2.1
  • reported upstream test failures on puppet-strings with recent versions of mdl

tvaz & tassia:

  • drafted a call & request for funding for the Vanier College FLOSS Club hardware marathon at Eastern Bloc
  • worked on an application to conduct research at Vanier College on Debian usability
  • babysitted :-)

joeDoe:

  • worked on his AirGradient
  • debugged the WiFi and VPN setup on his new laptop

anarcat:

Pictures

I was pretty busy this time around and ended up not taking a lot of pictures. Here's a bad one of the ceiling at Koumbit I took, and a picture by anarcat of the content of his boxes of loot:

A picture of the ceiling at Koumbit The content of anarcat's boxes of loot

06 February, 2024 07:30PM by Louis-Philippe Véronneau

hackergotchi for Robert McQueen

Robert McQueen

Flathub: Pros and Cons of Direct Uploads

I attended FOSDEM last weekend and had the pleasure to participate in the Flathub / Flatpak BOF on Saturday. A lot of the session was used up by an extensive discussion about the merits (or not) of allowing direct uploads versus building everything centrally on Flathub’s infrastructure, and related concerns such as automated security/dependency scanning.

My original motivation behind the idea was essentially two things. The first was to offer a simpler way forward for applications that use language-specific build tools that resolve and retrieve their own dependencies from the internet. Flathub doesn’t allow network access during builds, and so a lot of manual work and additional tooling is currently needed (see Python and Electron Flatpak guides). And the second was to offer a maybe more familiar flow to developers from other platforms who would just build something and then run another command to upload it to the store, without having to learn the syntax of a new build tool. There were many valid concerns raised in the room, and I think on reflection that this is still worth doing, but might not be as valuable a way forward for Flathub as I had initially hoped.

Of course, for a proprietary application where Flathub never sees the source or where it’s built, whether that binary is uploaded to us or downloaded by us doesn’t change much. But for an FLOSS application, a direct upload driven by the developer causes a regression on a number of fronts. We’re not getting too hung up on the “malicious developer inserts evil code in the binary” case because Flathub already works on the model of verifying the developer and the user makes a decision to trust that app – we don’t review the source after all. But we do lose other things such as our infrastructure building on multiple architectures, and visibility on whether the build environment or upload credentials have been compromised unbeknownst to the developer.

There is now a manual review process for when apps change their metadata such as name, icon, license and permissions – which would apply to any direct uploads as well. It was suggested that if only heavily sandboxed apps (eg no direct filesystem access without proper use of portals) were permitted to make direct uploads, the impact of such concerns might be somewhat mitigated by the sandboxing.

However, it was also pointed out that my go-to example of “Electron app developers can upload to Flathub with one command” was also a bit of a fiction. At present, none of them would pass that stricter sandboxing requirement. Almost all Electron apps run old versions of Chromium with less complete portal support, needing sandbox escapes to function correctly, and Electron (and Chromium’s) sandboxing still needs additional tooling/downstream patching to run inside a Flatpak. Buh-boh.

I think for established projects who already ship their own binaries from their own centralised/trusted infrastructure, and for developers who have understandable sensitivities about binary integrity such as encryption, password or financial tools, it’s a definite improvement that we’re able to set up direct uploads with such projects with less manual work. There are already quite a few applications – including verified ones – where the build recipe simply fetches a binary built elsewhere and unpacks it, and if this is already done centrally by the developer, repeating the exercise on Flathub’s server adds little value.

However for the individual developer experience, I think we need to zoom out a bit and think about how to improve this from a tools and infrastructure perspective as we grow Flathub, and as we seek to raise funds for different sources for these improvements. I took notes for everything that was mentioned as a tooling limitation during the BOF, along with a few ideas about how we could improve things, and hope to share these soon as part of an RFP/RFI (Request For Proposals/Request for Information) process. We don’t have funding yet but if we have some prospective collaborators to help refine the scope and estimate the cost/effort, we can use this to go and pursue funding opportunities.

06 February, 2024 10:57AM by ramcq

February 02, 2024

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in January 2024

02 February, 2024 04:26PM by Ben Hutchings

Scarlett Gately Moore

Some exciting news! Kubuntu: I’m back!!!

It’s official, the Kubuntu Council has hired me part time to work on the 24.04 LTS release, preparation for Plasma 6, and to bring life back into the distribution. First I want to thank the Kubuntu Council for this opportunity, and I plan a long and successful journey together!!!!

My first week ( I started midweek ):

It has been a busy one! Many meet and greets with the team and other interested parties. I had the chance to chat with Mike from Kubuntu Focus and I have to say I am absolutely amazed with the work they have done, and if you are in the market for a new laptop, you must check these out!!! https://kfocus.org Or if you want to try before you buy you can download the OS! All they ask is for an e-mail, which is completely reasonable. Hosting isn’t free! Besides, you can opt out anytime and they don’t share it with anyone. I look forward to working closely with this project.

We now have a Kubuntu Team in KDE Invent https://invent.kde.org/teams/distribution-kubuntu ; if you would like to join us, please don’t hesitate to ask! I have started a new wiki, and our first page is the ever-important bug triaging! It is still a WIP, but you can check it out here: https://invent.kde.org/teams/distribution-kubuntu/docs/-/wikis/Bug-Triage-Story-WIP . With that said, I have started the Launchpad work to make tracking our bugs easier by subscribing kubuntu-bugs to all our packages and creating proper projects for the packages that are missing them.

We have compiled a list of our various documentation links that need updated and Rick Timmis is updating kubuntu.org! Aaron Honeycutt has been busy with the Kubuntu Manual https://github.com/kubuntu-team/kubuntu-manual which is in good shape. We just need to improve our developer story 🙂

I have been working on the rather massive Apparmor bug https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2046844 with testing the fixes from the ppa and writing profiles for the various KDE packages affected ( pretty much anything that uses webengine ) and making progress there.

My next order of business is staging Frameworks 5.114 with guidance from our super awesome Rik Mills, who has been doing most of the heavy lifting in Kubuntu for many years now. So thank you for that, Rik 🙂

I will also start on our big transition to the Calamares installer! I do have experience here, so I expect it will be a smooth one.

I am so excited for the future of Kubuntu and the exciting things to come! With that said, Kubuntu funding is driven by community donations. There is enough to pay me part time for a couple of contracts, but it will run out, and a full-time contract would be super awesome. I am reaching out to anyone enjoying Kubuntu who wants to help with its future: please consider a donation! We are working on more donation options, but for now you can donate through PayPal at https://kubuntu.org/donate/ Thank you!!!!!

02 February, 2024 04:22PM by sgmoore

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in December 2023

02 February, 2024 03:55PM by Ben Hutchings

Ian Jackson

UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot.

The only thing that got their attention was my draft Particulars of Claim for intended legal action.

Surprisingly, I got them to admit in writing that the “disbursement fee” they charge recipients alongside the actual VAT, is just something they made up with no legal basis.

What happened

Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS.

UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier.

But this post isn’t mostly about that. This post is about VAT. You see, import VAT was due, because of fucking Brexit.

UPS made a complete hash of collecting that VAT. Their computers can’t issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input.

The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it).

This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs.

Having tried asking nicely, and sending stiff letters, I too threatened litigation.

I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the “Part 8 procedure”, which involves sending all of the written evidence you’re going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn’t engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I’d send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me.

Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I’ve redacted the reference numbers).

Outcome

The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I’ve received the £50.

VAT calculation

They also finally included an actual explanation of the VAT calculation. It’s absurd, but it’s not UPS’s absurd:

The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84.

HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure.

The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment.

The correct calculation for the import charges is therefore as follows:

Goods value divided by exchange rate 2,489.53 EUR / 1.1683 = 2,130.89 GBP

Duty: Goods value plus freight (%) 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate. X 0 % = 0 GBP

VAT: Goods value plus freight (100%) 2,130.89 GBP + 0 = 2,130.89 GBP

That total plus duty and VAT adjustment 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,138.38 GBP. That total times 20% VAT = 427.67 GBP

As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.

This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It’s completely ridiculous that it took four months and near-litigation to obtain it.

“Disbursement fee”

One more thing. UPS billed me a £12 “disbursement fee”.

When you import something, there’s often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt.

It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn’t any legal basis for this.

In my own offer of settlement I proposed that UPS should:

State under what principle of English law (such as, what enactment or principle of Common Law), you levy the “disbursement fee” (or refund it).

To my surprise they actually responded to this in their own settlement letter. (They didn’t, for example, mention the harassment at all.) They said (emphasis mine):

A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited.

Regarding UPS’ disbursement fee this is an administrative charge levied for the use of UPS’ deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.

On UPS’s analysis, they can just make up whatever fee they like. That is clearly not right (and I don’t even need to refer to consumer protection law, which would also make it obviously unlawful).

And, that everyone does it doesn’t make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse.

Next time this comes up I might have a go at getting the fee back. (Obviously I’ll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn’t apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: “Schemes” made under section 89 of the Postal Services Act 2000.

This is obviously ridiculous but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).



comment count unavailable comments

02 February, 2024 12:38AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.21 on CRAN: Maintenance

A new minor release 0.4.21 of RQuantLib arrived at CRAN this afternoon, and has already been uploaded to Debian as well.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib benefits from some kind attention that Jeroen has been paying to how we build (especially at CRAN) on both macOS and Windows. So the build processes are a little better now, and no internal code changed. QuantLib 1.33 built unchanged.

Changes in RQuantLib version 0.4.21 (2024-02-01)

  • Generalize macOS build to universal build (Jeroen in #179)

  • Generalize Windows build to arm64 (Jeroen in #181)

  • Generalize version string use to support cmake use (Jeroen in #181 fixing #180)

  • Minor update to 'ci.yaml' github action (Dirk)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 February, 2024 12:04AM