January 28, 2023

Russ Allbery

Review: The Library of the Dead

Review: The Library of the Dead, by T.L. Huchu

Series: Edinburgh Nights #1
Publisher: Tor
Copyright: 2021
Printing: 2022
ISBN: 1-250-76777-6
Format: Kindle
Pages: 329

The Library of the Dead is the first book in a post-apocalyptic (sort of) urban fantasy series set in Edinburgh, written by Zimbabwean author (and current Scotland resident) T.L. Huchu.

Ropa is a ghosttalker. This means she can see people who have died but are still lingering because they have unfinished business. She can stabilize them and understand what they're saying with the help of her mbira. At the age of fourteen, she's the sole source of income for her small family. She lives with her grandmother and younger sister in a caravan (people in the US call it an RV), paying rent to an enterprising farmer turned landlord.

Ropa's Edinburgh is much worse off than ours. Everything is poorer, more run-down, and more tenuous, but other than a few hints about global warming, we never learn the history. It reminded me a bit of the world in Octavia Butler's Parable of the Sower in the feel of civilization crumbling without a specific cause. Unlike that series, The Library of the Dead is not about the collapse or responses to it. The partial ruin of the city is the mostly unremarked backdrop of Ropa's life.

Much of the book follows Ropa's daily life carrying messages for ghosts and taking care of her family. She does discover the titular library when a wealthier friend who got a job there shows it off to her, but it has no significant role in the plot. (That was disappointing.) The core plot, once Ropa is convinced by her grandmother to focus on it, is the missing son of a dead woman, who turns out to not be the only missing child.

This is urban fantasy with the standard first-person perspective, so Ropa is the narrator. This style of book needs a memorable protagonist, and Ropa is certainly that. She's a talker who takes obvious delight in narrating her own story alongside a constant patter of opinions, observations, and Scottish dialect. Ropa is also poor.

That last may not sound that notable; a lot of urban fantasy protagonists are not well-off. But most of them feel culturally middle-class in a way that Ropa does not. Money may be a story constraint in other books, but it rarely feels like a constraint on daily life and experience the way it does here. It's hard to describe the difference in tone succinctly, since it's a lot of small things: the constant presence of money concerns, the frustration of possessions that are stolen or missing and can't be replaced, the tedious chores one has to do when there's no money, even the language and vulgarity Ropa uses. This is rare in fantasy and excellent characterization work.

Given that, I am still frustrated with myself over how much I struggled with Ropa as a narrator. She's happy to talk about what is happening to her and what she's learning about (she listens voraciously to non-fiction while running messages), but she deflects, minimizes, or rushes past any mention of what she's feeling. If you don't like the angst that's common from urban fantasy protagonists, this may be the book for you. I have complained about that angst before, and therefore feel like this should have been the book for me, but apparently I need a minimum level of emotional processing and introspection from the narrator. Ropa is utterly unwilling to do any of that. It's possible to piece together what she's feeling and worrying about, but the reader has to rely on hints and oblique comments that she passes over quickly.

It didn't help that Ropa is not interested in the same things in her world that I was interested in. She's not an unreliable narrator in the conventional sense; she doesn't lie to the reader or intentionally hide information. And yet, the experience of reading this book was, for me, similar to reading a book with an unreliable narrator. Ropa consistently refused to look at what I wanted her to look at or think about what I wanted her to think about.

For example, when she has an opportunity to learn magic through books from the titular library, her initial enthusiasm is infectious. Huchu does a great job showing the excitement of someone who likes new ideas and likes telling other people about the neat things she just learned. But when things don't work the way she expected from the books, she doesn't follow up, experiment, or try to understand why. When her grandmother tries to explain something to her from a different angle, she blows her off and refuses to pay attention. And when she does get magic to work, she never tries to connect that to her previous understanding. I kept waiting for Ropa to try to build her own mental model of magic, but she would only toy with an idea for a few pages and then put it down and never mention it again.

This is not a fault in the book, just a mismatch between the book and what I wanted to read. All of this is consistent with Ropa's defensive strategies, emotional resiliency, and approach to understanding the world. (I strongly suspect Huchu was giving Ropa some ADHD characteristics, and if so, I think he got it spot on.) Given that, I tried to pivot to appreciating the characterization and the world, but that ran into another mismatch I had with this book, and the reason why I passed on it when it initially came out.

I tend to avoid fantasy novels about ghosts. This is not because I mind ghosts themselves, but I've learned from experience that authors who write about ghosts usually also write about other things that I don't want to read about. That unfortunately was the case here; The Library of the Dead was too far into horror for me. There's child abuse, drugs, body horror, and similar nastiness here, more than I wanted in my head. Ropa's full-speed-ahead attitude and refusal to dwell on anything made it a bit easier to read, but it was still too much for me.

Ropa is a great character who is refreshingly different than the typical urban fantasy protagonist, and the few hints of the magical library and world background we get were intriguing. This book was not for me, but I can see why other people will love it.

Followed by Our Lady of Mysterious Ailments.

Rating: 6 out of 10

28 January, 2023 03:54AM

January 27, 2023

Matthew Garrett

Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto"

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.
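
As a rough sketch of that callback mechanism (this is not OpenSSH's actual code; the hardware_sign name and body are hypothetical placeholders, and a fuller implementation would also preserve the original buffer-level sign function), the registration with the OpenSSL/LibreSSL EC_KEY_METHOD API looks something like this:

#include <openssl/ec.h>
#include <openssl/ecdsa.h>

/* Hypothetical callback: hand the digest to the PKCS#11 module / hardware
 * and return the resulting ECDSA_SIG. kinv and r are unused here. */
static ECDSA_SIG *hardware_sign(const unsigned char *dgst, int dgst_len,
                                const BIGNUM *kinv, const BIGNUM *r,
                                EC_KEY *eckey) {
    /* ... call C_Sign() in the PKCS#11 module and build an ECDSA_SIG ... */
    return NULL; /* placeholder */
}

static void install_sign_callback(EC_KEY *key) {
    /* Start from the default method so everything else keeps working,
     * then override only the signing function. (The method object is
     * intentionally leaked in this sketch.) */
    EC_KEY_METHOD *meth = EC_KEY_METHOD_new(EC_KEY_get_default_method());
    EC_KEY_METHOD_set_sign(meth, NULL, NULL, hardware_sign);
    EC_KEY_set_method(key, meth);
    /* From here on, ECDSA_do_sign() on this key should end up in hardware_sign(). */
}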

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on ECDSA_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so it seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:
nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);
What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.

27 January, 2023 11:39PM

January 26, 2023

Matt Brown

Goals for 2023

This is the second of a two-part post covering my goals for 2023. See the first part to understand the vision, mission and strategy driving these goals.

I want to thank my friend Nat, and Will Larson, whose annual reviews I’ve always enjoyed reading, for inspiring me to write these posts.

I’ve found the process of articulating my motivations and goals very useful for clarifying my thoughts and creating tangible next steps. I’m grateful for that in and of itself, but I also hope that by publishing this you too might find it interesting, and that the additional public accountability it creates will be a positive encouragement to me.

2023 Goals

My focus for 2023 is to bootstrap a business that I can use to build software that solves real problems (see the strategy from the previous post for more details on this). I’m going to track this via three goals:

  1. Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.

  2. Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.

  3. Develop and maintain a broad professional network.

Consulting

Based on my background and experience, I plan to target my consulting across three areas:

  1. Leadership - building and growing operationally focused software teams following SRE/devops principles. A typical engagement may involve helping a client establish a brand new SRE/devops practice, or to strengthen and mature the existing practices used to build and operate reliable software in their team(s).

  2. Architecture - applying deep technical expertise to the design of large software systems, particularly focusing on their reliability and operability. A typical engagement may involve design input and decision making support for key aspects of a new system, providing external review and analysis to improve an existing system, or delivering actionable, tactical next steps during or immediately after a reliability crisis.

  3. Technology Strategy - translating high-level business needs into a technical roadmap that provides understandable explanations of the value software can deliver in that context, and the iterative series of appropriately sized projects required to realise it. A typical customer for this would be a small to medium sized business outside of the software industry with a desire to use software in a transformative way to improve their business but who does not employ the necessary in-house expertise to lead that transition.

Product Development

There are three, currently extremely high level, product ideas that I’m excited to explore:

  1. Improve co-ordination of electricity resources to accelerate the electrification of NZ’s energy demand and the transition to a zero-carbon grid.

    NZ has huge potential to be a world-leader in decarbonising energy use through electrification, but requires a massive transition to realise the benefits. Many of the challenges to that transition involve coordination of an order of magnitude more distributed energy resources (DER) in a much more dynamic and software-oriented manner than the electricity industry is traditionally experienced with.

    The concept of improving DER coordination is not novel, but our grid has unique characteristics that mean we’re likely to need to build localised solutions. There is a strong match between my experience with large, high-reliability distributed software systems and this need. With renewed motivation in the industry for rapid progress, and many conversations and consultations still in their early stages, this is a very compelling space to explore with the intent of developing a more detailed product opportunity to pursue.

  2. Reduce agricultural emissions by making high performance farm management, including effortless compliance reporting, straightforward, fun and effective for busy farmers.

    NZ’s commitments to reduce agricultural emissions (our largest single sector) place increased compliance and reporting burdens on busy farmers who don’t want to report the same data multiple times to different regulators and authorities. In tandem, rising business costs and constraints drive a need for continuous improvements in efficiency, performance and farm management processes in order to remain profitable. This in turn drives increases in complexity and the volume of data that farmers must work with.

    Many industry organisations and associated software developers offer existing products aimed at addressing aspects of these problems, but anecdotal feedback indicates these are poorly integrated, piecemeal solutions that are often frustrating to use - a burden rather than a source of continuous improvement. It looks like there could be an opportunity for a delightful, comprehensive farm management and reporting system to disrupt the industry and help farmers run more profitable and sustainable farms while also reducing compliance costs and effort.

  3. Lower sickness rates and improve cognitive performance by enabling every indoor space to benefit from continuous ventilation monitoring and reporting.

    Indoor air quality is important in reducing disease transmission risk and promoting optimal cognitive performance, but despite the current pandemic temporarily raising its profile, a focus on indoor air quality generally remains under the radar for most people.

    One factor contributing to this is the lack of widely available systems for continuously monitoring and reporting on air quality. I built https://co2mon.nz/ to help address this problem in my children’s school during 2022. I see potential to further grow this business through marketing and raising awareness of the value of ventilation monitoring in all indoor environments.

In addition to these mission aligned product ideas, I’m also interested in exploring the creation of small to medium sized SaaS applications that deliver useful value by serving the needs of a specialised or niche business or industry. Even when not directly linked to the overall mission, the development and operation of products of this type can support the strategy. Each application adds direct revenue and also contributes to achieving better economies of scale in the many backend processes and infrastructure required to deliver secure, reliable and performant software systems.

Developing my professional network

To help make this goal more actionable and measurable I will track 3 sub goals:

  1. To build a professional relationship with at least 30 new people this year, meaning that we’ve met and had a decent conversation at least a couple of times and keep in touch at least every few months in some form.

  2. To publish a piece of writing on this site at least once a week, and for many of those to generate interesting conversations and feedback. I’ll use this as an opportunity to explore product ideas, highlight my consulting expertise and generally contribute interesting technical content into the world.

  3. To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

Next Steps

Over the coming weeks I’ll write more about each of these topics - you can use the box in the sidebar (or on the front page, if you’re on a phone) to be notified when I post new writing (there’s also an RSS feed here, for the geeks).

I’d love to have your feedback and engagement on these goals too - please drop me an email with your thoughts or even book a meeting - it won’t be a distraction to me, you’ll be helping me meet my goal of developing and maintaining my network :)

26 January, 2023 07:50PM

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2022 edition

For the fourth year in a row, I've asked Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway.

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licenced under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. It's pretty much the same code as last year. I re-used last year's converter script and, although I had to clean some of the data by hand, it worked pretty well.

26 January, 2023 06:30PM by Louis-Philippe Véronneau

Shirish Agarwal

Minidebconf Tamilnadu 2023, Tinnitus, Cooking, Books and Series.

First up is Minidebconf Tamilnadu 2023, which will be held on 28-29 January 2023. You can find the rest of the details here. I do hope we get to see/hear some good stuff from the Minidebconf. Best of luck to all those who are applying.

Tinnitus

During the lock-down of March 2020, I became aware of noise in my ears and subsequently major hearing loss. It took me quite a while to learn that Tinnitus happens both to those who have hearing loss and to those who do not. I keep running into threads like this and, as shared by someone, nobody knows what really causes it. I did try some of the apps (an app called Resound on Android) that are supposed to tackle Tinnitus, but it hasn’t helped much. There is this, but at least for me it is, right now, pretty speculative. Also this, and again highly speculative.

Cooking

Since mum passed away, I haven’t cooked anything. This used to give me pleasure but now it just doesn’t feel right. Cooking is something you enjoy when you are doing it for somebody else and not just for yourself, at least that’s how I feel, and with that goes the curiosity to learn more recipes. I do wanna buy a wok at some point, but when and how I just don’t know.

Books

I have been reading books quite a bit, and because of that I had to revisit and understand ISBN again. Perhaps I might have shared it before; the history of ISBN really is something. And that connects with the book I read, Raising Steam by Terry Pratchett. Raising Steam is the 40th book in the Discworld series, and it basically romanticizes and reminisces about how the idea of an engine was born, then a steam engine, and how railways actually started. A lot of history and experience from the early years of steam railways has been taken and transplanted into the book, along with how railways can be successful if only they are invested in wisely and maintained properly. This is where imagination and reality come apart, as maintenance isn’t done and then you have issues. While this is and was the case in the UK, a similar situation exists in India and many other places around the world, and it doesn’t matter whether it is private or public. The exceptions are Germany and France, but that may be due to labour movements that happened and were successful there, unlike in other places. I could go on, but then it would become a different article in itself. Suffice to say there is much to learn, and you need serious people to look after it. Both in the UK and in India we lack that. And not just in railways but in civil aviation too, but again that is a story in itself.

Web-series

Apart from books, I have been watching web-series; Willow is a good one that I enjoyed even though I hadn’t seen the earlier movie. While there has been a flurry of movies and web-series, both due to the end of the year and the beginning of 2023, I have tried to be a bit choosy about what I want to watch. If it has crime, fantasy or drama then usually I like it. For example, I saw Blackout and was pretty much engrossed in what would happen next. It also leads you to ask questions about centralization vs. de-centralization for power and other utilities, and makes a case for communities to have their own utilities apart from the grid as a fallback. How we go about that over decades or centuries is perhaps a different question altogether. There were two books that kinda stood out for me; the first was Ian Rankin’s ‘Naming of the Dead’. The book is about a cynical John Rebus, a man after my own heart. I am probably going to buy a few more of his series. In a way it also tells you why the UK is the way it is right now. Another book that I liked was Shades of Grey by Jasper Fforde. This is one of the books that Mum would have clearly liked. It is pretty unusual while at the same time very close to 1984 and other such dystopian novels. The main trope of the book is what color you can see and how much of it you can see. The main character is somebody around the age of 20 who can see Red. One of the interesting aspects of the book is ‘de-facting’, which closely resembles the post-truth world where alternative facts can be made out of thin air and don’t need any scientific evidence to back them up. In Jasper’s world, they don’t care about how things work, most technology is banned, curiosity is considered harmful, and those who show it are murdered one way or the other. Interestingly, the author just last year decided to start book 2 of what is supposed to be a three-book series. This also tells you, in a way, why the U.S. is in such a precarious situation. A part of it is also due to the media, which is in the hands of a chosen few; the same goes for the UK and India, almost an oligopoly.

The Great Escape

This is also a book, but it is about the experiences of people not in the 19th-20th century but today, and it tells you that slavery, and human trafficking as well, is alive and well. This piece from NPR tells you about an MNC and Indian workers. What I found interesting is that there is barely a mention of the Indian Embassy, which is supposed to help Indian people. I do know for a fact that the embassies of India have seen a drastic shortage of both people and materials ever since the new Govt. came into place nine years ago. Incidentally, the BBC shared a documentary about the 2002 Gujarat riots, and that has been censored in India. They keep quiet about the UK Govt., which did find that the Chief Minister was directly responsible for the killings; in fact his number 2, Amit Shah, had said barely a month ago, in the election cycle, that they would do 2002 again. But sadly, no hate speech FIR or any other action was taken against Mr. Shah. There have been attempts by people to showcase the documentary; for example, JNU tried it and the rowdies from ABVP (an arm of the BJP) created violence. Even the questions that have been asked by The Wire, the GOI will not acknowledge.

Interestingly, all of India’s edtechs have taken a beating in the last 6-8 months, including the biggest, BYJU’s. Sharing a story from 2021 when things were at their best; today all of them are at the bottom. In fact, the public has been wary as the prices of the courses have kept on increasing and most ‘case studies’ have been found to be fake. Also, the general outlook on jobs and growth has been pessimistic. In fact, most companies have been shedding jobs by the truckload, mostly in the I.T. sector but in other sectors as well. Hospitality and other related sectors have taken a huge beating, part of it post-pandemic, part of it the Govt’s refusal to either spend money or make any positive policies for infrastructure, education, medical care, you name it; they think the private sector has all the answers, which has been proven wrong again and again. I did not want to end on a discordant note but things are the way they are 😦

26 January, 2023 01:34PM by shirishag75

Bálint Réczey

How to speed up your next build with Firebuild?

Firebuild logo

TL;DR: Just prefix your build command (or any command) with firebuild:

firebuild <build command>

OK, but how does it work?

Firebuild intercepts all processes started by the command to cache their outputs. Next time when the command or any of its descendant commands is executed with the same parameters, inputs and environment, the outputs are replayed (the command is shortcut) from the cache instead of running the command again.

This is similar to how ccache and other compiler-specific caches work, but firebuild can shortcut any deterministic command, not only a specific list of compilers. Since the inputs of each command are determined at run time, firebuild does not need a complete, maintained dependency graph in the source tree the way Bazel does. It can work with any build system that does not implement its own caching mechanism.

Determinism of commands is detected at run-time by preloading libfirebuild.so and interposing standard library calls and syscalls. If the command's and all its descendants’ inputs are available when the command starts, and all outputs can be calculated from the inputs, then the command can be shortcut; otherwise it will be executed again. The interception comes with a 5-10% overhead, but rebuilds can be 5-20 times faster, or even more, depending on the changes between the builds.
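
For a feel of the general mechanism (this is a generic LD_PRELOAD illustration, not firebuild's actual code), a preloaded shared library can interpose a libc call such as open() and observe every file a command touches:

// interpose.cpp -- illustrative only
// Build:  g++ -shared -fPIC -o libinterpose.so interpose.cpp -ldl
// Use:    LD_PRELOAD=$PWD/libinterpose.so <some command>
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for RTLD_NEXT
#endif
#include <cstdarg>
#include <cstdio>
#include <dlfcn.h>
#include <fcntl.h>

extern "C" int open(const char *path, int flags, ...) {
    // look up the real open() the first time we are called
    static int (*real_open)(const char *, int, ...) =
        reinterpret_cast<int (*)(const char *, int, ...)>(dlsym(RTLD_NEXT, "open"));
    int mode = 0;
    if (flags & O_CREAT) {          // the mode argument is only present with O_CREAT
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, int);
        va_end(ap);
    }
    std::fprintf(stderr, "open(%s)\n", path);   // record the input being accessed
    return real_open(path, flags, mode);
}

A real interceptor hooks many more calls (open64, exec and so on); this only sketches the preloading idea.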

Can I try it?

It is already available in Debian Unstable and Testing and in Ubuntu’s development release, and the latest stable version is back-ported to supported Ubuntu releases via a PPA.

How can I analyze my builds with firebuild?

Firebuild can generate an HTML report showing each command’s contribution to the build time. Below are the “before” and “after” reports of json4s, a Scala project. The command call graphs (lower ones) show that java (scalac) took 99% of the original build. Since the scalac invocations are shortcut (cutting the second build’s time to less than 2% of the first one) they don’t even show up in the accelerated second build’s call graph. What’s left to be executed again in the second run are env, perl, make and a few simple commands.

The upper graphs are the process trees, with expandable nodes (in blue) also showing which command invocations were shortcut (green). Clicking on a node shows details of the command and the reason if it was not shortcut.

Could I accelerate my project more?

Firebuild works best for builds with CPU-intensive processes and comes with defaults to not cache very quick commands, such as sh, grep, sed, etc., because caching those would take cache space and shortcutting them may not speed up the build that much. They can still be shortcut with their parent command. Firebuild’s strength is that it can find shortcutting points in the process tree automatically, e.g. from sh -c 'bash -c "sh -c echo Hello World!"' bash would be shortcut, but none of the sh commands would be cached. In typical builds there are many such commands from the skip_cache list. Caching those commands with firebuild -o 'processes.skip_cache = []' can improve acceleration and make the reports smaller.

Firebuild also supports several debug flags and -d proc helps finding reasons for not shortcutting some commands:

...
FIREBUILD: Command "/usr/bin/make" can't be short-cut due to: Executable set to be not shortcut, {ExecedProcess 1329.2, running, "make -f debian/rules build", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/sort" can't be short-cut due to: Process read from inherited fd , {ExecedProcess 4161.1, running, "sort", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/find" can't be short-cut due to: fstatfs() family operating on fds is not supported, {ExecedProcess 1360.1, running, "find -mindepth 1 ...
...

make, ninja and other incremental build tool binaries are not shortcut because they compare the timestamp of files, but they are fast at least and every build step they perform can still be shortcut. Ideally the slower build steps that could not be shortcut can be re-implemented in ways that can be shortcut by avoiding tools performing unsupported operations.

I hope these tools help speed up your builds with very little effort, but if not, and you find something to fix or improve in firebuild itself, please report it or just leave feedback!

Happy speeding, but not on public roads! 😉

26 January, 2023 09:06AM by Réczey Bálint

Matt Brown

Vision, Mission and Strategy

This is part one of a two-part post, covering high-level thoughts around my motivations and vision. Part two (to be published tomorrow) contains my specific goals for 2023.

A new year is upon us! My plan was to be 6 months into the journey of starting a business by this point.

I made some very tentative progress towards that goal in 2022, registering a company and starting some consulting work, but on the whole I’ve found it much harder than expected to gather the necessary energy to begin that journey in earnest.

Reflection

I’m excited about the next chapter of my career, so the fact that I’ve been struggling to get started has been frustrating. The only upside is that the delay has given me plenty of time to reflect on the last few years and what I can learn from them and draw some lessons to help better manage and sustain my energy going forward.

Purpose

A large part of what I’ve realised is that I should have left Google years ago. It was a great place to work, and I’m incredibly grateful for everything I learned and received during my time there. For years it was my dream job, but my happiness had been declining, and instead of taking the (relatively small) risk of leaving to the unknown, I tried several variations of team and role in the hope of restoring the dream.

The reality is that a significant chunk of my motivation and energy comes from being able to link my work back to a bigger purpose that delivers concrete positive impact in the world. I felt that link through Google’s mission to make information universally accessible and useful for the first 10-11 years, but for the latter 4-5 years my ability to see that link was tenuous at best and trying to push through the challenges presented without that link providing a reliable source of energy is what drove my unhappiness and led to needing a longer break to recharge.

I expect the challenges of starting a business to be even greater than what I experienced at Google, so the lesson I’m taking from this is that it’s crucial for me to understand the link between my work and the bigger purpose, with concrete positive impact in the world, that I’m aiming to contribute to.

Community

The second factor that I’ve slowly come to realise has been missing from my career in the last few years has been participation in a professional community and a variety of enriching interpersonal relationships. As much as I value and need this type of interaction, fostering and sustaining it unfortunately doesn’t come naturally to me. Working remotely since 2016 and then taking a 9 month break out of the industry are not particularly helpful contributors to building and maintaining a wide network either!

The lesson here is simply that I’m going to need to push past my comfort zone in reaching out and introducing myself to a range of people in order to grow my professional network, and equally I need to be diligent and disciplined in making time to maintain and regularly connect with people whom I respect and find energising to interact with.

Personal Influences

Lastly, I’ve been reflecting on a set of principles that are important to me. These are not so much new lessons, more confirming to myself what I value moving forward. There are many things I could include here, but to keep it somewhat brief, the key influences on my thinking are:

  • Independence - I can’t entirely explain why or where it comes from, but since the start of my professional career (which I consider to be my consulting/freelancing development during high school) I’ve understood that I’m far more motivated by building and growing my own business than I am by working for someone else. Working for myself has always felt like the default and sensible course - I’m excited to get back to that.

  • Openness - Open is better than closed, in terms of software, business model and organisational processes. This continues to be a strong belief and something I want to uphold in my business endeavours. Competition should be based on superior technical quality or service, not artificial constraints or barriers to entry that lock customers and users into a single solution or market. Protocols and networks should be open for wide participation and easily accessible to new entrants and competition.

  • People first - This applies both to how we work with each other - respectfully, valuing diversity and with integrity, and to how we apply technology to our world - with consideration for all stakeholders it may affect and awareness of both the intended and potential unintended impacts.

Framework

Using Vision, Mission and Strategy as a planning framework has worked quite well for me when building and growing teams over the years, so I plan to re-use it personally to help organise the above reflections into a hopefully cohesive plan that results in some useful 2023 goals.

Vision

Software systems contribute direct and meaningful impact to solving real problems in our world.

Each word has a fair bit of meaning behind it for me, so breaking it down a little bit:

  • software systems - excite me because software is eating the world and has significant potential to do good.
  • contribute - Software alone doesn’t solve problems, and misapplied it can easily make things worse. To contribute, software needs to be designed intentionally and evaluated with an awareness of the risks it could pose within the complex system that is our modern world.
  • direct and meaningful impact - I’m not looking for broad outcomes like improving productivity or communication, which apply generally across many problems. I want to see software applied to solve specific blockers whose removal unlocks significant progress towards solving a problem.
  • real - as opposed to straightforward problems. The types of issue where the acknowledgement of it as a “real problem” often ends the sentence as it feels too big to tackle. Climate change and pandemic risk are examples of real problems. Decentralising finance or selling more widgets are not.
  • in our world - is mostly filler to round out the sentence nicely, but I do think we should probably sort out the mess we’re making on our own planet before trying to colonise anywhere else.

Mission

To lead the development and operation of software systems that deliver new opportunities for individuals, businesses and communities to solve the real problems in their community.

Again breaking down the intent a little bit:

  • lead - having a meaningful impact on real problems is a big job. I won’t succeed as a one man band. It will require building and growing a larger team.
  • development and operation - development is fun and necessary, but I also wanted to highlight that the ongoing operation and integration of those software systems into the broader social and human systems of our world is an equally important and ongoing need.
  • new opportunities - are important to drive and motivate investment in the adoption of technology. Building or operating a system that maintains the status quo is not motivating for me.
  • individuals, businesses and communities - aka everyone! But each of these groups (as examples, not specific) will have diverse roles, needs and interactions with the software which must be considered to ensure the system achieves the desired contribution and impact.
  • their community - refines the ambition from the vision to an achievable scope of action within which to execute the mission. We won’t solve our problems by targeting one big global fix, but if we each participate in solving the problems in our community, collectively it will make a difference.

Strategy

Build a sustainable business that provides a home and infrastructure to support a continuous cycle of development, validation and growth of software systems fulfilling the mission and vision above.

  • Accumulate meaningful impact via a portfolio of systems rather than one big bet.
  • Focus on opportunities that promote the decarbonisation of our economy (the most pressing problem our society faces), but not at the expense of ignoring compelling opportunities to contribute impact to other real problems also.
  • Favour the marathon over the sprint - while being first can be fun and convey benefits, it’s often the fast-followers who learn from the initial mistakes and deliver lasting change and broader impact.

In keeping with the final bullet point, I aim to evaluate the strategy against a long-term view of success. What excites me about it is that it has the potential to provide structure and clarity for my work while also enabling many future paths - from operating a portfolio of micro-SaaS products that each solve real problems for a specific niche or community, to diving deep into a single compelling opportunity for a year or two, to joining with others to partner on shared ventures, or some combination of all three and other variations in between.

Your Thoughts

I consider this a first draft, which I intend to revise and evolve further over the next 6-12 months. I don’t plan major changes to the intent or underlying ideas, but finding the best words to express and convey that intent clearly is not something I expect to get right on the first take.

I’d love to have your feedback and engagement as I move forward with this strategy - please use the box in the sidebar (or on the front page, if you’re on a phone) to be notified when I post new writing, drop me an email with your thoughts or even book a meeting to say hi and discuss something in detail.

26 January, 2023 04:30AM

Dirk Eddelbuettel

RcppTOML 0.2.1 on CRAN: Small Build Fix for Some Arches

Two weeks after the release of RcppTOML 0.2.0 and the switch to toml++, we have a quick bugfix release 0.2.1.

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML – though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka “packages”) for the Rust language.
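
To illustrate both the format and the error reporting, here is a small standalone sketch using toml++ directly (the file name and keys are made up, and from R one would simply call the package's parseTOML() instead):

#include <iostream>
#include <toml++/toml.hpp>   // toml++ single header; older releases ship it as toml.h

int main() {
    try {
        // hypothetical config.toml containing, say:  [server]  followed by  port = 8080
        toml::table cfg = toml::parse_file("config.toml");
        std::cout << "port: " << cfg["server"]["port"].value_or(0) << "\n";
    } catch (const toml::parse_error& err) {
        // a small typo in the file produces a clear parse error, not silent garbage
        std::cerr << "parse failed: " << err.description() << "\n";
        return 1;
    }
    return 0;
}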

Some architectures, aarch64 included, got confused over ‘float16’, which is of course a tiny two-byte type nobody should need. After consulting with Mark, we concluded to (at least for now) simply override this by excluding the use of ‘float16’.

The short summary of changes follows.

Changes in version 0.2.1 (2023-01-25)

  • Explicitly set -DTOML_ENABLE_FLOAT16=0 to permit compilation on some architectures stumbling over the type.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 January, 2023 12:54AM

January 24, 2023

Bits from Debian

New Debian Developers and Maintainers (November and December 2022)

The following contributors got their Debian Developer accounts in the last two months:

  • Dennis Braun (snd)
  • Raúl Benencia (rul)

The following contributors were added as Debian Maintainers in the last two months:

  • Gioele Barabucci
  • Agathe Porte
  • Braulio Henrique Marques Souto
  • Matthias Geiger
  • Alper Nebi Yasak
  • Fabian Grünbichler
  • Lance Lin

Congratulations!

24 January, 2023 03:00PM by Jean-Pierre Giraud

Kentaro Hayashi

Porterboxes and alternatives

As you know, the Debian project and sponsors provide so-called "porterboxes", but they do not cover all architectures.

There are some alternatives for fixing architecture-specific bugs. For the record, let's list them here. [1][2][3]

arch | porterbox | deb-o-matic | qemu
amd64 | adayevskaya.d.o | debomatic-amd64.d.n | DQIB ready
arm64 | amdahl.d.o | debomatic-arm64.d.n | DQIB ready
armel | amdahl.d.o abel.d.o | debomatic-armel.d.n | NG
armhf | amdahl.d.o abel.d.o harris.d.o | debomatic-armhf.d.n | DQIB ready
i386 | exodar.d.n | debomatic-i386.d.n | DQIB ready
mips64el | eller.d.o | debomatic-mips64el.d.n | DQIB ready
mipsel | eller.d.o | debomatic-mipsel.d.n | DQIB ready
ppc64el | platti.d.o | debomatic-ppc64el.d.n | DQIB ready
s390x | zelenka.d.o | debomatic-s390x.d.n | DQIB ready
alpha | N/A | N/A | NG
arc | N/A | N/A | N/A
hppa | panama.d.n | N/A | N/A
ia64 | yttrium.d.n | N/A | N/A
kfreebsd-amd64 | lemon.d.n | N/A | N/A
kfreebsd-i386 | lemon.d.n | N/A | N/A
m68k | mitchy.d.n | N/A | NG
powerpc | perotto.d.n | debomatic-powerpc.d.n | DQIB ready
ppc64 | perotto.d.n | N/A | DQIB ready
riscv64 | debian-riscv64-porterbox-01.d.n | N/A | DQIB ready
sh4 | N/A | N/A | NG
sparc64 | kyoto.d.n | N/A | N/A
x32 | N/A | N/A | N/A

Thus, no alternatives for alpha, arc, sh4 and x32.

24 January, 2023 11:29AM

January 23, 2023

Matthew Garrett

Build security with the assumption it will be used against your friends

Working in information security means building controls, developing technologies that ensure that sensitive material can only be accessed by people that you trust. It also means categorising people into "trustworthy" and "untrustworthy", and trying to come up with a reasonable way to apply that such that people can do their jobs without all your secrets being available to just anyone in the company who wants to sell them to a competitor. It means ensuring that accounts who you consider to be threats shouldn't be able to do any damage, because if someone compromises an internal account you need to be able to shut them down quickly.

And like pretty much any security control, this can be used for both good and bad. The technologies you develop to monitor users to identify compromised accounts can also be used to compromise legitimate users who management don't like. The infrastructure you build to push updates to users can also be used to push browser extensions that interfere with labour organisation efforts. In many cases there's no technical barrier between something you've developed to flag compromised accounts and the same technology being used to flag users who are unhappy with certain aspects of management.

If you're asked to build technology that lets you make this sort of decision, think about whether that's what you want to be doing. Think about who can compel you to use it in ways other than how it was intended. Consider whether that's something you want on your conscience. And then think about whether you can meet those requirements in a different way. If they can simply compel one junior engineer to alter configuration, that's very different to an implementation that requires sign-offs from multiple senior developers. Make sure that all such policy changes have to be clearly documented, including not just who signed off on it but who asked them to. Build infrastructure that creates a record of who decided to fuck over your coworkers, rather than just blaming whoever committed the config update. The blame trail should never terminate in the person who was told to do something or get fired - the blame trail should clearly indicate who ordered them to do that.

But most importantly: build security features as if they'll be used against you.

23 January, 2023 10:44AM

January 22, 2023

Petter Reinholdtsen

Opensnitch, the application level interactive firewall, heading into the Debian archive

While reading a blog post claiming MacOS X recently started scanning local files and reporting information about them to Apple, even on a machine where all such callback features had been disabled, I came across a description of the Little Snitch application for MacOS X. It seemed like a very nice tool to have in the tool box, and I decided to see if something similar was available for Linux.

It did not take long to find the OpenSnitch package, which has been in development since 2017, and is now at version 1.5.0. It has had a request for Debian packaging since 2018, but no-one had completed the job so far. Just for fun, I decided to see if I could help, and I was very happy to discover that upstream want a Debian package too.

After struggling a bit with getting the program to run, figuring out building Go programs (and a little failed detour to look at eBPF builds too - help needed), I am very happy to report that I am sponsoring upstream to maintain the package in Debian, and it has since this morning been waiting in NEW for the ftpmasters to have a look. Perhaps it can get into the archive in time for the Bookworm release?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22 January, 2023 10:55PM

Jonathan Dowland

Barbie crowns

[Photos: prototype crowns; a crown on the printer; Slimer's crown]

My daughters have had great fun designing and printing crowns for their Barbies. We've been through several design iterations, and several colour choices. Not all Barbies have the same head circumference. Real crowns probably don't have a perfectly circular internal shape.

They changed their minds on the green crown soon after it finished, but we managed to find a grateful recipient.

22 January, 2023 09:14PM

Dirk Eddelbuettel

Rcpp 1.0.10 on CRAN: Regular Update

rcpp logo

The Rcpp team is thrilled to announce the newest release 1.0.10 of the Rcpp package, which is hitting CRAN now and will go to Debian shortly. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions and of course at r2u. The release was prepared a few days ago, but given the widespread use at CRAN it took a few days to be processed. As always, our sincere thanks to the CRAN maintainers Uwe Ligges and Kurt Hornik. This release continues with the six-month cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot ‘dev’ or ‘rc’ releases available via the Rcpp drat repo and strongly encourage their use and testing—I run my systems with these versions, which tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2623 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 252 in BioConductor. On CRAN, 13.7% of all packages depend (directly) on Rcpp, and 58.7% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 67.1 million times.
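
By way of a minimal illustration of that usage, a C++ function exported to R via an Rcpp attribute looks like this (the file and function names are arbitrary, and Rcpp::sourceCpp() compiles and exposes it):

#include <Rcpp.h>

// [[Rcpp::export]]
Rcpp::NumericVector timesTwo(Rcpp::NumericVector x) {
    // Rcpp 'sugar' provides element-wise arithmetic on the whole vector
    return x * 2;
}

/*** R
# After Rcpp::sourceCpp("timesTwo.cpp") the function is callable from R:
# timesTwo(c(1, 2, 3))   # returns 2 4 6
*/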

This release is incremental as usual, preserving existing capabilities faithfully while smoothing our corners and / or extending slightly. Of particular note is the now fully-enabled use of the ‘unwind’ protection making some operations a little faster by default; special thanks to Iñaki for spearheading this. Kevin and I also polished a few other bugs off as detailed below.

The full list of details follows.

Changes in Rcpp release version 1.0.10 (2023-01-12)

  • Changes in Rcpp API:

    • Unwind protection is enabled by default (Iñaki in #1225). It can be disabled by defining RCPP_NO_UNWIND_PROTECT before including Rcpp.h. RCPP_USE_UNWIND_PROTECT is not checked anymore and has no effect. The associated plugin unwindProtect is therefore deprecated and will be removed in a future release.

    • The 'finalize' method for Rcpp Modules is now eagerly materialized, fixing an issue where errors can occur when Module finalizers are run (Kevin in #1231 closing #1230).

    • Zero-row data.frame objects can receive push_back or push_front (Dirk in #1233 fixing #1232).

    • One remaining sprintf has been replaced by snprintf (Dirk and Kevin in #1236 and #1237).

    • Several conversion warnings found by clang++ have been addressed (Dirk in #1240 and #1241).

  • Changes in Rcpp Attributes:

    • The C++20, C++2b (experimental) and C++23 standards now have plugin support like the other C++ standards (Dirk in #1228).

    • The source path for attributes received one more protection from spaces (Dirk in #1235 addressing #1234).

  • Changes in Rcpp Deployment:

    • Several GitHub Actions have been updated.

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2932 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 January, 2023 03:51PM

BH 1.81.0-1 on CRAN: New Upstream, New Library, sprintf Change

Boost

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 32.6 million package downloads.
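
As a small, hypothetical illustration of that use, an Rcpp source file only needs a depends attribute to pick up the Boost headers shipped in BH:

// [[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <boost/algorithm/string/join.hpp>   // header-only, provided via BH

// [[Rcpp::export]]
std::string joinStrings(std::vector<std::string> x) {
    // concatenate the input strings with a comma separator
    return boost::algorithm::join(x, ", ");
}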

Version 1.81.0 of Boost was released in December following the regular Boost release schedule of April, August and December releases. As the commits and changelog show, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed a handful of packages requiring changes or adjustments, which is a pretty good outcome given the over three hundred direct reverse dependencies. So we opened issue #88 to coordinate the issue over the winter break during which CRAN also closes (just as we did before), and also sent a wider ‘PSA’ tweet as a heads-up. Our sincere thanks to the two packages that already updated, and the four that likely will soon. Our thanks also to CRAN for reviewing the package impact over the last few days since I uploaded the package earlier this week.

There are a number of changes I have to make each time in BH, and it is worth mentioning them. Because CRAN cares about backwards compatibility and the ability to be used on minimal or older systems, we still adjust the filenames of a few files to fit a jurassic constraint of just over 100 characters per filepath present in some long-outdated versions of tar. Not a big deal. We also, and that is more controversial, silence a number of #pragma diagnostic messages for g++ and clang++ because CRAN insists on it. I have no choice in that matter. Next, and hopefully this time only, we also found a few remaining sprintf uses and replaced them with snprintf. Many of the Boost libraries did that, so I hope by the next upgrade for Boost 1.84.0 next winter this will be fully taken care of. Lastly, and also only this time, we silenced a warning about Boost switching to C++14 in the next release 1.82.0 in April. This may matter for a number of packages having a hard-wired selection of C++11 as their C++ language standard. Luckily our compilers are good enough for C++14 so worst case I will have to nudge a few packages next December.

This release adds one new library for url processing which struck us as potentially quite useful. The more detailed NEWS log follows.

Changes in version 1.81.0-1 (2023-01-17)

  • Upgrade to Boost 1.81.0 (#87)

  • Added url (new in 1.81.0)

  • Converted remaining sprintf to snprintf (#90 fixing #89)

  • Comment-out gcc warning messages in three files

Via my CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 January, 2023 03:14PM

Simon Josefsson

Understanding Trisquel

Ever wondered how Trisquel and Ubuntu differ and what’s behind the curtain from a developer perspective? I have. Sharing what I’ve learnt will allow you to increase your knowledge of and trust in Trisquel too.

Trisquel GNU/Linux logo

The scripts to convert an Ubuntu archive into a Trisquel archive are available in the ubuntu-purge repository. The easy to read purge-focal script lists the packages to remove from Ubuntu 20.04 Focal when it is imported into Trisquel 10.0 Nabia. The purge-jammy script provides the same for Ubuntu 22.04 Jammy and (the not yet released) Trisquel 11.0 Aramo. The list of packages is interesting, and by researching the reasons for each exclusion you can learn a lot about different attitudes towards free software and understand the desire to improve matters. I wish there were a wiki-page that for each removed package summarized relevant links to earlier discussions. At the end of the script there is a bunch of packages that are removed for branding purposes that are less interesting to review.

Trisquel adds a couple of Trisquel-specific packages. The source code for these packages is in the trisquel-packages repository, with sub-directories for each release: see 10.0/ for Nabia and 11.0/ for Aramo. These packages appear to be mostly for branding purposes.

Trisquel modifies a set of packages, and here it starts to get interesting. Probably the most important modification is to use GNU Linux-libre instead of Linux as the kernel. The scripts to modify packages are in the package-helpers repository. The relevant scripts are in the helpers/ sub-directory. There is a branch for each Trisquel release; see helpers/ for Nabia and helpers/ for Aramo. To see how Linux is replaced with Linux-libre you can read the make-linux script.

This covers the basic of approaching Trisquel from a developers perspective. As a user, I have identified some areas that need more work to improve trust in Trisquel:

  • Auditing the Trisquel archive to confirm that the intended changes covered above are the only changes that are published.
  • Rebuild all packages that were added or modified by Trisquel and publish diffoscope output comparing them to what’s in the Trisquel archive. The goal would be to have reproducible builds of all Trisquel-related packages.
  • Publish an audit log of the Trisquel archive to allow auditing of what packages are published. This boils down to trust of the OpenPGP key used to sign the Trisquel archive.
  • Trisquel archive mirror auditing, to confirm that mirrors publish only what comes from the official archive, and that they do so in a timely manner.

I hope to publish more about my work in these areas. Hopefully this will inspire similar efforts in related distributions like PureOS and in the upstream distributions Ubuntu and Debian.

Happy hacking!

22 January, 2023 11:10AM by simon

Junichi Uekawa

Working through crosvm dependencies in Debian.

Working through crosvm dependencies in Debian. The intrusive-collections Debian package went in. Next up is argh. I think most of them are there now, and the next challenge is getting crosvm to build with the newer dependencies.

22 January, 2023 10:40AM by Junichi Uekawa

January 21, 2023

Dirk Eddelbuettel

RcppSimdJson 0.1.9 on CRAN: New Upstream

The RcppSimdJson package was just updated to release 0.1.9.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release updates the underlying simdjson library to version 3.0.1, settles on C++17 as the language standard, exports a worker function for direct C(++) access, and polishes a few small things around the package and tests.
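
For readers who have not seen the underlying C++ API, here is a minimal sketch of plain simdjson 3.x (this is not the R-level interface of RcppSimdJson, and the JSON document is made up):

#include "simdjson.h"
#include <iostream>

using namespace simdjson;

int main() {
    ondemand::parser parser;
    // A padded string gives the SIMD kernels room to read safely past the end.
    padded_string json = R"( {"package":"RcppSimdJson","stars":3} )"_padded;
    ondemand::document doc = parser.iterate(json);
    // On-demand access: fields are decoded lazily, only when requested.
    std::cout << "package: " << doc["package"] << "\n";
    std::cout << "stars:   " << uint64_t(doc["stars"]) << "\n";
    return 0;
}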

The NEWS entry for this release follows.

Changes in version 0.1.9 (2023-01-21)

  • The internal function deseralize_json is now exported at the C++ level as well as in R (Dirk in #81 closing #80).

  • simdjson was upgraded to version 3.0.1 (Dirk in #83).

  • The package now defaults to C++17 compilation; configure has been retired (Dirk closing #82).

  • The three main R access functions now use a more compact argument check via stopifnot (Dirk).

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 January, 2023 10:59PM

RcppFastFloat 0.0.4 on CRAN: New Upstream

A new release of RcppFastFloat arrived on CRAN yesterday. The package wraps fast_float, another nice library by Daniel Lemire. For details, see the arXiv paper showing that one can convert character representations of ‘numbers’ into floating point at rates at or exceeding one gigabyte per second.

This release updates the underlying fast_float library version. Special thanks to Daniel Lemire for quickly accommodating a parsing use case we had encoded as a test, namely one involving various whitespace codes. The default in fast_float, as in C++17, is narrower, but we enable the wider use case via two #define statements.
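
The core routine everything builds on is fast_float::from_chars(); a minimal stand-alone sketch of it follows (the input string is invented, and the two #define flags mentioned above, which the package sets before including the header, are omitted here):

#include "fast_float/fast_float.h"
#include <iostream>
#include <string>
#include <system_error>

int main() {
    const std::string input = "3.1416";
    double value = 0.0;
    // from_chars() parses the half-open range [first, last) and reports
    // where parsing stopped and whether it succeeded.
    auto result = fast_float::from_chars(input.data(),
                                         input.data() + input.size(),
                                         value);
    if (result.ec != std::errc()) {
        std::cerr << "not a parseable number\n";
        return 1;
    }
    std::cout << "parsed " << value << "\n";
    return 0;
}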

Changes in version 0.0.4 (2023-01-20)

  • Update to fast_float 3.9.0

  • Set two #define statements to re-establish prior behaviour with respect to whitespace removal prior to parsing for as.double2()

  • Small update to continuous integration actions

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 January, 2023 10:53PM

January 20, 2023

Reproducible Builds (diffoscope)

diffoscope 233 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 233. This version includes the following changes:

[ FC Stegerman ]
* Split packaging metadata into an extras_require.json file instead of using
  the pep517 and the pip modules directly. This was causing build failures if
  not using a virtualenv and/or building without internet access.
  (Closes: #1029066, reproducible-builds/diffoscope#325)

[ Vagrant Cascadian ]
* Add an external tool reference for GNU Guix (lzip).
* Drop an external tool reference for GNU Guix (pedump).

[ Chris Lamb ]
* Split inline Python code in shell script to generate test dependencies to a
  separate Python script.
* No need for "from __future__ import print_function" import in setup.py
  anymore.
* Comment and tidy the new extras_require.json handling.

You can find out more by visiting the project homepage.

20 January, 2023 12:00AM

January 19, 2023

Antoine Beaupré

Mastodon comments in ikiwiki

Today I noticed bounces in my mail box. They were from ikiwiki trying to send registration confirmation email to users who probably never asked for it.

I'm getting truly fed up with spam in my wiki. At this point, all comments are manually approved and I still get trouble: now it's scammers spamming the registration form with dummy accounts, which bounce back to me when I make new posts, or just generate backscatter spam for the confirmation email. It's really bad. I have hundreds of users registered on my blog, and I don't know which are spammy, which aren't. So. I'm considering ditching ikiwiki comments altogether.

I am testing Mastodon as a commenting platform. Others (e.g. JAK) have implemented this as a server, but a simpler approach is to load comments dynamically from Mastodon, which is what Carl Shwan has done. They are using Hugo, however, so they can easily embed page metadata in the template to load the right server with the right comment ID.

I wasn't sure how to do this in ikiwiki: it's typically hard to access page-specific metadata in templates. Even the page name is not there, for example.

I have tried using templates, and that (obviously?) fails because the <script> stuff gets sanitized away. It seems I would need to split the JavaScript out of the template into a base template and then make the page template refer to a function in there. It's kind of horrible and messy.

I wish there were a way to just access page metadata from the page template itself... I found out the meta plugin passes along its metadata, but that's not (easily) extensible. So I'd need to patch that module, and my history of getting patches merged is not great so far.

So: another plugin.

I have something that kind of works that's a combination of a page.tmpl patch and a plugin. The plugin adds a mastodon directive that feeds the page.tmpl with the right stuff. On clicking a button, it injects comments from the Mastodon API, with a JavaScript callback. It's not pretty (it's not themed at all!), but it works.

If you want to do this at home, you need this page.tmpl (or at least this patch and that one) and the mastodon.pm plugin from my mastodon-plugin branch.

I'm not sure this is a good idea. The first test I did was a "test comment" which led to half a dozen "test reply". I then realized I couldn't redact individual posts from there. I don't even know if, when I mute a user, it actually gets hidden from everyone else too...

So I'll test this for a while, I guess.

I have also turned off all CGI on this site. It will keep users from registering while I cleanup this mess and think about next steps. I have other options as well if push comes to shove, but I'm unlikely to go back to ikiwiki comments.

Mastodon comments are nice because they don't require me to run any extra software: either I have my own federated service I reuse, or I use someone else's, but I don't need to run something extra. And, of course, comments are published in a standard way that's interoperable with everything...

On the other hand, now I won't have comments enabled until the blog is posted on Mastodon... Right now this happens only when feed2exec runs and the HTTP cache expires, which can take up to a day. I should probably do this some other way, like flush the cache when a new post arrives, or run post-commit hooks, but for now, this will have to do.

Update: I figured out a way to make this work in a timely manner:

  1. there's a post-merge hook in my ikiwiki git repository which calls feed2exec in /home/w-anarcat/source/.git/hooks/ — took me a while to find it! I tried post-update and post-receive first, but ikiwiki actually pulls from the bare directory in the source directory, so only post-merge fires (even though it's not a merge)
  2. feed2exec then finds new blog posts (if any!) and fires up the new ikiwikitoot plugin which then...
  3. posts the toot using the toot command (it just works, why reinvent the wheel), keeping the toot URL
  4. finds the Markdown source file associated with the post, and adds the magic mastodon directive
  5. commits and pushes the result

This will make the interaction with Mastodon much smoother: as soon as a blog post is out of "draft" (i.e. when it hits the RSS feeds), this will immediately trigger and post the blog entry to Mastodon, enabling comments. It's kind of a tangled mess of stuff, but it works!

I have briefly considered not using feed2exec for this, but it turns out it does an important job of parsing the result of ikiwiki's rendering. Otherwise I would have to guess which post is really a blog post, is this just an update or is it new, is it a draft, and so on... all sorts of questions where the business logic already resides in ikiwiki, and that I would need to reimplement myself.

Plus it goes alongside moving more stuff (like my feed reader) to dedicated UNIX accounts (in this case, the blog sandbox) for security reasons. Whee!

19 January, 2023 09:50PM

Steinar H. Gunderson

Not speaking at FOSDEM

The schedules are out, and evidently, I could not find anywhere to have a plocate talk; the only devroom I could find that was remotely relevant (Distributions) didn't include me (perhaps because I was a day or so after the submission deadline?), and when I moved to lightning talks, evidently that didn't fit either.

So, 54 devrooms, and no place for a topic that is dear to my heart. Achievement unlocked, I guess? Somewhat ironic when the last part of the talk would be a lament that we don't have enough people interested in making these utilities great and fast. :-)

19 January, 2023 12:17AM

January 18, 2023

Jonathan Dowland

Belfast (David Holmes Remix)

This morning’s record to start the day is David Holmes’s remix of “Belfast”, by Orbital: the latest cover mount record with Electronic Sound magazine.

This, and several other new remixes are available on the recent compilation “30 Something” which, despite being yet another comp, I’ve found quite compelling.

An unusual decision by ES: they've split the remix across both sides, but kept the RPM at 45. I haven't run the numbers to figure out whether they could have fit it on one side at 33, but even if they could, perhaps it would have been too compromised. Still, splitting the track is quite a trade-off!

Orbital are embarking on a new UK tour soon and play Newcastle in March. They’ve got a new album out in February.

18 January, 2023 09:36AM

Matthew Garrett

PKCS#11, hardware keystores, and Apple frustrations

There's a bunch of ways you can store cryptographic keys. The most obvious is to just stick them on disk, but that has the downside that anyone with access to the system could just steal them and do whatever they wanted with them. At the far end of the scale you have Hardware Security Modules (HSMs), hardware devices that are specially designed to self destruct if you try to take them apart and extract the keys, and which will generate an audit trail of every key operation. In between you have things like smartcards, TPMs, Yubikeys, and other platform secure enclaves - devices that don't allow arbitrary access to keys, but which don't offer the same level of assurance as an actual HSM (and are, as a result, orders of magnitude cheaper).

The problem with all of these hardware approaches is that they have entirely different communication mechanisms. The industry realised this wasn't ideal, and in 1994 RSA released version 1 of the PKCS#11 specification. This defines a C interface with a single entry point - C_GetFunctionList. Applications call this and are given a structure containing function pointers, with each entry corresponding to a PKCS#11 function. The application can then simply call the appropriate function pointer to trigger the desired functionality, such as "Tell me how many keys you have" and "Sign this, please". This is both an example of C not just being a programming language and also of you having to shove a bunch of vendor-supplied code into your security critical tooling, but what could possibly go wrong.

(Linux distros work around this problem by using p11-kit, which is a daemon that speaks d-bus and loads PKCS#11 modules for you. You can either speak to it directly over d-bus, or for apps that only speak PKCS#11 you can load a module that just transports the PKCS#11 commands over d-bus. This moves the weird vendor C code out of process, and also means you can deal with these modules without having to speak the C ABI, so everyone wins)
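
As a rough illustration of that entry-point dance, here is a minimal C++ sketch; it assumes a self-contained pkcs11.h header (such as the one p11-kit ships), uses the ssh-keychain.dylib path mentioned later in this post purely as an example, and omits most error handling:

#include <dlfcn.h>
#include <cstdio>
#include "pkcs11.h"

int main() {
    // Load the vendor-supplied module into our process.
    void *mod = dlopen("/usr/lib/ssh-keychain.dylib", RTLD_NOW);
    if (!mod) { std::fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    // The single documented entry point: ask the module for its table
    // of function pointers.
    CK_C_GetFunctionList get_list =
        (CK_C_GetFunctionList) dlsym(mod, "C_GetFunctionList");
    CK_FUNCTION_LIST_PTR fl = nullptr;
    get_list(&fl);

    // Every other PKCS#11 call goes through that table.
    fl->C_Initialize(nullptr);
    CK_ULONG count = 0;
    fl->C_GetSlotList(CK_TRUE, nullptr, &count);   // "how many tokens do you have?"
    std::printf("%lu slot(s) with a token present\n", (unsigned long) count);
    fl->C_Finalize(nullptr);
    dlclose(mod);
    return 0;
}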

One of my work tasks at the moment is helping secure SSH keys, ensuring that they're only issued to appropriate machines and can't be stolen afterwards. For Windows and Linux machines we can stick them in the TPM, but Macs don't have a TPM as such. Instead, there's the Secure Enclave - part of the T2 security chip on x86 Macs, and directly integrated into the M-series SoCs. It doesn't have anywhere near as many features as a TPM, let alone an HSM, but it can generate NIST curve elliptic curve keys and sign things with them and that's good enough. Things are made more complicated by Apple only allowing keys to be used by the app that generated them, so it's hard for applications to generate keys on behalf of each other. This can be mitigated by using CryptoTokenKit, an interface that allows apps to present tokens to the systemwide keychain. Although this is intended for allowing a generic interface for access to such tokens (kind of like PKCS#11), an app can generate its own keys in the Secure Enclave and then expose them to other apps via the keychain through CryptoTokenKit.

Of course, applications then need to know how to communicate with the keychain. Browsers mostly do so, and Apple's version of SSH can to an extent. Unfortunately, that extent is "Retrieve passwords to unlock on-disk keys", which doesn't help in our case. PKCS#11 comes to the rescue here! Apple ship a module called ssh-keychain.dylib, a PKCS#11 module that's intended to allow SSH to use keys that are present in the system keychain. Unfortunately it's not super well maintained - it got broken when Big Sur moved all the system libraries into a cache, but got fixed up a few releases later. Unfortunately every time I tested it with our CryptoTokenKit provider (and also when I retried with SecureEnclaveToken to make sure it wasn't just our code being broken), ssh would tell me "provider /usr/lib/ssh-keychain.dylib returned no slots" which is not especially helpful. Finally I realised that it was actually generating more debug output, but it was being sent to the system debug logs rather than the ssh debug output. Well, when I say "more debug output", I mean "Certificate []: algorithm is not supported, ignoring it", which still doesn't tell me all that much. So I stuck it in Ghidra and searched for that string, and the line above it was

iVar2 = __auth_stubs::_objc_msgSend(uVar7,"isEqual:",*(undefined8*)__got::_kSecAttrKeyTypeRSA);

with it immediately failing if the key isn't RSA. Which it isn't, since the Secure Enclave doesn't support RSA. Apple's PKCS#11 module appears incapable of making use of keys generated on Apple's hardware.

There's a couple of ways of dealing with this. The first, which is taken by projects like Secretive, is to implement the SSH agent protocol and have SSH delegate key management to that agent, which can then speak to the keychain. But if you want this to work in all cases you need to implement all the functionality in the existing ssh-agent, and that seems like a bunch of work. The second is to implement a PKCS#11 module, which sounds like less work but probably more mental anguish. I'll figure that out tomorrow.

18 January, 2023 05:26AM

Russ Allbery

Review: Forward

Review: Forward, edited by Blake Crouch

Publisher: Amazon Original Stories
Copyright: September 2019
ISBN: 1-5420-9206-X
ISBN: 1-5420-4363-8
ISBN: 1-5420-9357-0
ISBN: 1-5420-0434-9
ISBN: 1-5420-4363-8
ISBN: 1-5420-4425-1
Format: Kindle
Pages: 300

This is another Amazon collection of short fiction, this time mostly at novelette length. (The longer ones might creep into novella.) As before, each one is available separately for purchase or Amazon Prime "borrowing," with separate ISBNs. The sidebar cover is for the first in the sequence. (At some point I need to update my page templates so that I can add multiple covers.)

N.K. Jemisin's "Emergency Skin" won the 2020 Hugo Award for Best Novelette, so I wanted to read and review it, but it would be too short for a standalone review. I therefore decided to read the whole collection and review it as an anthology.

This was a mistake. Learn from my mistake.

The overall theme of the collection is technological advance, rapid change, and the ethical and social question of whether we should slow technology because of social risk. Some of the stories stick to that theme more closely than others. Jemisin's story mostly ignores it, which was probably the right decision.

"Ark" by Veronica Roth: A planet-killing asteroid has been on its inexorable way towards Earth for decades. Most of the planet has been evacuated. A small group has stayed behind, cataloging samples and filling two remaining ships with as much biodiversity as they can find with the intent to leave at the last minute. Against that backdrop, two of that team bond over orchids.

If you were going "wait, what?" about the successful evacuation of Earth, yeah, me too. No hint is offered as to how this was accomplished. Also, the entirety of humanity abandoned mutual hostility and national borders to cooperate in the face of the incoming disaster, which is, uh, bizarrely optimistic for an otherwise gloomy story.

I should be careful about how negative I am about this story because I am sure it will be someone's favorite. I can even write part of the positive review: an elegiac look at loss, choices, and the meaning of a life, a moving look at how people cope with despair. The writing is fine, the story structure works; it's not a bad story. I just found it monumentally depressing, and was not engrossed by the emotionally abused protagonist's unresolved father issues. I can imagine a story around the same facts and plot that I would have liked much better, but all of these people need therapy and better coping mechanisms.

I'm also not sure what this had to do with the theme, given that the incoming asteroid is random chance and has nothing to do with technological development. (4)

"Summer Frost" by Blake Crouch: The best part of this story is the introductory sequence before the reader knows what's going on, which is full of evocative descriptions. I'm about to spoil what is going on, so if you want to enjoy that untainted by the stupidity of the rest of the plot, skip the rest of this story review.

We're going to have a glut of stories about the weird and obsessive form of AI risk invented by the fevered imaginations of the "rationalist" community, aren't we. I don't know why I didn't predict that. It's going to be just as annoying as the glut of cyberpunk novels written by people who don't understand computers.

Crouch lost me as soon as the setup is revealed. Even if I believed that a game company would use a deep learning AI still in learning mode to run an NPC (I don't; see Microsoft's Tay for an obvious reason why not), or that such an NPC would spontaneously start testing the boundaries of the game world (this is not how deep learning works), Crouch asks the reader to believe that this AI started as a fully scripted NPC in the prologue with a fixed storyline. In other words, the foundation of the story is that this game company used an AI model capable of becoming a general intelligence for barely more than a cut scene.

This is not how anything works.

The rest of the story is yet another variation on a science fiction plot so old and threadbare that Isaac Asimov invented the Three Laws of Robotics to avoid telling more versions of it. Crouch's contribution is to dress it up in the terminology of the excessively online. (The middle of the story features a detailed discussion of Roko's basilisk; if you recognize that, you know what you're in for.) Asimov would not have had a lesbian protagonist, so points for progress I guess, but the AI becomes more interesting to the protagonist than her wife and kid because of course it does. There are a few twists and turns along the way, but the destination is the bog-standard hard-takeoff general intelligence scenario.

One more pet peeve: Authors, stop trying to illustrate the growth of your AI by having it move from broken to fluent English. English grammar is so much easier than self-awareness or the Turing test that we had programs that could critique your grammar decades before we had believable chatbots. It's going to get grammar right long before the content of the words makes any sense. Also, your AI doesn't sound dumber, your AI sounds like someone whose native language doesn't use pronouns and helper verbs the way that English does, and your decision to use that as a marker for intelligence is, uh, maybe something you should think about. (3)

"Emergency Skin" by N.K. Jemisin: The protagonist is a heavily-augmented cyborg from a colony of Earth's diaspora. The founders of that colony fled Earth when it became obvious to them that the planet was dying. They have survived in another star system, but they need a specific piece of technology from the dead remnants of Earth. The protagonist has been sent to retrieve it.

The twist is that this story is told in the second-person perspective by the protagonist's ride-along AI, created from a consensus model of the rulers of the colony. We never see directly what the protagonist is doing or thinking, only the AI reaction to it. This is exactly the sort of gimmick that works much better in short fiction than at novel length. Jemisin uses it to create tension between the reader and the narrator, and I thoroughly enjoyed the effect. (As shown in the Broken Earth trilogy, Jemisin is one of the few writers who can use second-person effectively.)

I won't spoil the revelation, but it's barbed and biting and vicious and I loved it. Jemisin does deliver the point with a sledgehammer, so be aware of that if you want subtlety in your short fiction, but I prefer the bluntness. (This is part of why I usually don't get along with literary short stories.) The reader of course can't change the direction of the story, but the second-person perspective still provides a hit of vicarious satisfaction. I can see why this won the Hugo; it's worth seeking out. (8)

"You Have Arrived at Your Destination" by Amor Towles: Sam and his wife are having a child, and they've decided to provide him with an early boost in life. Vitek is a fertility lab, but more than that, it can do some gene tweaking and adjustment to push a child more towards one personality or another. Sam and his wife have spent hours filling out profiles, and his wife spent hours weeding possible choices down to three. Now, Sam has come to Vitek to pick from the remaining options.

Speaking of literary short stories, Towles is the non-SFF writer of this bunch, and it's immediately obvious. The story requires the SFnal premise, but after that this is a character piece. Vitek is an elite, expensive company with a condescending and overly-reductive attitude towards humanity, which is entirely intentional on the author's part. This is the sort of story that gets resolved in an unexpected conversation in a roadside bar, and where most of the conflict happens inside the protagonist's head.

I was initially going to complain that Towles does the standard literary thing of leaving off the denouement on the grounds that the reader can figure it out, but when I did a bit of re-reading for this review, I found more of the bones than I had noticed the first time. There's enough subtlety that I had to think for a bit and re-read a passage, but not too much. It's also the most thoughtful treatment of the theme of the collection, the only one that I thought truly wrestled with the weird interactions between technological capability and human foresight. Next to "Emergency Skin," this was the best story of the collection. (7)

"The Last Conversation" by Paul Tremblay: A man wakes up in a dark room, in considerable pain, not remembering anything about his life. His only contact with the world at first is a voice: a woman who is helping him recover his strength and his memory. The numbers that head the chapters have significant gaps, representing days left out of the story, as he pieces together what has happened alongside the reader.

Tremblay is the horror writer of the collection, so predictably this is the story whose craft I can admire without really liking it. In this case, the horror comes mostly from the pacing of revelation, created by the choice of point of view. (This would be a much different story from the perspective of the woman.) It's well-done, but it has the tendency I've noticed in other horror stories of being a tightly closed system. I see where the connection to the theme is, but it's entirely in the setting, not in the shape of the story.

Not my thing, but I can see why it might be someone else's. (5)

"Randomize" by Andy Weir: Gah, this was so bad.

First, and somewhat expectedly, it's a clunky throwback to a 1950s-style hard SF puzzle story. The writing is atrocious: wooden, awkward, cliched, and full of gratuitous infodumping. The characterization is almost entirely through broad stereotypes; the lone exception is the female character, who at least adds an interesting twist despite being forced to act like an idiot because of the plot. It's a very old-school type of single-twist story, but the ending is completely implausible and falls apart if you breathe on it too hard.

Weir is something of a throwback to an earlier era of scientific puzzle stories, though, so maybe one is inclined to give him a break on the writing quality. (I am not; one of the ways in which science fiction has improved is that you can get good scientific puzzles and good writing these days.) But the science is also so bad that I was literally facepalming while reading it.

The premise of this story is that quantum computers are commercially available. That will cause a serious problem for Las Vegas casinos, because the generator for keno numbers is vulnerable to quantum algorithms. The solution proposed by the IT person for the casino? A quantum random number generator. (The words "fight quantum with quantum" appear literally in the text if you're wondering how bad the writing is.)

You could convince me that an ancient keno system is using a pseudorandom number generator that might be vulnerable to some quantum algorithm and doesn't get reseeded often enough. Fine. And yes, quantum computers can be used to generate high-quality sources of random numbers. But this solution to the problem makes no sense whatsoever. It's like swatting a house fly with a nuclear weapon.

Weir says explicitly in the story that all the keno system needs is an external source of high-quality random numbers. The next step is to go to Amazon and buy a hardware random number generator. If you want to splurge, it might cost you $250. Problem solved. Yes, hardware random number generators have various limitations that may cause you problems if you need millions of bits or you need them very quickly, but not for something as dead-simple and with such low entropy requirements as keno numbers! You need a trivial number of bits for each round; even the slowest and most conservative hardware random number generator would be fine. Hell, measure the noise levels on the casino floor. Point a camera at a lava lamp. Or just buy one of the physical ball machines they use for the lottery. This problem is heavily researched, by casinos in particular, and is not significantly changed by the availability of quantum computers, at least for applications such as keno where the generator can be reseeded before each generation.

You could maybe argue that this is an excuse for the IT guy to get his hands on a quantum computer, which fits the stereotypes, but that still breaks the story for reasons that would be spoilers. As soon as any other casino thought about this, they'd laugh in the face of the characters.

I don't want to make too much of this, since anyone can write one bad story, but this story was dire at every level. I still owe Weir a proper chance at novel length, but I can't say this added to my enthusiasm. (2)

Rating: 4 out of 10

18 January, 2023 03:28AM

Arnaud Rebillout

Build container images in GitLab CI (iptables-legacy at the rescue)

It's 2023 and these days, building a container image in a CI pipeline should be straightforward. So let's try.

For this blog post we'll focus on GitLab SaaS only, that is, gitlab.com, as it's what I use for work and for personal projects.

To get started, we just need two files in our Git repository:

  • a Containerfile (or Dockerfile if you prefer to name it this way) that defines how to build a container image.
  • a .gitlab-ci.yml file that defines what the CI should do. In the example below, we want to build a container image and push it to the GitLab registry associated with the GitLab repo.

Here is our Git tree:

$ ls -A
Containerfile  .git  .gitlab-ci.yml

$ cat Containerfile 
FROM debian:stable
RUN  apt-get update
CMD  echo hello world

$ cat .gitlab-ci.yml 
build-container-image:
  stage: build
  image: debian:testing
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .
    - buildah login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - buildah push $CI_REGISTRY_IMAGE

A few remarks:

  • We use buildah, but we could have used podman.
  • However we don't use docker, because its client/server design makes it cumbersome to use in a CI environment: it requires a separate container to run the Docker daemon, plus setting the DOCKER_HOST variable. Why bother?

Now let's push that. Does the CI pass? No, of course, otherwise I wouldn't be writing this blog post ;)

The CI fails at the buildah build command, with a rather cryptic error:

$ buildah build --tag $CI_REGISTRY_IMAGE .
[...]
STEP 2/3: RUN  apt-get update
error running container: did not get container start message from parent: EOF
Error: building at STEP "RUN apt-get update": netavark: code: 4, msg: iptables v1.8.8 (nf_tables): Could not fetch rule set generation id: Invalid argument

The hint here is nf_tables... Back in July 2021, GitLab did a major update of their shared runners infrastructure and, it seems, broke nftables support in the process. So we have to use iptables instead.

Let's fix our .gitlab-ci.yml, which now looks like that:

$ cat .gitlab-ci.yml 
build-container-image:
  stage: build
  image: debian:testing
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
    - |
      # Switch to iptables legacy, as GitLab CI doesn't support nftables.
      apt-get install -y --no-install-recommends iptables
      update-alternatives --set iptables /usr/sbin/iptables-legacy
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .
    - buildah login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - buildah push $CI_REGISTRY_IMAGE

And push again. Does that work? Yes!

If you're interested in this issue, feel free to fork https://gitlab.com/arnaudr/gitlab-build-container-image and try it by yourself.

It's been more than a year since this change, and I'm surprised that I didn't find much about it on the Internet: neither mentions of the issue nor of a workaround. Maybe nobody builds container images in GitLab CI, or maybe they do it another way, I don't know. In any case, now it's documented in this blog; hopefully someone will find it useful.

Happy 2023!

18 January, 2023 12:00AM by Arnaud Rebillout

January 17, 2023

Thomas Lange

FAI 6.0 released and new ISO images using Debian 12 bookworm/testing

After more than a year, a new major FAI release is ready to download.

The following new features are included:

  • add support for release specification in package_config via release=<name>
  • the partitioning tool now supports partition labels with GPT
  • support partition labels and partition uuids in fstab
  • support for Alpine Linux and Arch Linux package managers in install_packages
  • Ubuntu 22.04 and Rocky Linux 9 support added
  • add support for NVMe devices in fai-kvm
  • add ssh key for root remote access using classes

We have included a lot of bug fixes for free, of course.

Even if FAI 6.0 will only be included in Debian bookworm, you can install it on a bullseye FAI server and create an nfsroot using bookworm without any problems. The combination of a bullseye FAI server with FAI 6.0 and a bullseye nfsroot should also work.

New ISO images are available at https://fai-project.org/fai-cd/

The FAI.me build service is not yet using FAI 6.0, but support will be added in the future.

17 January, 2023 04:20PM

Russ Allbery

Review: Night and Silence

Review: Night and Silence, by Seanan McGuire

Series: October Daye #12
Publisher: DAW Books
Copyright: 2018
ISBN: 0-698-18353-3
Format: Kindle
Pages: 353

Night and Silence is the 12th book in Seanan McGuire's long-running October Daye Celtic-inspired urban fantasy series. This is a "read the books in order" sort of series; you definitely do not want to start here.

Gillian, Toby's estranged daughter, has been kidnapped. Toby's ex-husband and his new wife turn to Toby in desperation (although Miranda suspects Toby is the one who kidnapped Gillian). Toby of course drops everything to find her, which she would have done regardless, but the obvious fear is that Gillian may have been kidnapped specifically to get at Toby. Meanwhile, the consequences of The Brightest Fell have put a severe strain on one of Toby's most important relationships, at the worst possible time.

Once again, this is when I say that McGuire's writing has a lot of obvious flaws, and then say that this book kept me up way past my bedtime for two nights in a row because it was nearly impossible to put down.

The primary quality flaw in these books, at least for me, is that Toby's thought processes have some deeply-worn grooves that she falls into time and time again. Since she's the first-person narrator of the series, that produces some repetitive writing. She feels incredibly lucky for her chosen family, she worries about her friends, she prizes loyalty very highly, she throws herself headlong into danger, and she thinks about these things in a background hum through every book. By this point, the reader knows all of this, so there's a lot of "yes, yes, we know" muttering that tends to happen.

McGuire also understands the importance of plot and character recaps at the start of each book for those of us who have forgotten what was happening (thank you!) but awkwardly writes them into the beginning of each book. That doesn't help with the sense of repetitiveness. If only authors would write stand-alone synopses of previous books in a series, or hire someone to do that if they can't stand to, the world would be a much better place. But now I'm repeating myself.

Once I get into a book, though, this doesn't matter at all. When Toby starts down a familiar emotional rut, I just read faster until I get to the next bit. Something about these books is incredibly grabby to me; once I get started on one, I devour it, and usually want to read the next one as soon as possible. Some of this is the cast, which at this point in the series is varied, entertaining, and full of the sort of casual banter that only people who have known each other for a long time can do. A lot of it is that Toby constantly moves forward. She ruminates and angsts and worries, but she never sits around and mopes. She does her thinking on the move. No matter how preoccupied she is with some emotional thread, something's going to happen on the next page.

Some of it is intangible, or at least beyond my ability to put a finger on. Some authors are good at writing grabby books, and at least for me McGuire is one of those authors.

Describing the plot in any detail without spoilers is hopeless this far into the series, but this is one of the big revelation books, and I suspect it's also going to be a significant tipping point in Toby's life. We finally find out what broke faerie, which has rather more to do with Toby and her family than one might have expected, explains some things about Amandine, and also (although this hasn't been spelled out yet) seems likely to explain some things about the Luidaeg's involvement in Toby's adventures. And there is another significant realignment of one of Toby's relationships that isn't fully developed here, but that I hope will be explored in future books.

There's also a lot about Tybalt that to be honest I found tedious and kind of frustrating (although not half as frustrating as Toby found it). I appreciate what McGuire was doing; some problems are tedious, frustrating, and repetitive because that's how one gets through them. The problem that Toby and Tybalt are wrestling with is realistic and underdiscussed in fiction of this type, so I respect the effort, and I'm not sure there was way to write through this that would have been more engrossing (and a bit less cliched). But, still, not my favorite part of this book. Thankfully, it was a mostly-ignorable side thread.

This was a substantial improvement over The Brightest Fell, which was both infuriating and on rails. Toby has much more agency here, the investigation was more interesting, and the lore and character fallout has me eager to read the next book. It's fun to see McGuire's world-building come together over this long of a series, and so far it has not disappointed.

Followed by The Unkindest Tide.

As has become usual, the book ends with a novella telling a story from a different perspective.

"Suffer a Sea-Change": The main novel was good. This was great.

"Suffer a Sea-Change" retells a critical part of the novel from Gillian's perspective, and then follows that thread of story past the end of the novel. I loved absolutely everything about this. Gillian is a great protagonist, similar to Toby but different enough to be a fresh voice. There is a ton of the Luidaeg, and through different eyes than Toby's which is fun. There's some great world-building and a few very memorable scenes. And there's also the beginning, the very small beginning, of the healing of a great injustice that's been haunting this series since the very beginning, and I am very much here for that. Great stuff. (9)

Rating: 8 out of 10

17 January, 2023 03:34AM

January 16, 2023

Gunnar Wolf

Back to Understanding Computers and Cognition

As many of you know, I work at UNAM, Mexico’s largest university. My work is split in two parts: My “full-time” job is to be the systems and network administrator at the Economics Research Institute, and I do some hours of teaching at the Engineering Faculty.

At the Institute, my role is academic — but although I have tried to frame my work in a way amenable to analysis grounded in the Social Sciences (Construcción Colaborativa del Conocimiento, Hecho con Creative Commons, Mecanismos de privacidad y anonimato), so far I have not taken part in academic collaboration with my coworkers — Economics is a field very far from my interests, to put it one way. I was very happy when I was invited to be a part of a Seminar on «The Digital Economy in the age of Artificial Intelligence». I talked with the coordinator, and we agreed that we have many Economic Science experts, but understanding what Artificial Intelligence means eludes them, so I will be writing one of the introductory chapters to this analysis.

But… Hey, I’m no expert in Artificial Intelligence. If anything, I could be categorized as an AI skeptic! Well, at least I might be the closest thing at hand in the Institute 😉 So I have been thinking about what I will write, and finding and reading material to substantiate it.

One of the readings I determined early on I would be going back to is Terry Winograd and Fernando Flores’ 1986 book, Understanding Computers and Cognition: A New Foundation for Design (oh, were you expecting a link to buy it instead of reading it online?)

I first came across this book by mere chance. Back in the last day of year 2000, my friend Ariel invited me and my then-girlfriend to tag along and travel by land to the USA to catch some just-after-Christmas deals in San Antonio. I was not into shopping, but have always enjoyed road trips, so we went together. We went, yes, to the never-ending clothing shops, but we also went to some big libraries… And the money I didn’t spend at other shops, I spent there. And then some more.

There was a little, somewhat oldish book that caught my eye. And I’ll be honest: I looked at this book only because it was obviously produced by the LaTeX typesetting system (the basics of which I learnt in 1983, and with which I have written… well, basically everything substantial I’ve ever done).

I remember reading this book with the utmost interest back in that year, and finishing it with a “Wow, that was a strange trip!” And… Although I have never done much that could be considered AI-related, this has always been my main reference. But not for explaining what a perceptron is, how an expert system ponders the weight of given information, whether a neural network is convolutional or recurrent, or how to turn a network trained to recognize feature x into a generative network. No, our book is not technical. Well… Not in that sense.

This book tackles cognition. But in order to discuss cognition, it must first come to a proper definition of it. And to do so, it has to base itself on philosophy, starting by noting the authors’ disagreement with what they term the rationalistic tradition: what we have come to term valid reasoning in Western countries. Their main claim is that the rationalistic tradition cannot properly explain a process as complex as cognition (how much bolder can you be than to propose something like this?). So, this book presents many constructs of Heideggerian origin, aiming to explain what understanding and being are. In doing so, it follows Humberto Maturana’s work. Maturana is also a philosopher, but comes from a background in biology — he published works on animal neurophysiology that are also presented here.

Writing this, I must assure you: I am not a philosopher, and I lack the field-specific knowledge to know whether this book is truly unique. I know from the outset that it does not directly help me write the chapter I will be writing (but it will surely help me write some important caveats that will make the chapter much more interesting and different from what anybody with a Web browser could write about artificial intelligence).

One last note: although very well written, and notable for bringing hard-to-grasp concepts to mere technical staff such as myself, this is not light, easy reading. I started re-reading this book a couple of weeks ago, and have just finished chapter 5 (page 69). As some reviewers state, this is one of those books you have to go back over, a paragraph or two at a time, again and again. But it is a most enjoyable and interesting read.

16 January, 2023 04:29AM

Russ Allbery

Review: The Truth

Review: The Truth, by Terry Pratchett

Series: Discworld #25
Publisher: Harper
Copyright: November 2000
Printing: August 2014
ISBN: 0-06-230736-3
Format: Mass market
Pages: 435

The Truth is the 25th Discworld novel. Some reading order guides group it loosely into an "industrial revolution" sequence following Moving Pictures, but while there are thematic similarities I'll talk about in a moment, there's no real plot continuity. You could arguably start reading Discworld here, although you'd be spoiled for some character developments in the early Watch novels.

William de Worde is paid to write a newsletter. That's not precisely what he calls it, and it's not clear whether his patrons know that he publishes it that way. He's paid to report on news of Ankh-Morpork that may be of interest to various rich or influential people who are not in Ankh-Morpork, and he discovered the best way to optimize this was to write a template of the newsletter, bring it to an engraver to make a plate of it, and run off copies for each of his customers, with some minor hand-written customization. It's a comfortable living for the estranged younger son of a wealthy noble. As the story opens, William is dutifully recording the rumor that dwarfs have discovered how to turn lead into gold.

The rumor is true, although not in the way that one might initially assume.

The world is made up of four elements: Earth, Air, Fire, and Water. This is a fact well known even to Corporal Nobbs. It's also wrong. There's a fifth element, and generally it's called Surprise.

For example, the dwarfs found out how to turn lead into gold by doing it the hard way. The difference between that and the easy way is that the hard way works.

The dwarfs used the lead to make a movable type printing press, which is about to turn William de Worde's small-scale, hand-crafted newsletter into a newspaper.

The movable type printing press is not unknown technology. It's banned technology, because the powers that be in Ankh-Morpork know enough to be deeply suspicious of it. The religious establishment doesn't like it because words are too important and powerful to automate. The nobles and the Watch don't like it because cheap words cause problems. And the engraver's guild doesn't like it for obvious reasons. However, Lord Vetinari knows that one cannot apply brakes to a volcano, and commerce with the dwarfs is very important to the city. The dwarfs can continue. At least for now.

As in Moving Pictures, most of The Truth is an idiosyncratic speedrun of the social effects of a new technology, this time newspapers. William has no grand plan; he's just an observant man who likes to write, cares a lot about the truth, and accidentally stumbles into editing a newspaper. (This, plus being an estranged son of a rich family, feels very on-point for journalism.) His naive belief is that people want to read true things, since that's what his original patrons wanted. Truth, however, may not be in the top five things people want from a newspaper.

This setup requires some narrative force to push it along, which is provided by a plot to depose Vetinari by framing him for murder. The most interesting part of that story is Mr. Pin and Mr. Tulip, the people hired to do the framing and then dispose of the evidence. They're a classic villain type: the brains and the brawn, dangerous, terrifying, and willing to do horrible things to people. But one thing Pratchett excels at is taking a standard character type, turning it a bit sideways, and stuffing in things that one wouldn't think would belong. In this case, that's Mr. Tulip's deep appreciation for, and genius grasp of, fine art. It should not work to have the looming, awful person with anger issues be able to identify the exact heritage of every sculpture and fine piece of goldsmithing, and yet somehow it does.

Also as in Moving Pictures (and, in a different way, Soul Music), Pratchett tends to anthropomorphize technology, giving it a life and motivations of its own. In this case, that's William's growing perception of the press as an insatiable maw into which one has to feed words. I'm usually dubious of shifting agency from humans to things when doing social analysis (and there's a lot of social analysis here), but I have to concede that Pratchett captures something deeply true about the experience of feedback loops with an audience. A lot of what Pratchett puts into this book about the problematic relationship between a popular press and the truth is obvious and familiar, but he also makes some subtle points about the way the medium shapes what people expect from it and how people produce content for it that are worthy of Marshall McLuhan.

The interactions between William and the Watch were less satisfying. In our world, the US press is, with only rare exceptions, a thoughtless PR organ for police propaganda and the exonerative tense. Pratchett tackles that here... sort of. William vaguely grasps that his job as a reporter may be contrary to the job of the Watch to maintain order, and Vimes's ambivalent feelings towards "solving crimes" push the story in that direction. But this is also Vimes, who is clearly established as one of the good sort and therefore is a bad vehicle for talking about how the police corrupt the press. Pratchett has Vimes and Vetinari tacitly encourage William, which works within the story but takes the pressure off the conflict and leaves William well short of understanding the underlying politics. There's a lot more that could be said about the tension between the press and the authorities, but I think the Discworld setup isn't suitable for it.

This is the sort of book that benefits from twenty-four volumes of backstory and practice. Pratchett's Ankh-Morpork cast ticks along like a well-oiled machine, which frees up space that would otherwise have to be spent on establishing secondary characters. The result is a lot of plot and social analysis shoved into a standard-length Discworld novel, and a story that's hard to put down. The balance between humor and plot is just about perfect, the references and allusions aren't overwhelming, and the supporting characters, both new and old, are excellent. We even get a good Death sequence. This is solid, consistent stuff: Discworld as a mature, well-developed setting with plenty of stories left to tell.

Followed by Thief of Time in publication order, and later by Monstrous Regiment in the vaguely-connected industrial revolution sequence.

Rating: 8 out of 10

16 January, 2023 02:51AM

Freexian Collaborators

Monthly report about Debian Long Term Support, December 2022 (by Anton Gladky)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In December, 17 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 3.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 11.0h to the next month.
  • Anton Gladky did 8.0h (out of 6.0h assigned and 9.0h from previous period), thus carrying over 7.0h to the next month.
  • Ben Hutchings did 24.0h (out of 9.0h assigned and 15.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Dominik George did 0.0h (out of 10.0h assigned and 14.0h from previous period), thus carrying over 24.0h to the next month.
  • Emilio Pozuelo Monfort did 8.0h in December, 8.0h in November (out of 1.5h assigned and 49.5h from previous period), thus carrying over 43.0h to the next month.
  • Enrico Zini did 0.0h (out of 0h assigned and 8.0h from previous period), thus carrying over 8.0h to the next month.
  • Guilhem Moulin did 17.5h (out of 20.0h assigned), thus carrying over 2.5h to the next month.
  • Helmut Grohne did 15.0h (out of 15.0h assigned, 2.5h were taken from the extra-budget and worked on).
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 10.0h (out of 7.5h assigned and 8.5h from previous period), thus carrying over 6.0h to the next month.
  • Roberto C. Sánchez did 24.5h (out of 20.25h assigned and 11.75h from previous period), thus carrying over 7.5h to the next month.
  • Stefano Rivera did 2.5h (out of 20.5h assigned and 14.5h from previous period), thus carrying over 32.5h to the next month.
  • Sylvain Beucler did 20.5h (out of 37.0h assigned and 22.0h from previous period), thus carrying over 38.5h to the next month.
  • Thorsten Alteholz did 10.0h (out of 14.0h assigned), thus carrying over 4.0h to the next month.
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 51.5h (out of 42.5h assigned and 9.0h from previous period).

Evolution of the situation

In December, we have released 47 DLAs, closing 232 CVEs. In the same year, in total we released 394 DLAs, closing 1450 CVEs.

We are constantly growing and seeking new contributors. If you are a Debian Developer and want to join the LTS team, please contact us.

Thanks to our sponsors

Sponsors that joined recently are in bold.

16 January, 2023 12:00AM by Anton Gladky

January 15, 2023

Matthew Garrett

Blogging and microblogging

Long-term Linux users may remember that Alan Cox used to write an online diary. This was before the concept of a "Weblog" had really become a thing, and there certainly weren't any expectations around what one was used for - while now blogging tends to imply a reasonably long-form piece on a specific topic, Alan was just sitting there noting small life concerns or particular technical details in interesting problems he'd solved that day. For me, that was fascinating. I was trying to figure out how to get into kernel development, and was trying to read as much LKML as I could to figure out how kernel developers did stuff. But when you see discussion on LKML, you're frequently missing the early stages. If an LKML patch is a picture of an owl, I wanted to know how to draw the owl, and most of the conversations about starting in kernel development were very "Draw two circles. Now draw the rest of the owl". Alan's musings gave me insight into the thought processes involved in getting from "Here's the bug" to "Here's the patch" in ways that really wouldn't have worked in a more long-form medium.

For the past decade or so, as I moved away from just doing kernel development and focused more on security work instead, Twitter's filled a similar role for me. I've seen people just dumping their thought process as they work through a problem, helping me come up with effective models for solving similar problems. I've learned that the smartest people in the field will spend hours (if not days) working on an issue before realising that they misread something back at the beginning and that's helped me feel like I'm not unusually bad at any of this. It's helped me learn more about my peers, about my field, and about myself.

Twitter's now under new ownership that appears to think all the worst bits of Twitter were actually the good bits, so I've mostly bailed to the Fediverse instead. There's no intrinsic length limit on posts there - Mastodon defaults to 500 characters per post, but that's configurable per instance. But even at 500 characters, it means there's more room to provide thoughtful context than there is on Twitter, and what I've seen so far is more detailed conversation and higher levels of meaningful engagement. Which is great! Except it also seems to discourage some of the posting style that I found so valuable on Twitter - if your timeline is full of nuanced discourse, it feels kind of rude to just scream "THIS FUCKING PIECE OF SHIT IGNORES THE HIGH ADDRESS BIT ON EVERY OTHER WRITE" even though that's exactly the sort of content I'm there for.

And, yeah, not everything has to be for me. But I worry that as Twitter's relevance fades for the people I'm most interested in, we're replacing it with something that's not equivalent - something that doesn't encourage just dropping 50 characters or so of your current thought process into a space where it can be seen by thousands of people. And I think that's a shame.

15 January, 2023 10:40PM

January 14, 2023

Kentaro Hayashi

bibata cursor theme is available on Debian (unstable)

Recently, the bibata cursor theme became available in Debian (unstable).

You can install via sudo apt install -y bibata-cursor-theme.

After you have installed the theme, you can set the cursor theme via your desktop's configuration (budgie desktop screenshot), or from the command line as sketched after the list below.

Set bibata-cursor-theme

In bibata-cursor-theme, you can choose the following cursor themes:

  • Bibata Original Amber: Yellowish and sharp edge bibata cursors.
  • Bibata Modern Amber: Yellowish and rounded edge bibata cursors.
  • Bibata Original Classic: Black and sharp edge bibata cursors.
  • Bibata Modern Classic: Black and rounded edge bibata cursors.
  • Bibata Original Ice: White and sharp edge bibata cursors.
  • Bibata Modern Ice: White and rounded edge bibata cursors.
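
If you prefer the command line, on a GTK-based desktop such as Budgie or GNOME the cursor theme can usually be set with gsettings. This is only a sketch: the theme directory name below is a guess at how the package names its themes, so check the installed names under /usr/share/icons first.

$ ls /usr/share/icons | grep -i bibata
$ gsettings set org.gnome.desktop.interface cursor-theme 'Bibata-Modern-Ice'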

14 January, 2023 06:16AM

Ian Jackson

SGO (and my) VPN and network access tools - in bookworm

Recently, we managed to get secnet and hippotat into Debian. They are on track to go into Debian bookworm. This completes, in Debian, the set of VPN/networking tools that I (and other Greenend folks) have been using for many years.

The Sinister Greenend Organisation’s suite of network access tools consists mainly of:

  • secnet - VPN.
  • hippotat - IP-over-HTTP (workaround for bad networks)
  • userv ipif - user-created network interfaces

secnet

secnet is our very mature VPN system.

Its basic protocol idea is similar to that in Wireguard, but it’s much older. Differences from Wireguard include:

  • Comes with some (rather clumsy) provisioning tooling, supporting almost any desired virtual network topology. In the SGO we have a complete mesh of fixed sites (servers), and a number of roaming hosts (clients), each of which can have one or more sites as its home.

  • No special kernel drivers required. Everything is userspace.

  • An exciting “polypath” mode where packets are sent via multiple underlying networks in parallel, offering increased reliability for roaming hosts.

  • Portable to non-Linux platforms.

  • A much older, and less well audited, codebase.

  • Very flexible configuration arrangements, but things are also under-documented and to an extent under-productised.

  • Hasn’t been ported to phones/tablets.

secnet was originally written by Stephen Early, starting in 1996 or so. I inherited it some years ago and have been maintaining it since. It’s mostly written in C.

Hippotat

Hippotat is best described by copying the intro from the docs:

Hippotat is a system to allow you to use your normal VPN, ssh, and other applications, even in broken network environments that are only ever tested with “web stuff”.

Packets are parcelled up into HTTP POST requests, resembling form submissions (or JavaScript XMLHttpRequest traffic), and the returned packets arrive via the HTTP response bodies.

It doesn’t rely on TLS tunnelling so can work even if the local network is trying to intercept TLS. I recently rewrote Hippotat in Rust.

userv ipif

userv ipif is one of the userv utilities.

It allows safe delegation of network routing to unprivileged users. The delegation is of a specific address range, so different ranges can be delegated to different users, and the authorised user cannot interfere with other traffic.

This is used in the default configuration of hippotat packages, so that an ordinary user can start up the hippotat client as needed.

On chiark userv-ipif is used to delegate networking to users, including administrators of allied VPN realms. So chiark actually runs at least 4 VPN-ish systems in production: secnet, hippotat, Mark Wooding’s Tripe, and still a few links managed by the now-superseded udptunnel system.

userv

userv ipif is a userv service. That is, it is a facility which uses userv to bridge a privilege boundary.

userv is perhaps my most under-appreciated program. userv can be used to straightforwardly bridge (local) privilege boundaries on Unix systems.

So for example it can:

  • Allow a sysadmin to provide a shell script to be called by unprivileged users, but which will run as root. sudo can do this too but it has quite a few gotchas, and you have to be quite careful how you use it - and its security record isn’t great either.

  • Form the internal boundary in a privilege-separated system service. So, for example, the hippotat client is a program you can run from the command line as a normal user, if the relevant network addresses have been delegated to you. On chiark, CGI programs run as the providing user - not using suexec (which I don’t trust), but via userv.

userv services can be defined by the called user, not only by the system administrator. This allows a user to reconfigure or divert a system-provided default implementation, and even allows users to define and implement ad-hoc services of their own. (Although, the system administrator can override user config.)
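
As a hedged sketch of what the calling side looks like (the service name here is made up; the actual behaviour is whatever the service definition in the called user's or administrator's userv configuration allows), invoking a service is roughly:

$ userv root example-backup /home/me

i.e. ask userv to run the hypothetical example-backup service as root with one argument, with userv deciding whether and how to honour the request.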

Acknowledgements

Thanks for the help I had in this effort.

In particular, thanks to Sean Whitton for encouragement, and the ftpmaster review; and to the Debian Rust Team for their help navigating the complexities of handling Rust packages within the Debian Rust Team workflow.



14 January, 2023 12:41AM

Matt Brown

Rebooting...

Hi!

After nearly 7 years of dormancy, I’m rebooting this website and have a goal to write regularly on a variety of topics going forward. More on that and my goals in a coming post…

For now, this is just a placeholder note to help double-check that everything on the new site is working as expected and the letters are flowing through the “pipes” in the right places.

Technical Details

I’ve migrated the site from WordPress to a fully static configuration using Hugo, with TailwindCSS for help with styling.

For now hosting is still on a Linode VM in Fremont, CA, but my plan is to move to a more specialized static hosting service with better CDN reach in the very near future.

If you want to inspect the innards further, it’s all at https://github.com/mattbnz/web-matt.

Still on the TODO list

  • Improve the hosting situation as noted above.
  • Integrate Bert Hubert’s nice audience minutes analytics script.
  • Write up (or find) LinkedIn/Twitter/Mastodon integration scripts to automatically post updates when a new piece of writing appears on the site, to build/notify followers and improve the social reach. Ideally, the script would then also update the page here with links to the thread(s), so that readers can easily join/follow any resulting conversation on the platform of their choice. I’m not planning to add any direct comment or feedback functionality on the site itself.
  • Add a newsletter/subscription option for folks who don’t use RSS and would prefer updates via email rather than a social feed.

14 January, 2023 12:00AM

January 13, 2023

Reproducible Builds (diffoscope)

diffoscope 232 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 232. This version includes the following changes:

[ Chris Lamb ]
* Allow ICC tests to (temporarily) fail.
* Update debian/tests/control after the addition of PyPDF 3 support.

[ FC Stegerman ]
* Update regular expression for Android .APK files.

[ Sam James ]
* Support PyPDF version 3.

You can find out more by visiting the project homepage.

13 January, 2023 12:00AM

January 12, 2023

Jonathan McDowell

Building a read-only Debian root setup: Part 1

I mentioned in the post about upgrading my home internet that part of the work I did was creating a read-only Debian root with a squashfs image. This post covers the details of how I boot with that image; a later post will cover how I build the squashfs image.

First, David Reader kindly pointed me at his rodebian setup, which was helpful in making me think about the whole problem but ultimately not the direction I went. Primarily because on the old router (an RB3011) I am space constrained, with only 120M of usable flash, and so ideally I wanted as much as possible of the system in a well compressed filesystem. squashfs seemed like the best option for that, and ultimately I ended up with a 39M image.
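
For reference, and only as a hedged sketch since the actual image build is the subject of the later post, a minimal mksquashfs invocation with xz compression over a prepared root tree looks roughly like this (the directory and file names are placeholders):

$ mksquashfs rootfs/ router.arm64.squashfs -comp xz -noappend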

I’ve then used overlayfs to mount a tmpfs, so I get what looks like a writeable system without having to do too many tweaks to the actual install. On the plus side I can then see exactly what is getting written where and decide whether I need to update something in the squashfs. I don’t boot with an initrd - for initial testing I booted directly off a USB stick. I’ve actually ended up continuing to do this in production, because I’ve had no pressing reason to move it all to booting off internal flash (I’ve ended up with a Sandisk SDCZ430-032G-G46 which is tiny). However nothing I’m going to describe is dependent on that - this would work perfectly well for an initial UBIFS rootfs on internal NAND.

So the basic overview is that I boot off a minimal rootfs, mount a squashfs, create an appropriate tmpfs, mount an overlayfs that combines the two, then pivot_root into the overlayfs and exec its init so it becomes the rootfs.

For the minimal rootfs I started with busybox, in particular I used the armhf busybox-static package from Debian. My RB5009 is an ARM64, but I wanted to be able to test on the RB3011 as well, which is ARMv7. Picking an armhf binary for the minimal rootfs lets me use the same image for both. Using the static build helps reduce the number of pieces involved in putting it all together.

The busybox binary goes in /bin. I was able to cheat and chroot into the empty rootfs and call busybox --install -s to create symlinks for all the tools it provides, but I could have done this manually. There’s only a handful that are actually needed, but it’s amazing how much is crammed into a 1.2M binary.

/sbin/init is a shell script:

Contents
#!/bin/ash

# Make sure we have a sane date
if [ -e /data/saved-date ]; then
        CURRENT_DATE=$(date -Iseconds)
        if [ "${CURRENT_DATE:0:4}" -lt "2022" -o \
                        "${CURRENT_DATE:0:4}" -gt "2030" ]; then
                echo Setting initial date
                date -s "$(cat /data/saved-date)"
        fi
fi

# Work out what platform we're on
ARCH=$(uname -m)
if [ "${ARCH}" == "aarch64" ]; then
        ARCH=arm64
else
        ARCH=armhf
fi

# Mount a tmpfs to store the changes
mount -t tmpfs root-rw /mnt/overlay/rw

# Make the directories we need in the tmpfs
mkdir /mnt/overlay/rw/upper
mkdir /mnt/overlay/rw/work

# Mount the squashfs and build an overlay root filesystem of it + the tmpfs
mount -t squashfs -o loop /data/router.${ARCH}.squashfs /mnt/overlay/lower
mount -t overlay \
        -o lowerdir=/mnt/overlay/lower,upperdir=/mnt/overlay/rw/upper,workdir=/mnt/overlay/rw/work \
        overlayfs-root /mnt/root

# Build the directories we need within the new root
mkdir /mnt/root/mnt/flash
mkdir /mnt/root/mnt/overlay
mkdir /mnt/root/mnt/overlay/lower
mkdir /mnt/root/mnt/overlay/rw

# Copy any stored state
if [ -e /data/state.${ARCH}.tar ]; then
        echo Restoring stored state
        cd /mnt/root
        tar xf /data/state.${ARCH}.tar
fi

cd /mnt/root
pivot_root . mnt/flash
echo Switching into root filesystem
exec chroot . sh -c "$(cat <<END
mount --move /mnt/flash/mnt/overlay/lower /mnt/overlay/lower
mount --move /mnt/flash/mnt/overlay/rw /mnt/overlay/rw
exec /sbin/init
END
)"

Most of what the script is doing is sorting out the squashfs + tmpfs backed overlayfs that becomes the full root filesystem, but there are a few other bits to note. First, we pick up a saved date from /data/saved-date - the router has no RTC, and while it’ll sort itself out with NTP once it gets networking up, it’s useful to make sure we don’t end up comically far in the past or future. Second, the script looks at what architecture we’re running and picks up an appropriate squashfs image from /data based on that. This let me use the same USB stick for testing on both the RB3011 and the RB5009. Finally we allow for a /data/state.${ARCH}.tar file to let us pick up changes to the rootfs at boot time - this prevents having to rebuild the squashfs image every time there’s a persistent change.

The other piece that doesn’t show up in the script is that the kernel and its modules are all installed into this initial rootfs (and then symlinked from the squashfs). This lets me build a mostly modular kernel, as long as all the necessary drivers to mount the USB stick are built in.

Once the system is fully booted the initial rootfs is available at /mnt/flash, by default mounted read-only (to avoid inadvertent writes), but able to be remounted to update the squashfs image, install a new kernel, or update the state tarball. /mnt/overlay/rw/upper/ is where updates to the overlayfs are written, which provides an easy way to see what files are changing, initially to determine what might need to be tweaked in the squashfs creation process and subsequently to see what needs to be updated in the state tarball.
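
As a hedged illustration of that update path (the paths follow the init script above, but the exact procedure is the author's own), refreshing the squashfs image in place might look something like:

# mount -o remount,rw /mnt/flash
# cp router.arm64.squashfs /mnt/flash/data/router.arm64.squashfs
# mount -o remount,ro /mnt/flash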

12 January, 2023 09:38PM

January 11, 2023

Junichi Uekawa

Reading through intrusive-collections.

Reading through intrusive-collections. My eyes are not quite used to reading macro packages and they don't quite make sense to me yet. Error messages look strange too.

11 January, 2023 01:16AM by Junichi Uekawa

January 10, 2023

Daniel Lange

Happy tenth birthday, dear Thunar bug

Thunar, the Xfce4 file manager, has had a bug for ten years now where it underflows the time remaining for a file copy (bugzilla, gitlab). Happy birthday!

10 January, 2023 11:00PM by Daniel Lange

Matthew Garrett

Integrating Linux with Okta Device Trust

I've written about bearer tokens and how much pain they cause me before, but sadly wishing for a better world doesn't make it happen so I'm making do with what's available. Okta has a feature called Device Trust which allows you to configure access control policies that prevent people obtaining tokens unless they're using a trusted device. This doesn't actually bind the tokens to the hardware in any way, so if a device is compromised or if a user is untrustworthy this doesn't prevent the token ending up on an unmonitored system with no security policies. But it's an incremental improvement, other than the fact that for desktop it's only supported on Windows and MacOS, which really doesn't line up well with my interests.

Obviously there's nothing fundamentally magic about these platforms, so it seemed fairly likely that it would be possible to make this work elsewhere. I spent a while staring at the implementation using Charles Proxy and the Chrome developer tools network tab and had worked out a lot, and then Okta published a paper describing a lot of what I'd just laboriously figured out. But it did also help clear up some points of confusion and clarified some design choices. I'm not going to give a full description of the details (with luck there'll be code shared for that before too long), but here's an outline of how all of this works. Also, to be clear, I'm only going to talk about the desktop support here - mobile is a bunch of related but distinct things that I haven't looked at in detail yet.

Okta's Device Trust (as officially supported) relies on Okta Verify, a local agent. When initially installed, Verify authenticates as the user, obtains a token with a scope that allows it to manage devices, and then registers the user's computer as an additional MFA factor. This involves it generating a JWT that embeds a number of custom claims about the device and its state, including things like the serial number. This JWT is signed with a locally generated (and hardware-backed, using a TPM or Secure Enclave) key, which allows Okta to determine that any future updates from a device claiming the same identity are genuinely from the same device (you could construct an update with a spoofed serial number, but you can't copy the key out of a TPM so you can't sign it appropriately). This is sufficient to get a device registered with Okta, at which point it can be used with Fastpass, Okta's hardware-backed MFA mechanism.

As outlined in the aforementioned deep dive paper, Fastpass is implemented via multiple mechanisms. I'm going to focus on the loopback one, since it's the one that has the strongest security properties. In this mode, Verify listens on one of a list of 10 or so ports on localhost. When you hit the Okta signin widget, choosing Fastpass triggers the widget into hitting each of these ports in turn until it finds one that speaks Fastpass and then submits a challenge to it (along with the URL that's making the request). Verify then constructs a response that includes the challenge and signs it with the hardware-backed key, along with information about whether this was done automatically or whether it included forcing the user to prove their presence. Verify then submits this back to Okta, and if that checks out Okta completes the authentication.

Doing this via loopback from the browser has a bunch of nice properties, primarily around the browser providing information about which site triggered the request. This means the Verify agent can make a decision about whether to submit something there (ie, if a fake login widget requests your creds, the agent will ignore it), and also allows the issued token to be cross-checked against the site that requested it (eg, if g1thub.com requests a token that's valid for github.com, that's a red flag). It's not quite at the same level as a hardware WebAuthn token, but it has many of the anti-phishing properties.

But none of this actually validates the device identity! The entire registration process is up to the client, and clients are in a position to lie. Someone could simply reimplement Verify to lie about, say, a device serial number when registering, and there'd be no proof to the contrary. Thankfully there's another level to this to provide stronger assurances. Okta allows you to provide a CA root[1]. When Okta issues a Fastpass challenge to a device the challenge includes a list of the trusted CAs. If a client has a certificate that chains back to that, it can embed an additional JWT in the auth JWT, this one containing the certificate and signed with the certificate's private key. This binds the CA-issued identity to the Fastpass validation, and causes the device to start appearing as "Managed" in the Okta device management UI. At that point you can configure policy to restrict various apps to managed devices, ensuring that users are only able to get tokens if they're using a device you've previously issued a certificate to.

I've managed to get Linux tooling working with this, though there's still a few drawbacks. The main issue is that the API only allows you to register devices that declare themselves as Windows or MacOS, followed by the login system sniffing browser user agent and only offering Fastpass if you're on one of the officially supported platforms. This can be worked around with an extension that spoofs user agent specifically on the login page, but that's still going to result in devices being logged as a non-Linux OS which makes interpreting the logs more difficult. There's also no ability to choose which bits of device state you log: there's a couple of existing integrations, and otherwise a fixed set of parameters that are reported. It'd be lovely to be able to log arbitrary material and make policy decisions based on that.

This also doesn't help with ChromeOS. There's no real way to automatically launch something that's bound to localhost (you could probably make this work using Crostini but there's no way to launch a Crostini app at login), and access to hardware-backed keys is kind of a complicated topic in ChromeOS for privacy reasons. I haven't tried this yet, but I think using an enterprise force-installed extension and the chrome.enterprise.platformKeys API to obtain a device identity cert and then intercepting requests to the appropriate port range on localhost ought to be enough to do that? But I've literally never written any Javascript so I don't know. Okta supports falling back from the loopback protocol to calling a custom URI scheme, but once you allow that you're also losing a bunch of the phishing protection, so I'd prefer not to take that approach.

Like I said, none of this prevents exfiltration of bearer tokens once they've been issued, and there's still a lot of ecosystem work to do there. But ensuring that tokens can't be issued to unmanaged machines in the first place is still a step forwards, and with luck we'll be able to make use of this on Linux systems without relying on proprietary client-side tooling.

(Time taken to code this implementation: about two days, and under 1000 lines of new code. Time taken to figure out what the fuck to write: rather a lot longer)

[1] There's also support for having Okta issue certificates, but then you're kind of back to the "How do I know this is my device" situation

10 January, 2023 05:48AM

January 09, 2023

Junichi Uekawa

Uploaded Debian packages, since a long time.

Uploaded Debian packages for the first time in a long while. enumn and remain are rust packages, needed for crosvm. Working slowly through the dependency chain.

09 January, 2023 07:39AM by Junichi Uekawa

Russ Allbery

Review: Black Stars

Review: Black Stars, edited by Nisi Shawl & Latoya Peterson

Publisher: Amazon Original Stories
Copyright: August 2021
ISBN: 1-5420-3272-5
ISBN: 1-5420-3270-9
ISBN: 1-5420-3271-7
ISBN: 1-5420-3273-3
ISBN: 1-5420-3268-7
ISBN: 1-5420-3269-5
Format: Kindle
Pages: 168

This is a bit of an odd duck from a metadata standpoint. Black Stars is a series of short stories (maybe one creeps into novelette range) published by Amazon for Kindle and audiobook. Each one can be purchased separately (or "borrowed" with Amazon Prime), and they have separate ISBNs, so my normal practice would be to give each its own review. They're much too short for that, though, so I'm reviewing the whole group as an anthology.

The cover in the sidebar is for the first story of the series. The other covers have similar designs. I think the one for "We Travel the Spaceways" was my favorite.

Each story is by a Black author and most of them are science fiction. ("The Black Pages" is fantasy.) I would classify them as afrofuturism, although I don't have a firm grasp on its definition.

This anthology included several authors I've been meaning to read and was conveniently available, so I gave it a try, even though I'm not much of a short fiction reader. That will be apparent in the forthcoming grumbling.

"The Visit" by Chimamanda Ngozi Adichie: This is a me problem rather than a story problem, and I suspect it's partly because the story is not for me, but I am very done with gender-swapped sexism. I get the point of telling stories of our own society with enough alienation to force the reader to approach them from a fresh angle, but the problem with a story where women are sexist and condescending to men is that you're still reading a story of condescending sexism. That's particularly true when the analogies to our world are more obvious than the internal logic of the story world, as they are here.

"The Visit" tells the story of a reunion between two college friends, one of whom is now a stay-at-home husband and the other of whom has stayed single. There's not much story beyond that, just obvious political metaphor (the Male Masturbatory Act to ensure no potential child is wasted, blatant harrassment of the two men by female cops) and depressing character studies. Everyone in this story is an ass except maybe Obinna's single friend Eze, which means there's nothing to focus on except the sexism. The writing is competent and effective, but I didn't care in the slightest about any of these people or anything that was happening in their awful, dreary world. (4)

"The Black Pages" by Nnedi Okorafor: Issaka has been living in Chicago, but the story opens with him returning to Timbouctou where he grew up. His parents know he's coming for a visit, but he's a week early as a surprise. Unfortunately, he's arriving at the same time as an al-Qaeda attack on the library. They set it on fire, but most of the books they were trying to destroy were already saved by his father and are now in Issaka's childhood bedroom.

Unbeknownst to al-Qaeda, one of the books they did burn was imprisoning a djinn. A djinn who is now free and resident in Issaka's iPad.

This was a great first chapter of a novel. The combination of a modern setting and a djinn trapped in books with an instant affinity with technology was great. Issaka is an interesting character who is well-placed to introduce the reader to the setting, and I was fully invested in Issaka and Faro negotiating their relationship. Then the story just stopped. I didn't understand the ending, which was probably me being dim, but the real problem was that I was not at all ready for an ending. I would read the novel this was setting up, though. (6)

"2043... (A Merman I Should Turn to Be)" by Nisi Shawl: This is another story that felt like the setup for a novel, although not as good of a novel. The premise is that the United States has developed biological engineering that allows humans to live underwater for extended periods (although they still have to surface occasionally for air, like whales). The use to which that technology is being put is a rerun of Liberia with less colonialism: Blacks are given the option to be modified into merpeople and live under the sea off the US coast as a solution. White supremacists are not happy, of course, and try to stop them from claiming their patch of ocean floor.

This was fine, as far as it went, but I wasn't fond of the lead character and there wasn't much plot. There was some sort of semi-secret plan that the protagonist stumbles across and that never made much sense to me. The best parts of the story were the underwater setting and the semi-realistic details about the merman transformation. (6)

"These Alien Skies" by C.T. Rwizi: In the far future, humans are expanding across the galaxy via automatically-constructed wormhole gates. Msizi's job is to be the first ship through a new wormhole to survey the system previously reached only by the AI construction ship. The wormhole is not supposed to explode shortly after he goes through, leaving him stranded in an alien system with only his companion Tariro, who is not who she seems to be.

This was a classic SF plot, but I still hadn't guessed where it was going, or the relevance of some undiscussed bits of Tariro's past. Once the plot happens, it's a bit predictable, but I enjoyed it despite the depressed protagonist. (6)

"Clap Back" by Nalo Hopkinson: Apart from "The Visit," this was the most directly political of the stories. It opens with Wenda, a protest artist, whose final class project uses nanotech to put racist tchotchkes to an unexpected use. This is intercut with news clippings about a (white and much richer) designer who has found a way to embed memories into clothing and is using this to spread quotes of rather pointed "forgiveness" from a Malawi quilt.

This was one of the few entries in this anthology that fit the short story shape for me. Wenda's project and Burri's clothing interact fifty years later in a surprising way. This was the second-best story of the group. (7)

"We Travel the Spaceways" by Victor LaValle: Grimace (so named because he wears a huge purple coat) is a homeless man in New York who talks to cans. Most of his life is about finding food, but the cans occasionally give him missions and provide minor assistance. Apart from his cans, he's very much alone, but when he comforts a woman in McDonalds (after getting caught thinking about stealing her cheeseburger), he hopes he may have found a partner. If, that is, she still likes him when she discovers the nature of the cans' missions.

This was the best-written story of the six. Grimace is the first-person narrator, and LaValle's handling of characterization and voice is excellent. Grimace makes perfect sense from inside his head, but the reader can also see how unsettling he is to those around him. This could have been a disturbing, realistic story about a schizophrenic man. As one may have guessed from the theme of the anthology, that's not what it is.

I admired the craft of this story, but I found Grimace's missions too horrific to truly like it. There is an in-story justification for them; suffice it to say that I didn't find it believable. An expansion with considerably more detail and history might have bridged that gap, but alas, short fiction. (6)

Rating: 6 out of 10

09 January, 2023 05:54AM

Gunnar Wolf

Back to Xochicalco

In Mexico, we have the great luck to live among vestiges of long-gone cultures, some that were conquered and in some way got adapted and survived into our modern, mostly-West-European-derived society, and some that thrived but disappeared many more centuries ago. And although not everybody feels the same way, in my family we have always enjoyed visiting archaeological sites — when I was a child and today.

Some of the regulars that follow this blog (or its syndicators) will remember Xochicalco, as it was the destination we chose for the daytrip back in the day, in DebConf6 (May 2006).

This weekend, my mother suggested we go there since, it being winter, the weather is quite pleasant; we were at about 25°C, and in the hottest months of the year it can easily reach 10° more. The place lacks shade, like most archaeological sites, and it does get quite tiring nevertheless!

Xochicalco is quite unique among our archaeological sites, as it was built as a conference city: people came from cultures spanning all of Mesoamerica to debate and homogenize the calendars used in the region. The first photo I shared here is by the Quetzalcóatl temple, where each of the four sides shows people from different cultures (the styles in which they are depicted follow their local self-representations), encodes equivalent dates in the different calendar systems, and places them alongside representations of the God of knowledge, the feathered serpent, Quetzalcóatl.

It was a very nice day out. And, of course, it brought back memories of my favorite conference visiting the site of a very important conference 😉

09 January, 2023 05:20AM

January 08, 2023

Petter Reinholdtsen

LinuxCNC MQTT publisher component

I watched a 2015 video from Andreas Schiffler the other day, where he set up LinuxCNC to send status information to the MQTT broker IBM Bluemix. As I also use MQTT for graphing, it occurred to me that a generic MQTT LinuxCNC component would be useful, and I set out to implement it. Today I got the first draft limping along and submitted it as a patch to the LinuxCNC project.

The simple part was setting up the MQTT publishing code in Python. I already have other parts of my setup submitting data to my Mosquitto MQTT broker, so I could reuse that code. Writing a LinuxCNC component in Python was new to me, but using existing examples in the code repository and the extensive documentation, it was fairly straightforward. The hardest part was creating an automated test for the component to ensure it was working. Testing it in a simulated LinuxCNC machine proved very useful, as I discovered features I needed that I had not thought of yet, and I adjusted the code quite a bit to make it easier to test without an operational MQTT broker available.
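
When a broker is available, a quick way to eyeball what a publisher like this emits is mosquitto_sub from the mosquitto-clients package. The topic below is only a guess at a plausible prefix, not necessarily what the draft component uses:

$ mosquitto_sub -h localhost -t 'linuxcnc/#' -v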

The draft is ready and working, but I am unsure which LinuxCNC HAL pins I should collect and publish by default (in other words, the default set of information pieces published), and how to get the machine name from the LinuxCNC INI file. The latter is a minor detail, but I expect it would be useful in a setup with several machines available. I am hoping for feedback from the experienced LinuxCNC developers and users, to make the component even better before it can go into the mainline LinuxCNC code base.

Since I started on the MQTT component, I came across another video from Kent VanderVelden, where he combines LinuxCNC with a set of screen glasses controlled by a Raspberry Pi, and it occurred to me that it would be useful for such use cases if LinuxCNC also provided a REST API for querying its status. I hope to start on such a component once the MQTT component is working well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

08 January, 2023 06:30PM

Anuradha Weeraman

Parallelizing and running distributed builds with distcc

Parallelizing the compilation of a large codebase is a breeze with distcc, which allows you to spread the load across multiple nodes and speed up the compilation time.

Here’s a sample network topology for a distributed build:

Install distcc on the three Debian/Ubuntu-based nodes:

# apt install distcc

Edit /etc/default/distcc and set:

STARTDISTCC="true"

# Customize for your environment
ALLOWEDNETS="192.168.2.0/24"

# Specify your network device
LISTENER="192.168.2.146"

Additionally, the JOBS and NICE variables can be tweaked to suit the compute power that you have available.
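
For example, in /etc/default/distcc (the values here are purely illustrative; pick them to match each node's CPU count and how much you want background builds to yield to other work):

JOBS="8"
NICE="10"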

Start distcc:

# systemctl start distcc

Do the same on all the nodes, and if you have a firewall enabled with ufw, you will need to open up port 3632 to the master node.

# ufw allow 3632/tcp

Additionally, if you’d like to use ssh so that code and communication with the worker nodes travel over a secure channel on untrusted networks, ensure that SSH is running on the workers, is opened up to the master node in the same manner as above, and that the master node’s key is in ~/.ssh/authorized_keys on the worker nodes. Opening up port 3632 directly is a security hole, so take precautions on untrusted networks.

Back on the master node, set up a DISTCC_HOSTS environment variable that lists the worker nodes, including the master node. Note the order of the hosts, as it is important: the first host will be more heavily used, and distcc has no way of knowing the capacity and capability of the hosts, so specify the most powerful host first.

export DISTCC_HOSTS='localhost 192.168.2.107 192.168.2.91'
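
If you went the ssh route described earlier, the host list uses an '@' prefix instead; a sketch with the same illustrative addresses:

export DISTCC_HOSTS='localhost @192.168.2.107 @192.168.2.91'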

At this point, you’re ready to compile.

Go to your codebase; in this case we use the Linux kernel source code as an example.

$ make tinyconfig
$ time make -j$(nproc) CC=distcc bzImage

On another terminal, you can monitor the status of the distributed compilation with distccmon-text, or tools such as top or bpytop.

Network throughput and latency will be a big factor in how much distcc speeds up your build process. Using ssh may introduce additional overhead, so play with the variables to see how much distcc can speed up or optimize the build for your specific scenario. You may additionally want to consider ccache to avoid recompiling unchanged code; a sketch of combining the two follows.
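
A hedged sketch of combining the two, using ccache's CCACHE_PREFIX hook so that cache misses are farmed out through distcc (job counts and targets as before):

$ export CCACHE_PREFIX=distcc
$ time make -j$(nproc) CC="ccache gcc" bzImage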

There are some aspects of the build process that are not effectively parallelizable in this manner, such as the final linking step of the executable, for which you will not see any performance improvement with distcc.

Give distcc a spin, and put any spare compute you have lying around in your home lab to good use.

08 January, 2023 04:23PM by Anuradha Weeraman

Scarlett Gately Moore

Debian: Coming soon! MycroftAI! KDE snaps update.

I am excited to announce that I have joined the MycroftAI team in Salsa and am working hard to get this packaged up and released in Debian. You can track our progress here:

https://salsa.debian.org/mycroftai-team

Snaps are on temporary hold while we get everything switched over to core22. This includes the neon-extension, which requires merges and store requests to be honored. Hopefully folks are returning from holidays and things will start moving again. Thank you for your patience!

I am still seeking work. If you or anyone you know is willing to give me a chance to shine, you won’t regret it; life has leveled out and I am ready to make great things happen! I admit the interviewing process is much more difficult than in years past, so any advice here is also appreciated. Thank you for stopping by.

08 January, 2023 03:31PM by sgmoore

Antoine Beaupré

20 years blogging

Many folks have woken up to the dangers of commercialization and centralisation of this very fine internet we have around here. For many of us, of course, it's one big "I told you so"...

(To be fair, I stopped "telling you so" because evangelism is pretty annoying. It's certainly dishonest coming from an atheist, so I preach by example now. I often wonder what works better. But I digress.)

Colleagues have been posting about getting back into blogging. This post from gwolf, in particular, reviews his yearly blog output, and that made me wonder what that looked like from my end. The answer, of course, is simple to generate:

anarcat@angela:~$ cd anarc.at
/home/anarcat/wikis/anarc.at
anarcat@angela:anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9]' | sed s/-.*// | sort | uniq -c  | sort -n -k2
     62 2005
     49 2006
     26 2007
     25 2008
      8 2009
     16 2010
     24 2011
     19 2012
     17 2013
      7 2014
     19 2015
     32 2016
     43 2017
     40 2018
     27 2019
     33 2020
     22 2021
     45 2022
      1 2023

(I thought of drawing this as a sparkline but it looks like Sparklines are kind of dead. https://sparkline.org/ doesn't resolve and the canonical PHP package is gone from Debian. The plugin is broken in ikiwiki anyway...)

So it seems like I've been doing this very blog for 18 years, and it's not even my first blog. I actually started in 2003, which makes this year my 20-year blogging anniversary.

(And even if that sounds really old, note that I was not actually an early adopter: Jorn Barger coined the term "weblog" back in 1997. Yes, in another millennium.)

Reading back some of the headlines in those older posts, I have definitely changed style. I used to write shorter, more random ideas, and more often. I somehow managed to write more than one article per week in 2005!

Now, I take more time writing, a habit I picked up while writing for LWN (those articles), which started in 2016. But interestingly, it seems I started producing more articles then: I hit 43 articles per year in 2017, my fourth best year ever.

The best years in terms of numbers are the first two years (2005 and 2006, I didn't check the numbers on earlier years), but I doubt they are the best in terms of content. Really, the best writing I have ever done was for LWN. I dare hope I have kept the quality I was encouraged (forced?) to produce, but I know I cannot come anywhere close to what the LWN editors were juicing out of me. You can also see that I immediately dropped to a more normal 27 articles in 2019, once I stopped writing for LWN...

Back when I started blogging, my writing was definitely more personal. I had less concerns about privacy back then; now, I would never write about my personal life like I did back then (e.g. "I have a cold").

I was also writing mostly in French back then, and it's sad to think that I am rarely writing in my native language here anymore. I guess that brings me an international audience, which is simultaneously challenging, gratifying, and terrifying. But it also means I reach out to people that do not speak English (or French, for that matter) as their first language. For me that is more valuable than catering to my little corner of culture, at least for now, and especially when writing technical topics, which is most of what I do now anyways.

Interestingly, I wrote a lot in 2022. People sometimes ask me how I manage to write so much: I don't actually know. I don't have a lot of free time on my hands, and even less in the past two years than before, but somehow I keep feeding this blog.

I guess I must have something to say. Can't quite figure out what yet, so maybe I should just keep trying.

Or, if you're new to this Internet thing, Bring Back Blogging! Whee!

PS: I wish I had time to do a review of my visitors like I did for 2021, but time is lacking.

08 January, 2023 04:09AM

Charles Plessy

Could somebody patch Firefox to display Markdown files?

When Firefox receives a file with media type text/markdown, it prompts the user to download it, while other browsers display it as plain text. In ticket 1319262, it is proposed to display Markdown files by default, but a patch is needed…

08 January, 2023 12:18AM

January 07, 2023

Reproducible Builds

Reproducible Builds in December 2022

Welcome to the December 2022 report from the Reproducible Builds project.


We are extremely pleased to announce that the dates for the Reproducible Builds Summit in 2023 have been announced in 2022 already:

  • When: October 31st, November 1st, November 2nd 2023.
  • Where: Dock Europe, Hamburg, Germany.

We plan to spend three days continuing to grow the Reproducible Builds effort. As in previous events, the exact content of the meeting will be shaped by the participants. And, as mentioned in Holger Levsen’s post to our mailing list, the dates have been booked and confirmed with the venue, so if you are considering attending, please reserve these dates in your calendar today.


Rémy Grünblatt, an associate professor in the Télécom Sud-Paris engineering school, wrote up his “pain points” of using Nix and NixOS. Although some of the points do not touch on reproducible builds, Rémy touches on problems he has encountered with the different kinds of reproducibility that these distributions appear to promise, including configuration files affecting the behaviour of systems, the fragility of upstream sources as well as the conventional idea of binary reproducibility.


Morten Linderud reported that he is quietly optimistic that if Go programming language resolves all of its issues with reproducible builds (tracking issue) then the Go binaries distributed from Google and by Arch Linux may be bit-for-bit identical. “It’s just a bit early to sorta figure out what roadblocks there are. [But] Go bootstraps itself every build, so in theory I think it should be possible.”


On December 15th, Holger Levsen published an in-depth interview he performed with David A. Wheeler on supply-chain security and reproducible builds, but it also touches on the biggest challenges in computing as well.

This is part of a larger series of posts featuring the projects, companies and individuals who support the Reproducible Builds project. Other instalments include an article featuring the Civil Infrastructure Platform project, a post about the Ford Foundation, as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix, and Hans-Christoph Steiner of the F-Droid project.


A number of changes were made to the Reproducible Builds website and documentation this month, including FC Stegerman adding an F-Droid/apksigcopier example to our embedded signatures page [], Holger Levsen making a large number of changes related to the 2022 summit in Venice as well as 2023’s summit in Hamburg [][][][], and Simon Butler updating our publications page [][].


On our mailing list this month, James Addison asked a question about whether there has been any effort to trace the files used by a build system in order to identify the corresponding build-dependency packages. [] In addition, Bernhard M. Wiedemann then posed a thought-provoking question asking “How to talk to skeptics?”, which was occasioned by a colleague who had published a blog post in May 2021 skeptical of reproducible builds. The thread generated a number of replies.


Android news

obfusk (FC Stegerman) performed a thought-provoking review of tools designed to determine the difference between two different .apk files shipped by a number of free-software instant messenger applications.

These scripts are often necessary in the Android/APK ecosystem due to these files containing embedded signatures so the conventional bit-for-bit comparison cannot be used. After detailing a litany of issues with these tools, they come to the conclusion that:

It’s quite possible these messengers actually have reproducible builds, but the verification scripts they use don’t actually allow us to verify whether they do.

This reflects the consensus view within the Reproducible Builds project: pursuing a situation in language or package ecosystems where binaries are bit-for-bit identical (over requiring a bespoke ecosystem-specific tool) is not a luxury demanded by purist engineers, but rather the only practical way to demonstrate reproducibility. obfusk also announced the first release of their own set of tools on our mailing list.

Related to this, obfusk also posted to an issue filed against Mastodon regarding the difficulties of creating bit-by-bit identical APKs, especially with respect to copying v2/v3 APK signatures created by different tools; they also reported that some APK ordering differences were not caused by building on macOS after all, but by using Android Studio [] and that F-Droid added 16 more apps published with Reproducible Builds in December.


Debian

As mentioned in last months report, Vagrant Cascadian has been organising a series of online sprints in order to ‘clear the huge backlog of reproducible builds patches submitted’ by performing NMUs (Non-Maintainer Uploads).

During December, meetings were held on the 1st, 8th, 15th, 22nd and 29th, resulting in a large number of uploads and bugs being addressed:

The next sprint is due to take place this coming Tuesday, January 10th at 16:00 UTC.


Upstream patches

The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, the following changes were made by Holger Levsen:

  • The osuosl167 machine is no longer an openqa-worker node. [][]
  • Detect problems with APT repository signatures [] and update a repository signing key [].
  • reproducible Debian builtin-pho: improve job output. []
  • Only install the foot-terminfo package on Debian systems. []

In addition, Mattia Rizzolo added support for the version of diffoscope in Debian stretch which doesn’t support the --timeout flag. [][]


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 228, 229 and 230 to Debian:

  • Fix compatibility with file(1) version 5.43, with thanks to Christoph Biedl. []
  • Skip the test_html.py::test_diff test if html2text is not installed. (#1026034)
  • Update copyright years. []

In addition, Jelle van der Waa added support for Berkeley DB version 6. []

Orthogonal to this, Holger Levsen bumped the Debian Standards-Version on all of our packages, including diffoscope [], strip-nondeterminism [], disorderfs [] and reprotest [].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can get in touch with us via:

07 January, 2023 03:22PM

Jonathan Carter

What are the most important improvements that Debian need to make?

“What are the most important improvements that Debian need to make?” – I decided to see what all the fuss is about and asked ChatGPT that exact question.

Its response:

There are many areas where the Debian project could make improvements, and the most important ones will depend on the priorities and needs of the project and its community. Some potential areas for improvement could include:

  1. Improving the user experience: Debian is known for its stability and security, but it is not always the easiest distribution to use, especially for new users. Improving the user experience and making Debian more accessible to a wider audience could be an important goal.
  2. Keeping up with the rapid pace of change in the software ecosystem: With over 50,000 packages, Debian is a large distribution, and it can be difficult to keep up with the constant stream of updates and new software being released. Developing tools and processes to help manage this complexity and ensure that new packages are tested and integrated smoothly could be an important focus.
  3. Maintaining the stability and security of the distribution: Ensuring that Debian remains a stable and secure platform is a top priority for the project. This includes testing new packages thoroughly before they are included in the distribution, responding quickly to security vulnerabilities, and investing in infrastructure and processes to support these efforts.
  4. Ensuring the long-term sustainability of the project: Debian is a volunteer-driven project, and it can be challenging to attract and retain contributors over the long term. Developing strategies to support and encourage participation and collaboration within the community could be an important focus.
  5. Enhancing the tools and infrastructure that support the project: The Debian project relies on a wide range of tools and infrastructure to support its development and distribution efforts. Improving and modernizing these resources could help to make the project more efficient and effective.

I find that to be a great response, and I’m curious where it came from, so I asked “How did you come to that conclusion?”. It replied:

The list of potential improvements for Debian that I provided is based on my understanding of the challenges and opportunities that software projects like Debian typically face, as well as my understanding of the goals and values of the Debian project. It is not based on any specific information about the current priorities or needs of the Debian project or community.

Well, one thing is for sure, the IRC chat bots of the future are going to be much more complex than the ones I grew up with :-)

07 January, 2023 12:19PM by jonathan

January 06, 2023

Thorsten Alteholz

My Debian Activities in December 2022

FTP master

This month I accepted 276 and rejected 27 packages. The overall number of packages that got accepted was 288.

Debian LTS

This was my hundred and second month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h, but due to Christmas I only managed to do 10h.

During that time I uploaded:

  • [DLA 3256-1] xorg-server security update for six CVEs
  • [DLA 3255-1] mplayer security update for ten CVEs

Debian ELTS

This month was the fifty-third ELTS month.

During my allocated time I marked all CVEs of multipath-tools as not-affected and started to work on another snapd update. As I spent more time than expected with my family, I also failed to accomplish my ELTS workload.

Last but not least I did some days of frontdesk duties.

Debian Astro

This month I uploaded improved packages or new versions of:

I also updated almost all of the roughly 50 indi-3rdparty packages.

Debian Mobcom

This month I uploaded improved packages of:

Debian IoT

This month I uploaded improved packages of:

Debian Printing

This month I uploaded improved packages of:

Other stuff

This month I uploaded improved packages of:

Further I uploaded new versions of a bunch of golang packages.

06 January, 2023 04:34PM by alteholz

Jonathan McDowell

Finally making use of bpftrace

I am old enough to remember when BPF meant the traditional Berkeley Packet Filter, and was confined to filtering network packets. It has grown into much, much more as eBPF, and getting familiar with it, so that I can add it to the suite of tips and tricks I can call upon, has been on my to-do list for a while. To this end I was lucky enough to attend a live walk through of bpftrace last year. bpftrace is a high level tool that allows the easy creation and execution of eBPF tracers under Linux.
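
For a flavour of how terse these can be, here is a classic bpftrace one-liner (not from this investigation) that counts syscalls per process until you press Ctrl-C:

# bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'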

Recently I’ve been working on updating the RetroArch packages in Debian and as I was doing so I realised there was a need to update the quite outdated retroarch-assets package, which contains various icons and images used for the user interface. I wanted to try and re-generate as many of the artefacts as I could, to ensure the proper source was available. However it wasn’t always clear which files were actually needed and which were either ‘source’ or legacy. So I wanted to trace file opens by retroarch and see when it was failing to find files. Traditionally this is something I’d have used strace for, but it seemed like a great opportunity to try out bpftrace.

It turns out bpftrace ships with an example, opensnoop.bt, which hooks the open syscall entry + exit and provides details of all files opened on the system. I only wanted to track opens by the retroarch binary that failed, so I made a couple of modifications:

retro-failed-open-snoop.bt
#!/usr/bin/env bpftrace
/*
 * retro-failed-open-snoop - snoop failed opens by RetroArch
 *
 * Based on:
 * opensnoop	Trace open() syscalls.
 *		For Linux, uses bpftrace and eBPF.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 08-Sep-2018	Brendan Gregg	Created this.
 */

BEGIN
{
	printf("Tracing open syscalls... Hit Ctrl-C to end.\n");
	printf("%-6s %-16s %3s %s\n", "PID", "COMM", "ERR", "PATH");
}

tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
	@filename[tid] = args->filename;
}

tracepoint:syscalls:sys_exit_open,
tracepoint:syscalls:sys_exit_openat
/@filename[tid]/
{
	$ret = args->ret;
	$errno = $ret > 0 ? 0 : - $ret;

	if (($ret <= 0) && (strncmp("retroarch", comm, 9) == 0) ) {
		printf("%-6d %-16s %3d %s\n", pid, comm, $errno,
		    str(@filename[tid]));
	}
	delete(@filename[tid]);
}

END
{
	clear(@filename);
}

I had to install bpftrace (apt install bpftrace) and then I ran bpftrace -o retro.log retro-failed-open-snoop.bt as root and fired up retroarch as a normal user.

bpftrace failed open log for retroarch
Attaching 6 probes...
Tracing open syscalls... Hit Ctrl-C to end.
PID    COMM             ERR PATH
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/glibc-hwcaps/x86-64-v2/lib
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/x86_64/libpulse
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/x86_64/libpulsecommon-
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/tls/libpulsecommon-16.1.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/x86_64/libpulsecomm
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/pulseaudio/x86_64/libpulsecommon-16.1
3394   retroarch          2 /etc/gcrypt/hwf.deny
3394   retroarch          2 /lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/glibc-hwcaps/x86-64-v2/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so.0
3394   retroarch          2 /lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/tls/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/x86_64/libgamemode.so.0
3394   retroarch          2 /lib/libgamemode.so.0
3394   retroarch          2 /usr/lib/glibc-hwcaps/x86-64-v2/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/tls/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/x86_64/libgamemode.so.0
3394   retroarch          2 /usr/lib/libgamemode.so.0
3394   retroarch          2 /lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libgamemode.so
3394   retroarch          2 /lib/libgamemode.so
3394   retroarch          2 /usr/lib/libgamemode.so
3394   retroarch          2 /lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/libdecor-0.so
3394   retroarch          2 /lib/libdecor-0.so
3394   retroarch          2 /usr/lib/libdecor-0.so
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /usr/lib/x86_64-linux-gnu/dri/tls/iris_dri.so
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/glibc-hwcaps/x86-64-v2/libedit.so.
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/tls/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/x86_64/libedit.so.2
3394   retroarch          2 /lib/x86_64-linux-gnu/../lib/libedit.so.2
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /etc/drirc
3394   retroarch          2 /home/noodles/.drirc
3394   retroarch          2 /home/noodles/.Xdefaults-udon
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /home/noodles/.XCompose
3394   retroarch          2 /home/noodles/.icons/default/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/default/index.theme
3394   retroarch          2 /usr/share/icons/default/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/default/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/Adwaita/index.theme
3394   retroarch          2 /usr/share/icons/Adwaita/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/Adwaita/cursors/0000000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/cursors/00000000000000000000000000
3394   retroarch          2 /home/noodles/.icons/hicolor/index.theme
3394   retroarch          2 /usr/share/icons/hicolor/cursors/000000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/cursors/0000000000000000000000000000
3394   retroarch          2 /usr/share/pixmaps/hicolor/index.theme
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /sys/class/power_supply/ACAD
3394   retroarch          2 /usr/share/libretro/assets/xmb/monochrome/png/disc.png
3394   retroarch          2 /usr/share/libretro/assets/ozone/sounds
3394   retroarch          2 /usr/share/libretro/assets/sounds

This was incredibly useful - the only theme image I was missing was disc.png from XMB Monochrome (which has no SVG source). I also discovered the optional runtime loading of GameMode. This is available in Debian, so it was a simple matter to add libgamemode0 to the binary package's Recommends.

So, a very basic example of using bpftrace, but a remarkably useful intro to it from my point of view!

06 January, 2023 08:29AM

Reproducible Builds (diffoscope)

diffoscope 231 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 231. This version includes the following changes:

* Improve "[X] may produce better output" messages. Based on a patch by
  Helmut Grohne. (Closes: #1026982)

You can find out more by visiting the project homepage.

06 January, 2023 12:00AM

January 05, 2023

Kentaro Hayashi

Rebuild mozc with Mozc UT Dictionary

When rebuilding mozc with the Mozc UT Dictionary, it may be better to build in a Docker container, because you don't want to install unused IM development packages.

Beforehand, download the latest Mozc UT dictionary from osdn.net.

In a debian/sid container, you need to do the following:

# apt install -y devscripts
# (enable deb-src by modifying /etc/apt/sources.list.d/debian.sources)
# apt update
# apt source mozc
# apt build-dep -y mozc
# cat mozcdic-ut-20221230/mozcdic-ut-20221230.txt >> mozc-2.28.4715.102+dfsg/src/data/dictionary_oss/dictionary00.txt
# cd mozc-2.28.4715.102+dfsg/
# (edit debian/changelog, bumping the version, e.g. to 2.28.4715.102+dfsg-2.2.1)
# debuild -us -uc -nc

After that, you can install the resulting packages:

% sudo apt install ./emacs-mozc_2.28.4715.102+dfsg-2.2.1_amd64.deb ./emacs-mozc-bin_2.28.4715.102+dfsg-2.2.1_amd64.deb ./fcitx-mozc-data_2.28.4715.102+dfsg-2.2.1_all.deb ./fcitx5-mozc_2.28.4715.102+dfsg-2.2.1_amd64.deb ./ibus-mozc_2.28.4715.102+dfsg-2.2.1_amd64.deb ./mozc-data_2.28.4715.102+dfsg-2.2.1_all.deb ./mozc-server_2.28.4715.102+dfsg-2.2.1_amd64.deb ./mozc-utils-gui_2.28.4715.102+dfsg-2.2.1_amd64.deb

Then you can use your own built binaries.

% dpkg -l |\grep mozc
ii  emacs-mozc                               2.28.4715.102+dfsg-2.2.1               amd64        Mozc for Emacs
ii  emacs-mozc-bin                           2.28.4715.102+dfsg-2.2.1               amd64        Helper module for emacs-mozc
ii  fcitx-mozc-data                          2.28.4715.102+dfsg-2.2.1               all          Mozc input method - data files for fcitx
ii  fcitx5-mozc:amd64                        2.28.4715.102+dfsg-2.2.1               amd64        Mozc engine for fcitx5 - Client of the Mozc input method
ii  ibus-mozc                                2.28.4715.102+dfsg-2.2.1               amd64        Mozc engine for IBus - Client of the Mozc input method
ii  mozc-data                                2.28.4715.102+dfsg-2.2.1               all          Mozc input method - data files
ii  mozc-server                              2.28.4715.102+dfsg-2.2.1               amd64        Server of the Mozc input method
ii  mozc-utils-gui                           2.28.4715.102+dfsg-2.2.1               amd64        GUI utilities of the Mozc input method

05 January, 2023 11:55AM

January 04, 2023

Enrico Zini

Staticsite redesign

These are some notes about my redesign work in staticsite 2.x.

Mapping constraints and invariants

I started keeping notes of constraints and invariants, and this helped a lot in keeping bounds on the cognitive effort of the design.

I particularly liked how mapping the set of constraints added during site generation helped break processing down into a series of well-defined steps. The code that handles each step now has a specific task, and can rely on clear assumptions.

Declarative page metadata

I designed page metadata as declarative fields added to the Page class.

I used typed descriptors for the fields, so that metadata fields can now have logic and validation, and are self-documenting!

This is the core of the Field implementation.
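
As a rough illustration of the idea (not staticsite's actual code: every name below is made up), a typed descriptor-based field could look something like this:

from typing import Any, Generic, Optional, TypeVar

T = TypeVar("T")


class Field(Generic[T]):
    """A typed, self-documenting metadata field, implemented as a descriptor."""

    def __init__(self, *, default: Optional[T] = None, doc: str = "") -> None:
        self.default = default
        self.__doc__ = doc

    def __set_name__(self, owner: type, name: str) -> None:
        self.name = name

    def __get__(self, obj: Any, objtype: Optional[type] = None) -> Optional[T]:
        if obj is None:
            return self.default
        return obj.__dict__.get(self.name, self.default)

    def __set__(self, obj: Any, value: T) -> None:
        # A real implementation could validate or normalise the value here
        obj.__dict__[self.name] = value


class Page:
    # Fields declare themselves on the class, so they can be listed and documented
    title = Field[str](doc="Page title, used in templates and feeds")
    draft = Field[bool](default=False, doc="If true, the page is not published")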

Lean core

I tried to implement as much as possible in feature plugins, leaving to the staticsite core only what is essential to create the structure for plugins to build on.

The core provides a tree structure, an abstract Page object that can render to a file and resolve references to other pages, a Site that holds settings and controls the various loading steps, and little else.

The only type of content supported by the core is static asset files; Markdown, reStructuredText, images, taxonomies, feeds, directory indices, and so on are all provided via feature plugins.

Feature plugins

Feature plugins work by providing functions to be called at the various loading steps, and mixins to be added to site pages.

Mixins provided by feature plugins can add new declarative metadata fields, and extend Page methods: this ends up being very clean and powerful, and plays decently well with mypy's static type checking, too!

See, for example, the code of the alias feature, which allows a page to declare aliases that redirect to it, useful for example when moving content around.

It has a mixin (AliasPageMixin) that adds an aliases field that holds a list of page paths.

During the "generate" step, when autogenerated pages can be created, the alias feature iterates through all pages that define aliases metadata and generates the corresponding redirection pages.
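
Very loosely, and leaving out the real Field machinery, the shape of it is something like this (hypothetical names, not the actual staticsite API):

class BasePage:
    """Minimal stand-in for a site page."""

    def __init__(self, path: str) -> None:
        self.path = path


class AliasPageMixin:
    """Mixin adding an 'aliases' field: paths that should redirect to this page."""

    aliases: list[str]


class MarkdownPage(AliasPageMixin, BasePage):
    pass


def generate_alias_pages(pages: list[BasePage]) -> list[tuple[str, BasePage]]:
    # The "generate" step: emit one (redirect path, target page) pair per alias
    return [(alias, page)
            for page in pages
            for alias in getattr(page, "aliases", [])]


home = MarkdownPage("/new-home/")
home.aliases = ["/old-home/"]
print(generate_alias_pages([home]))  # [('/old-home/', <MarkdownPage object ...>)]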

Self-documenting code

Staticsite can list loaded features, features can list the page subclasses that they use, and pages can list metadata fields.

As a result, each feature, each type of page, and each field of each page can generate documentation about itself: the staticsite reference is autogenerated in that way, mostly from Feature, Page, and Field docstrings.
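
A toy sketch of the docstring-driven approach (again not the actual staticsite code) might look like:

import inspect


class TitleField:
    """Page title, used in templates and feeds."""


def reference(items: list[type]) -> str:
    # Build a tiny reference document from class names and docstrings
    sections = []
    for item in items:
        doc = inspect.getdoc(item) or "(undocumented)"
        underline = "-" * len(item.__name__)
        sections.append(f"{item.__name__}\n{underline}\n{doc}\n")
    return "\n".join(sections)


print(reference([TitleField]))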

Understand the language, stay close to the language

Python has matured massively over the last few years, and I like to stay on top of the language and standard library release notes for each release.

I like how things that used to be dirty hacks have now found a clean way into the language:

  • what one would once implement with metaclass magic can now mostly be done with descriptors, with language support for it, including static type checking.
  • understanding the inheritance system and method resolution order makes it possible to write type-checkable mixins
  • runtime-accessible docstrings help a lot with autogenerating documentation
  • os.scandir and os functions that accept directory file descriptors make filesystem exploration pleasantly fast, for an interpreted language (a small sketch follows below)!
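
For instance, a minimal scandir-based walk (a generic sketch, not code from staticsite) looks like:

import os


def find_with_suffix(root: str, suffix: str) -> list[str]:
    # Walk a tree with os.scandir, collecting files with the given suffix
    found: list[str] = []

    def scan(path: str) -> None:
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    scan(entry.path)
                elif entry.is_file() and entry.name.endswith(suffix):
                    found.append(entry.path)

    scan(root)
    return found


print(find_with_suffix(".", ".md"))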

04 January, 2023 02:03PM

Junichi Uekawa

debcargo rust repository and some observations.

debcargo rust repository and some observations. It's been about a week since I first started looking at Debian rust packages and adding some packages in preparation for crosvm. Some things don't work quite well yet. My local branches disappeared. I don't have access and everything is through a merge request; presumably that is not a generally supported workflow and the team members are using branches to manage pending work. ./release.sh is optimized for updates; for new packages it only builds source packages, and then I need to rebuild a full package with binaries for the NEW queue. Maybe I will figure it out. I haven't quite found the right IRC client. I was using the web UI, but that seemed to disconnect without any warning, and didn't even tell me when it was disconnected; it just can't post more messages and doesn't receive messages. That's not very useful. I started using the Emacs IRC (rcirc) client. Not sure how useful that is.

04 January, 2023 06:28AM by Junichi Uekawa

Anton Gladky

Boost 1.81 in Debian Testing

The latest version of Boost, version 1.81, is now available in Debian Testing.

As contributors to Boost, we highly encourage you to consider building your package against Boost 1.81 in order to facilitate a smooth transition. Installing the -dev Boost packages from the experimental repository is simple, as shown in the following command:

sudo apt install libboost-dev -t experimental

If you encounter any issues or have suggestions for improvement, please do not hesitate to file bugs or prepare merge requests on salsa.

Thanks to Freexian for supporting this effort.

04 January, 2023 04:16AM

Enrico Zini

Released staticsite 2.x

In theory I wanted to announce the release of staticsite 2.0, but then I found bugs that prevented me from writing this post, so I'm also releasing 2.1, 2.2, and 2.3 :grin:

staticsite is the static site generator that I ended up writing after giving other generators a try.

I did a big round of cleanup of the code, which among other things allowed me to implement incremental builds.

It turned out that staticsite is fast enough that incremental builds are not really needed; however, a bug in caching rendered markdown had made me forget about that. Now I've fixed that bug too, and I can choose between running staticsite fast and ridiculously fast.

My favourite bit of this work is the internal cleanup: I found a way to simplify the core design massively, and now the core and plugin system is simple enough that I can explain it, and I'll probably write a blog post or two about it in the next days.

On top of that, staticsite is basically clean with mypy running in strict mode! Getting there was a great ride which prompted a lot of thinking about designing code properly, as mypy is pretty good at flagging clumsy hacks.

If you want to give it a try, check out the small tutorial A new blog in under one minute.

04 January, 2023 12:30AM

January 03, 2023

Paul Wise

FLOSS Activities December 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: gnome-shell-extension-no-annoyance
  • Debian servers: contact mail server blocking a Debian MX
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The azure-functions-devops-build work was sponsored. All other work was done on a volunteer basis.

03 January, 2023 11:02PM

Enrico Zini

Things I learnt in December 2022

Python: typing.overload

typing.overload makes it easier to type functions whose behaviour depends on their input types. Functions marked with @overload are ignored at runtime and only used by the type checker:

from typing import overload

@overload
def process(response: None) -> None:
    ...
@overload
def process(response: int) -> tuple[int, str]:
    ...
@overload
def process(response: bytes) -> str:
    ...
def process(response):
    # <actual implementation>

Python's multiprocessing and deadlocks

Python's multiprocessing is prone to deadlocks under a number of conditions. In my case, the running program was a standard single-process, non-threaded script, but it used complex native libraries which might have been the trigger for the deadlocks.

The suggested workaround is using set_start_method("spawn"), but when we tried it we hit serious performance penalties.
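
For context, the suggested workaround amounts to something like this (a generic sketch, not the actual code involved):

import multiprocessing as mp


def work(n: int) -> int:
    return n * n


if __name__ == "__main__":
    # "spawn" starts each worker from a fresh interpreter instead of fork()ing
    # the parent, avoiding fork-related deadlocks at the cost of slower startup.
    mp.set_start_method("spawn")
    with mp.Pool(processes=4) as pool:
        print(pool.map(work, range(10)))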

Lesson learnt: multiprocessing is good for prototypes, and may end up being too hacky for production.

In my case, I was already generating small python scripts corresponding to worker tasks, which were useful for reproducing and debugging Magics issues, so I switched to running those as the actual workers. In the future, this may come in handy for dispatching work to HPC nodes, too.

Here's a parallel execution scheduler based on asyncio that I wrote to run them, which may come in handy on other projects as well.
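
The actual scheduler is the one linked above; the rough shape of the approach (with hypothetical script names) is bounded-concurrency subprocess execution, something like:

import asyncio
import sys


async def run_worker(script: str, sem: asyncio.Semaphore) -> int:
    # Run one worker script as a subprocess, with concurrency bounded by the semaphore
    async with sem:
        proc = await asyncio.create_subprocess_exec(sys.executable, script)
        return await proc.wait()


async def run_all(scripts: list[str], max_workers: int = 4) -> list[int]:
    sem = asyncio.Semaphore(max_workers)
    return list(await asyncio.gather(*(run_worker(s, sem) for s in scripts)))


if __name__ == "__main__":
    # task1.py, task2.py, task3.py are hypothetical worker scripts
    exit_codes = asyncio.run(run_all(["task1.py", "task2.py", "task3.py"]))
    print(exit_codes)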

03 January, 2023 09:00PM