May 06, 2021

hackergotchi for Matthew Garrett

Matthew Garrett

More doorbell adventures

Back in my last post on this topic, I'd got shell on my doorbell but hadn't figured out why the HTTP callbacks weren't always firing. I still haven't, but I have learned some more things.

Doorbird sell a chime, a network connected device that is signalled by the doorbell when someone pushes a button. It costs about $150, which seems excessive, but would solve my problem (ie, that if someone pushes the doorbell and I'm not paying attention to my phone, I miss it entirely). But given a shell on the doorbell, how hard could it be to figure out how to mimic the behaviour of one?

Configuration for the doorbell is all stored under /mnt/flash, and there's a bunch of files prefixed 1000eyes that contain config (1000eyes is the German company that seems to be behind Doorbird). One of these was called 1000eyes.peripherals, which seemed like a good starting point. The initial contents were {"Peripherals":[]}, so it seemed likely that it was intended to be JSON. Unfortunately, since I had no access to any of the peripherals, I had no idea what the format was. I threw the main application into Ghidra and found a function that had debug statements referencing "initPeripherals" and read a bunch of JSON keys out of the file, so I could simply look at the keys it referenced and write out a file based on that. I did so, and it didn't work - the app stubbornly refused to believe that there were any defined peripherals. The check that was failing was pcVar4 = strstr(local_50[0],PTR_s_"type":"_0007c980);, which made no sense, since I very definitely had a type key in there. And then I read it more closely. strstr() wasn't being asked to look for "type":, it was being asked to look for "type":". I'd left a space between the : and the opening " in the value, which meant it wasn't matching. The rest of the function seems to call an actual JSON parser, so I have no idea why it doesn't just use that for this part as well, but deleting the space and restarting the service meant it now believed I had a peripheral attached.
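To illustrate the failing check: strstr() matches an exact byte sequence, so the stray space defeats it. A quick Python equivalent of the comparison (just an illustration, not the device's code):

    # strstr() looks for an exact byte sequence; Python's "in" does the same.
    needle = '"type":"'
    print(needle in '{"type": "DoorChime"}')  # False: space between : and "
    print(needle in '{"type":"DoorChime"}')   # True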

The mobile app that's used for configuring the doorbell now showed a device in the peripherals tab, but it had a weird corrupted name. Tapping it resulted in an error telling me that the device was unavailable, and on the doorbell itself generated a log message showing it was trying to reach a device with the hostname bha-04f0212c5cca and (unsurprisingly) failing. The hostname was being generated from the MAC address field in the peripherals file and was presumably supposed to be resolved using mDNS, but for now I just threw a static entry in /etc/hosts pointing at my Home Assistant device. That was enough to show that when I opened the app the doorbell was trying to call a CGI script called peripherals.cgi on my fake chime. When that failed, it called out to the cloud API to ask it to ask the chime[1] instead. Since the cloud was completely unaware of my fake device, this didn't work either. I hacked together a simple server using Python's HTTPServer and was able to return data (another block of JSON). This got me to the point where the app would now let me get to the chime config, but would then immediately exit. adb logcat showed a traceback in the app caused by a failed assertion due to a missing key in the JSON, so I ran the app through jadx, found the assertion and from there figured out what keys I needed. Once that was done, the app opened the config page just fine.
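A minimal sketch of such a fake chime using Python's standard http.server; the request path check and the JSON body here are placeholders, since the actual keys had to be recovered from the app with jadx:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FakeChime(BaseHTTPRequestHandler):
        def do_GET(self):
            # The doorbell asks for a CGI script called peripherals.cgi;
            # the exact path prefix is an assumption here.
            if "peripherals.cgi" in self.path:
                body = b'{"placeholder": "keys recovered from the app go here"}'
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    # Port 80 needs root; the /etc/hosts entry makes the doorbell find us.
    HTTPServer(("0.0.0.0", 80), FakeChime).serve_forever()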

Unfortunately, though, I couldn't edit the config. Whenever I hit "save" the app would tell me that the peripheral wasn't responding. This was strange, since the doorbell wasn't even trying to hit my fake chime. It turned out that the app was making a CGI call to the doorbell, and the thread handling that call was segfaulting just after reading the peripheral config file. This suggested that the format of my JSON was probably wrong and that the doorbell was not handling that gracefully, but trying to figure out what the format should actually be didn't seem easy and none of my attempts improved things.

So, new approach. Rather than writing the config myself, why not let the doorbell do it? I should be able to use the genuine pairing process if I could mimic the chime sufficiently well. Hitting the "add" button in the app asked me for the username and password for the chime, so I typed in something random in the expected format (six characters followed by four zeroes) and a sufficiently long password and hit ok. A few seconds later it told me it couldn't find the device, which wasn't unexpected. What was a little more unexpected was that the log on the doorbell was showing it trying to hit another bha-prefixed hostname (and, obviously, failing). The hostname contains the MAC address, but I hadn't told the doorbell the MAC address of the chime, just its username. Some more digging showed that the doorbell was calling out to the cloud API, giving it the 6 character prefix from the username and getting a MAC address back. Doing the same myself revealed that there was a straightforward mapping from the prefix to the mac address - changing the final character from "a" to "b" incremented the MAC by one. It's actually just a base 26 encoding of the MAC, with aaaaaa translating to 00408C000000.
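A sketch of that mapping as described above: treat the six lowercase letters as a base-26 number added to the base address implied by aaaaaa (the base-address reading is my interpretation of the observed behaviour, not anything Doorbird documents):

    def prefix_to_mac(prefix):
        # Interpret the six lowercase characters as a base-26 number.
        value = 0
        for ch in prefix:
            value = value * 26 + (ord(ch) - ord("a"))
        # aaaaaa maps to 00408C000000, so treat that as the base address.
        return format(0x00408C000000 + value, "012x")

    print(prefix_to_mac("aaaaaa"))  # 00408c000000
    print(prefix_to_mac("aaaaab"))  # 00408c000001, one higher as observed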

That explained how the hostname was being generated, and in return I was able to work backwards to figure out which username I should use to generate the hostname I was already using. Attempting to add it now resulted in the doorbell making another CGI call to my fake chime in order to query its feature set, and by mocking that up as well I was able to send back a file containing X-Intercom-Type, X-Intercom-TypeId and X-Intercom-Class fields that made the doorbell happy. I now had a valid JSON file, which cleared up a couple of mysteries. The corrupt name was because the name field isn't supposed to be ASCII - it's base64 encoded UTF16-BE. And the reason I hadn't been able to figure out the JSON format correctly was because it looked something like this:

{"Peripherals":[]{"prefix":{"type":"DoorChime","name":"AEQAbwBvAHIAYwBoAGkAbQBlACAAVABlAHMAdA==","mac":"04f0212c5cca","user":"username","password":"password"}}]}


Note that there's a total of one [ in this file, but two ]s? Awesome. Anyway, I could now modify the config in the app and hit save, and the doorbell would then call out to my fake chime to push config to it. Weirdly, the association between the chime and a specific button on the doorbell is only stored on the chime, not on the doorbell. Further, hitting the doorbell didn't result in any more HTTP traffic to my fake chime. However, it did result in some broadcast UDP traffic being generated. Searching for the port number led me to the Doorbird LAN API and a complete description of the format and encryption mechanism in use. Argon2i is used to turn the first five characters of the chime's password (which is also stored on the doorbell itself) into a 256-bit key, and this is used with ChaCha20 to decrypt the payload. The payload then contains a six character field describing the device sending the event, and then another field describing the event itself. Some more scrappy Python and I could pick up these packets and decrypt them, which showed that they were being sent whenever any event occurred on the doorbell. This explained why there was no storage of the button/chime association on the doorbell itself - the doorbell sends packets for all events, and the chime is responsible for deciding whether to act on them or not.
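As a quick check on the name encoding, the base64 blob in the config above decodes cleanly as UTF-16BE:

    import base64

    encoded = "AEQAbwBvAHIAYwBoAGkAbQBlACAAVABlAHMAdA=="
    print(base64.b64decode(encoded).decode("utf-16-be"))  # Doorchime Test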

On closer examination, it turns out that these UDP event packets aren't just sent if there's a configured chime. One is sent for each configured user, avoiding the need for a cloud round trip if your phone is on the same network as the doorbell at the time. There was literally no need for me to mimic the chime at all; suitable events were already being sent.
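A rough sketch of a listener for these packets, assuming the argon2-cffi and PyNaCl libraries; the port number and field offsets below are one reading of the published Doorbird LAN API and should be checked against it:

    import socket

    import argon2.low_level
    import nacl.bindings

    PASSWORD = b"abcde"  # the first five characters of the device password

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 6524))  # event broadcast port per the LAN API docs

    while True:
        pkt, addr = sock.recvfrom(1024)
        # Encrypted event packets start with DE AD BE and version 0x01.
        if pkt[:3] != b"\xde\xad\xbe" or pkt[3] != 0x01:
            continue
        opslimit = int.from_bytes(pkt[4:8], "big")
        memlimit = int.from_bytes(pkt[8:12], "big")
        salt, nonce, ciphertext = pkt[12:28], pkt[28:36], pkt[36:]
        # Argon2i turns the short password into a 256-bit ChaCha20 key.
        key = argon2.low_level.hash_secret_raw(
            PASSWORD, salt, time_cost=opslimit,
            memory_cost=memlimit // 1024,  # libsodium uses bytes, argon2-cffi KiB
            parallelism=1, hash_len=32, type=argon2.low_level.Type.I)
        plain = nacl.bindings.crypto_aead_chacha20poly1305_decrypt(
            ciphertext, None, nonce, key)
        intercom_id = plain[0:6].decode()     # which device sent the event
        event = plain[6:14].decode().strip()  # the event itself
        print(addr[0], intercom_id, event)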

Still. There's a fair amount of WTFery here, ranging from the strstr()-based JSON parsing and the invalid JSON to the symmetric encryption that uses device passwords as the key (requiring the doorbell to be aware of the chime's password) and the use of only the first five characters of the password as input to the KDF. It doesn't give me a great deal of confidence in the rest of the device's security, so I'm going to keep playing.

[1] This seems to be to handle the case where the chime isn't on the same network as the doorbell

06 May, 2021 06:26AM

May 05, 2021

hackergotchi for Junichi Uekawa

Junichi Uekawa

Wrote a pomodoro timer in elisp.

Wrote a pomodoro timer in elisp. Why? Because I try to keep my workflow simple, and to keep the simplicity I sometimes need to re-implement stuff. No, this is a lame excuse: I have been living in emacs for the past week and felt like it. However, writing elisp has been challenging, maybe because I haven't done it for a while. I noticed there's lexical-binding, but I didn't quite get it; my lambda isn't getting the function parameter in scope.

05 May, 2021 12:29AM by Junichi Uekawa

May 04, 2021

hackergotchi for Steve Kemp

Steve Kemp

Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner.

I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years.

The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support.

That then became a pass-plugin:

  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/steve@steve.org.uk.OLD.gpg
  ..
  4 years, 8 months ago Domains/Domain.fi.gpg
  4 years, 7 months ago Mobile/dna.fi.gpg
  ..
  1 year, 3 months ago Websites/netlify.com.gpg
  1 year ago Financial/ukko.fi.gpg
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg
  ..

Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form, each entry looks like this:

   username: steve
   password: secrit
   site: http://example.com/login/blah/
   # Extra data

The keys vary, sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way.

Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:

     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"

That's ideal, because now I can source that from within a shell:

   $ source <(pass env internal/cli/tool-name)
   $ echo $username
   steve

Or I could directly execute the tool I want:

   $ pass env --exec=$HOME/ldap/ldap.py internal/cli/tool-name
   you are steve
   ..

TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ldap.py --username=steve --password=secrit")
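For illustration, the core transformation could look like this minimal Python sketch (the real plugin is presumably a shell extension; this just assumes entries decrypt to "key: value" lines):

    import shlex
    import subprocess

    def pass_env(entry):
        out = subprocess.run(["pass", "show", entry], capture_output=True,
                             text=True, check=True).stdout
        for line in out.splitlines():
            line = line.strip()
            if not line or line.startswith("#") or ":" not in line:
                continue  # skip comments and free-form lines
            key, _, value = line.partition(":")
            # shlex.quote so values containing spaces or quotes survive "source"
            print(f"export {key.strip()}={shlex.quote(value.strip())}")

    pass_env("internal/cli/tool-name")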

04 May, 2021 03:00PM

hackergotchi for Erich Schubert

Erich Schubert

Machine Learning Lecture Recordings

I have uploaded most of my “Machine Learning” lecture to YouTube.

The slides are in English, but the audio is in German.

Some very basic contents (e.g., a demo of standard k-means clustering) were left out of this advanced class, and instead only a link to recordings from an earlier class was given; in this class, I wanted to focus on the improved (accelerated) algorithms instead. Those earlier recordings are not included here (yet). I believe there are some contents covered in this class that you will find nowhere else (yet).

The first unit is pretty long (I did not split it further yet). The later units are shorter recordings.

ML F1: Principles in Machine Learning

ML F2/F3: Correlation does not Imply Causation & Multiple Testing Problem

ML F4: Overfitting – Überanpassung

ML F5: Fluch der Dimensionalität – Curse of Dimensionality

ML F6: Intrinsische Dimensionalität – Intrinsic Dimensionality

ML F7: Distanzfunktionen und Ähnlichkeitsfunktionen – Distance and Similarity Functions

ML L1: Einführung in die Klassifikation – Introduction to Classification

ML L2: Evaluation und Wahl von Klassifikatoren – Evaluation and Choice of Classifiers

ML L3: Bayes-Klassifikatoren – Bayes Classifiers

ML L4: Nächste-Nachbarn Klassifikation – Nearest-Neighbor Classification

ML L5: Nächste Nachbarn und Kerndichteschätzung – Nearest Neighbors and Kernel Density Estimation

ML L6: Lernen von Entscheidungsbäumen – Learning Decision Trees

ML L7: Splitkriterien bei Entscheidungsbäumen – Split Criteria for Decision Trees

ML L8: Ensembles and Meta-Learning: Random Forests and Gradient Boosting

ML L9: Support Vector Machines – Motivation

ML L10: Affine Hyperebenen und Skalarprodukte – Geometrie für SVMs (Affine Hyperplanes and Dot Products – Geometry for SVMs)

ML L11: Maximum Margin Hyperplane – die “breitest mögliche Straße” (the “widest possible street”)

ML L12: Training Support Vector Machines

ML L13: Non-linear SVM and the Kernel Trick

ML L14: SVM – Extensions and Conclusions

ML L15: Motivation of Neural Networks

ML L16: Threshold Logic Units

ML L17: General Artificial Neural Networks

ML L18: Learning Neural Networks with Backpropagation

ML L19: Deep Neural Networks

ML L20: Convolutional Neural Networks

ML L21: Recurrent Neural Networks and LSTM

ML L22: Conclusion Classification

ML U1: Einleitung Clusteranalyse – Introduction to Cluster Analysis

ML U2: Hierarchisches Clustering – Hierarchical Clustering

ML U3: Accelerating HAC with Anderberg’s Algorithm

ML U4: k-Means Clustering

ML U5: Accelerating k-Means Clustering

ML U6: Limitations of k-Means Clustering

ML U7: Extensions of k-Means Clustering

ML U8: Partitioning Around Medoids (k-Medoids)

ML U9: Gaussian Mixture Modeling (EM Clustering)

ML U10: Gaussian Mixture Modeling Demo

ML U11: BIRCH and BETULA Clustering

ML U12: Motivation Density-Based Clustering (DBSCAN)

ML U13: Density-reachable and density-connected (DBSCAN Clustering)

ML U14: DBSCAN Clustering

ML U15: Parameterization of DBSCAN

ML U16: Extensions and Variations of DBSCAN Clustering

ML U17: OPTICS Clustering

ML U18: Cluster Extraction from OPTICS Plots

ML U19: Understanding the OPTICS Cluster Order

ML U20: Spectral Clustering

ML U21: Biclustering and Subspace Clustering

ML U22: Further Clustering Approaches

04 May, 2021 01:18PM by Erich Schubert

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

NSF CAREER Award

In exciting professional news, it was recently announced that I got a National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research to describe how peer production communities tend to follow common patterns of growth and decline, as well as studies that show that many open communities become increasingly closed in ways that deter lots of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the work on the CAREER, I’ll be continuing that project with Aaron, and I’ll also be working to test that explanation empirically and to develop new strategies about what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others in the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our stuff out in ways that are useful as well doing a better job of listening and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is a direct result of the time I spent at the Center for Advanced Study in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.

04 May, 2021 02:29AM by Benjamin Mako Hill

May 03, 2021

Russell Coker

DNS, Lots of IPs, and Postal

I decided to start work on repeating the tests for my 2006 OSDC paper on Benchmarking Mail Relays [1] and discover how the last 15 years of hardware developments have changed things. There have been software changes in that time too, but nothing that compares with going from single core 32bit systems with less than 1G of RAM and 60G IDE disks to multi-core 64bit systems with 128G of RAM and SSDs. As an aside the hardware I used in 2006 wasn’t cutting edge and the hardware I’m using now isn’t either. In both cases it’s systems I bought second hand for under $1000. Pedants can think of this as comparing 2004 and 2018 hardware.

BIND

I decided to make some changes to reflect the increased hardware capacity and use 2560 domains and IP addresses, which gave the following errors as well as a startup time of a minute on a system with two E5-2620 CPUs.

May  2 16:38:37 server named[7372]: listening on IPv4 interface lo, 127.0.0.1#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.2.45#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.1#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.2#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.3#53
[...]
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.47.0#53
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.48.0#53
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.49.0#53
May  2 16:39:33 server named[7372]: listening on IPv6 interface lo, ::1#53
[...]
May  2 16:39:36 server named[7372]: zone localhost/IN: loaded serial 2
May  2 16:39:36 server named[7372]: all zones loaded
May  2 16:39:36 server named[7372]: running
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)
May  2 16:39:36 server named[7372]: managed-keys-zone: Unable to fetch DNSKEY set '.': not enough free resources
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)

The first thing I noticed is that a default configuration of BIND with 2560 local IPs (when just running in the default recursive mode) takes a minute to start and needs to open over 100,000 file handles. BIND also had some errors in that configuration which led to it not accepting shutdown requests. I filed Debian bug report #987927 [2] about this. One way of dealing with the errors in this situation on Debian is to edit /etc/default/named and put in the following line to allow BIND to open that many file handles:

OPTIONS="-u bind -S 150000"

But the best thing to do for BIND when there are many IP addresses that aren’t going to be used for DNS service is to put a directive like the following in the BIND configuration to specify the IP address or addresses that are used for the DNS service:

listen-on { 10.0.2.45; };

I have just added the listen-on and listen-on-v6 directives to one of my servers with about a dozen IP addresses. While 2560 IP addresses is an unusual corner case it’s not uncommon to have dozens of addresses on one system.

dig

When doing tests of Postfix for relaying mail I noticed that mail was being deferred with DNS problems (the error was “Host or domain name not found. Name service error for name=a838.example.com type=MX: Host not found, try again“). I tested the DNS lookups with dig, which failed with errors like the following:

dig -t mx a704.example.com
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument

; <<>> DiG 9.16.13-Debian <<>> -t mx a704.example.com
;; global options: +cmd
;; connection timed out; no servers could be reached

Here is a sample of the strace output from tracing dig:

bind(20, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
recvmsg(20, {msg_namelen=128}, 0)       = -1 EAGAIN (Resource temporarily unavailable)
write(4, "\24\0\0\0\375\377\377\377", 8) = 8
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.2.45")}, msg_namelen=16, msg_iov=[{iov_base="86\1 \0\1\0\0\0\0\0\1\4a704\7example\3com\0\0\17\0\1\0\0)\20\0\0\0\0\0\0\f\0\n\0\10's\367\265\16bx\354", iov_len=57}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: 10.0.2.45#53: Invalid argument", 45) = 45
write(2, "\n", 1)                       = 1
futex(0x7f5a80696084, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f5a80696010, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f5a8069809c, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f5a80698020, FUTEX_WAKE_PRIVATE, 1) = 1
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.2.45")}, msg_namelen=16, msg_iov=[{iov_base="86\1 \0\1\0\0\0\0\0\1\4a704\7example\3com\0\0\17\0\1\0\0)\20\0\0\0\0\0\0\f\0\n\0\10's\367\265\16bx\354", iov_len=57}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: 10.0.2.45#53: Invalid argument", 45) = 45
write(2, "\n", 1)

Ubuntu bug #1702726 claims that an insufficient ARP cache was the cause of dig problems [3]. At the time I encountered the dig problems I was seeing lots of kernel error messages “neighbour: arp_cache: neighbor table overflow” which I solved by putting the following in /etc/sysctl.d/mine.conf:

net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024

Making that change (and having rebooted because I didn’t need to run the server overnight) didn’t entirely solve the problems. I have seen some DNS errors from Postfix since then but they are less common than before. When they happened I didn’t have that error from dig. At this stage I’m not certain that the ARP change fixed the dig problem although it seems likely (it’s always difficult to be certain that you have solved a race condition instead of made it less common or just accidentally changed something else to conceal it). But it is clearly a good thing to have a large enough ARP cache so the above change is probably the right thing for most people (with the possibility of changing the numbers according to the required scale). Also people having that dig error should probably check their kernel message log, if the ARP cache isn’t the cause then some other kernel networking issue might be related.

Preliminary Results

With Postfix I’m seeing around 24,000 messages relayed per minute with more than 60% CPU time idle. I’m not sure exactly how to count idle time when there are 12 CPU cores and 24 hyper-threads, as having only 1 process scheduled for each pair of hyper-threads on a core is very different to having half the CPU cores unused. I ran my script to disable hyper-threads by telling the Linux kernel to disable each processor core that has the same core ID as another; it was buggy and disabled the second CPU altogether (better than finding this out on a production server). Going from 24 hyper-threads of 2 CPUs to 6 non-HT cores of a single CPU didn’t change the throughput, and the idle time went to about 30%, so I have possibly halved the CPU capacity for these tasks by disabling all hyper-threads and one entire CPU, which is surprising given that I theoretically reduced the CPU power by 75%. I think my focus now has to be on hyper-threading optimisation.
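For reference, here is a minimal sketch of sibling disabling that keys on thread_siblings_list rather than the core ID; core IDs repeat across sockets, which would explain a script matching on core ID alone taking out the whole second CPU:

    import glob

    seen = set()
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        cpu = path.rsplit("cpu", 1)[1]
        with open(path + "/topology/thread_siblings_list") as f:
            siblings = f.read().strip()
        if siblings in seen and cpu != "0":
            # Second hyper-thread of a core we already kept: offline it.
            with open(path + "/online", "w") as f:
                f.write("0")
        seen.add(siblings)

Run as root; cpu0 is skipped since it usually cannot be taken offline.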

Since 2006 the performance has gone from ~20 messages per minute on relatively commodity hardware to 24,000 messages per minute on server equipment that is uncommon for home use but which is also within range of home desktop PCs. I think that a typical desktop PC with a similar speed CPU, 32G of RAM and SSD storage would give the same performance. Moore’s Law (that transistor count doubles approximately every 2 years) is often misquoted as having performance double every 2 years. In this case more than 1024* the performance over 15 years means the performance doubled roughly every 18 months. Probably most of that is due to SATA SSDs massively outperforming IDE hard drives, but it’s still impressive.

Notes

I’ve been using example.com for test purposes for a long time, but RFC2606 specifies .test, .example, and .invalid as reserved top level domains for such things. On the next iteration I’ll change my scripts to use .test.

My current test setup has a KVM virtual machine running my bhm program to receive mail, which is taking between 20% and 50% of a CPU core in my tests so far. While that is happening the kvm process is reported as taking between 60% and 200% of a CPU core, so kvm takes as much as 4* the CPU of the guest due to the virtual networking overhead – even though I’m using the virtio-net-pci driver (the most efficient form of KVM networking for emulating a regular ethernet card). I’ve also seen this in production with a virtual machine running a Tor relay node.

I’ve fixed a bug where Postal would try to send the SMTP quit command after encountering a TCP error which would cause an infinite loop and SEGV.

03 May, 2021 04:54AM by etbe

Russ Allbery

Review: The Voyage of the Dawn Treader

Review: The Voyage of the Dawn Treader, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #3
Publisher: Collier Books
Copyright: 1952
Printing: 1978
ISBN: 0-02-044260-2
Format: Mass market
Pages: 216

There was a boy named Eustace Clarence Scrubb and he almost deserved it.

The Voyage of the Dawn Treader is the third Narnia book in original publication order (see my review of The Lion, the Witch and the Wardrobe for more about reading order). You could arguably start reading here; there are a lot of references to the previous books, but mostly as background material, and I don't think any of it is vital. If you wanted to sample a single Narnia book to see if you'd get along with the series, this is the one I'd recommend.

Since I was a kid, The Voyage of the Dawn Treader has held the spot of my favorite of the series. I'm happy to report that it still holds up. Apart from one bit that didn't age well (more on that below), this is the book where the story and the world-building come together, in part because Lewis picks a plot shape that works with what he wants to write about.

The younger two Pevensie children, Edmund and Lucy, are spending the summer with Uncle Harold and Aunt Alberta because their parents are in America. That means spending the summer with their cousin Eustace. C.S. Lewis had strong opinions about child-raising that crop up here and there in his books, and Harold and Alberta are his example of everything he dislikes: caricatured progressive, "scientific" parents who don't believe in fiction or mess or vices. Eustace therefore starts the book as a terror, a whiny bully who has only read boring practical books and is constantly scoffing at the Pevensies and making fun of their stories of Narnia. He is therefore entirely unprepared when the painting of a ship in the guest bedroom turns into a portal to Narnia and dumps the three children into the middle of the ocean.

Thankfully, they're in the middle of the ocean near the ship in the painting. That ship is the Dawn Treader, and onboard is Caspian from the previous book, now king of Narnia. He has (improbably) sorted things out in his kingdom and is now on a sea voyage to find seven honorable Telmarine lords who left Narnia while his uncle was usurping the throne. They're already days away from land, headed towards the Lone Islands and, beyond that, into uncharted seas.

MAJOR SPOILERS BELOW.

Obviously, Eustace gets a redemption arc, which is roughly the first half of this book. It's not a bad arc, but I am always happy when it's over. Lewis tries so hard to make Eustace insufferable that it becomes tedious. As an indoor kid who would not consider being dumped on a primitive sailing ship to be a grand adventure, I wanted to have more sympathy for him than the book would allow.

The other problem with Eustace's initial character is that Lewis wants it to stem from "modern" parenting and not reading the right sort of books, but I don't buy it. I've known kids whose parents didn't believe in fiction, and they didn't act anything like this (and kids pick up a lot more via osmosis regardless of parenting than Lewis seems to realize). What Eustace acts like instead is an entitled, arrogant rich kid who is used to the world revolving around him, and it's fascinating to me how Lewis ignores class to focus on educational philosophy.

The best part of Eustace's story is Reepicheep, which is just setup for Reepicheep becoming the best part of The Voyage of the Dawn Treader.

Reepicheep, the leader of Narnia's talking mice, first appears in Prince Caspian, but there he's mostly played for laughs: the absurdly brave and dashing mouse who rushes into every fight he sees. In this book, he comes into his own as the courage and occasionally the moral conscience of the party. Caspian wants to explore and to find the lords of his past, the Pevensie kids want to have a sea adventure, and Eustace is in this book to have a redemption arc, but Reepicheep is the driving force at the heart of the voyage. He's going to Aslan's country beyond the sea, armed with a nursemaid's song about his destiny and a determination to be his best and most honorable self every step of the way, and nothing is going to stop him.

Eustace, of course, takes an immediate dislike to a talking rodent. Reepicheep, in return, is the least interested of anyone on the ship in tolerating Eustace's obnoxious behavior and would be quite happy to duel him. But when Eustace is turned into a dragon, Reepicheep is the one who spends hours with him, telling him stories and ensuring he's not alone. It's beautifully handled, and my only complaint is that Lewis doesn't do enough with the Eustace and Reepicheep friendship (or indeed with Eustace at all) for the rest of the book.

After Eustace's restoration and a few other relatively short incidents comes the second long section of the book and the part that didn't age well: the island of the Dufflepuds. It's a shame because the setup is wonderful: a cultivated island in the middle of nowhere with no one in sight, mysterious pounding sounds and voices, the fun of trying to figure out just what these invisible creatures could possibly be, and of course Lucy's foray into the second floor of a house, braving the lair of a magician to find and read one of the best books of magic in fantasy.

Everything about how Lewis sets this scene is so well done. The kids are coming from an encounter with a sea serpent and a horrifically dangerous magic island and land on this scene of eerily normal domesticity. The most dangerous excursion is Lucy going upstairs in a brightly lit house with soft carpet in the middle of the day. And yet it's incredibly tense because Lewis knows exactly how to put you in Lucy's head, right down to having to stand with her back to an open door to read the book.

And that book! The pages only turn forward, the spells are beautifully illustrated, and the sense of temptation is palpable. Lucy reading the eavesdropping spell is one of the more memorable bits in this series, at least for me, and makes a surprisingly subtle moral point about the practical reasons why invading other people's privacy is unwise and can just make you miserable. And then, when Lucy reads the visibility spell that was her goal, there's this exchange, which is pure C.S. Lewis:

"Oh Aslan," said she, "it was kind of you to come."

"I have been here all the time," said he, "but you have just made me visible."

"Aslan!" said Lucy almost a little reproachfully. "Don't make fun of me. As if anything I could do would make you visible!"

"It did," said Aslan. "Did you think I wouldn't obey my own rules?"

I love the subtlety of what's happening here: the way that Lucy is much more powerful than she thinks she is, but only because Aslan decided to make the rules that way and chooses to follow his own rules, making himself vulnerable in a fascinating way. The best part is that Lewis never belabors points like this; the characters immediately move on to talk about other things, and no one feels obligated to explain.

But, unfortunately, along with the explanation of the thumping and the magician, we learn that the Dufflepuds are (remarkably dim-witted) dwarfs, the magician is their guardian (put there by Aslan, no less!), he transformed them into rather absurd shapes that they hate, and all of this is played for laughs. Once you notice that these are sentient creatures being treated essentially like pets (and physically transformed against their will), the level of paternalistic colonialism going on here is very off-putting. It's even worse that the Dufflepuds are memorably funny (washing dishes before dinner to save time afterwards!) and are arguably too dim to manage on their own, because Lewis made the authorial choice to write them that way. The "white man's burden" feeling is very strong.

And Lewis could have made other choices! Coriakin the magician is a fascinating and somewhat morally ambiguous character. We learn later in the book that he's a star and his presence on the island is a punishment of sorts, leading to one of my other favorite bits of theology in this book:

"My son," said Ramandu, "it is not for you, a son of Adam, to know what faults a star can commit."

Lewis could have kept most of the setup, kept the delightfully silly things the Dufflepuds believe, changed who was responsible for their transformation, and given Coriakin a less authoritarian role, and the story would have been so much stronger for it.

After this, the story gets stranger and wilder, and it's in the last part that I think the true magic of this book lies. The entirety of The Voyage of the Dawn Treader is a progression from a relatively mundane sea voyage to something more awe-inspiring. The last few chapters are a tour de force of wonder: rejuvenating stars, sunbirds, the Witch's stone knife, undersea kingdoms, a sea of lilies, a wall of water, the cliffs of Aslan's country, and the literal end of the world. Lewis does it without much conflict, with sparse description in a very few pages, and with beautifully memorable touches like the quality of the light and the hush that falls over the ship.

This is the part of Narnia that I point to and wonder why I don't see more emulation (although I should note that it is arguably an immram). Tolkien-style fantasy, with dwarfs and elves and magic rings and great battles, is everywhere, but I can't think of many examples of this sense of awe and discovery without great battles and detailed explanations. Or of characters like Reepicheep, who gets one of the best lines of the series:

"My own plans are made. While I can, I sail east in the Dawn Treader. When she fails me, I paddle east in my coracle. When she sinks, I shall swim east with my four paws. And when I can swim no longer, if I have not reached Aslan's country, or shot over the edge of the world in some vast cataract, I shall sink with my nose to the sunrise and Peepiceek shall be the head of the talking mice in Narnia."

The last section of The Voyage of the Dawn Treader is one of my favorite endings of any book precisely because it's so different than the typical ending of a novel. The final return to England is always a bit disappointing in this series, but it's very short and is preceded by so much wonder that I don't mind. Aslan does appear to the kids as a lamb at the very end of the world, making Lewis's intended Christian context a bit more obvious, but even that isn't belabored, just left there for those who recognize the symbolism to notice.

I was curious during this re-read to understand why The Voyage of the Dawn Treader is so much better than the first two books in the series. I think it's primarily due to two things: pacing, and a story structure that's better aligned with what Lewis wants to write about.

For pacing, both The Lion, the Witch and the Wardrobe and Prince Caspian have surprisingly long setups for short books. In The Voyage of the Dawn Treader, by contrast, it takes only 35 pages to get the kids in Narnia, introduce all the characters, tour the ship, learn why Caspian is off on a sea voyage, establish where this book fits in the Narnian timeline, and have the kids be captured by slavers. None of the Narnia books are exactly slow, but Dawn Treader is the first book of the series that feels like it knows exactly where it's going and isn't wasting time getting there.

The other structural success of this book is that it's a semi-episodic adventure, which means Lewis can stop trying to write about battles and political changes whose details he's clearly not interested in and instead focus wholeheartedly on sense-of-wonder exploration. The island-hopping structure lets Lewis play with ideas and drop them before they wear out their welcome. And the lack of major historical events also means that Aslan doesn't have to come in to resolve everything and instead can play the role of guardian angel.

I think The Voyage of the Dawn Treader has the most compelling portrayal of Aslan in the series. He doesn't make decisions for the kids or tell them directly what to do the way he did in the previous two books. Instead, he shows up whenever they're about to make a dreadful mistake and does just enough to get them to make a better decision. Some readers may find this takes too much of the tension out of the book, but I have always appreciated it. It lets nervous child readers enjoy the adventures while knowing that Aslan will keep anything too bad from happening. He plays the role of a protective but non-interfering parent in a genre that usually doesn't have parents because they would intervene to prevent adventures.

I enjoyed this book just as much as I remembered enjoying it during my childhood re-reads. Still the best book of the series.

This, as with both The Lion, the Witch and the Wardrobe and Prince Caspian, was originally intended to be the last book of the series. That, of course, turned out to not be the case, and The Voyage of the Dawn Treader is followed (in both chronological and original publication order) by The Silver Chair.

Rating: 9 out of 10

03 May, 2021 03:03AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

First email from my new machine.

First email from my new machine. I didn't have my desktop Debian machine for a long time, and now I have one set up. I rewrote my procmail/formail recipe, especially the part where I had written a complex shell script to generate my folder rules. I rewrote those 4 lines of shell script in 200 lines of Go, with unit tests. The part that took the longest time was finding out how to write unit tests in Go, and how to properly use go.mod to be able to import packages from subdirectories. I guess that's part of the fun.

03 May, 2021 12:07AM by Junichi Uekawa

May 02, 2021

hackergotchi for Santiago García Mantiñán

Santiago García Mantiñán

Windows and Linux software Raid dual boot BIOS machine

One could think that nowadays having a machine with software raid doing dual boot should be easy, but... my experience showed that it is not that easy.

Having a Windows machine do software raid is easy (I still don't understand why it doesn't really work like it should, but that is because I'm used to Linux software raid), and having software raid on Linux is also really easy. But doing so on a BIOS booted machine, on mbr disks (as Windows doesn't allow GPT on BIOS) is quite a pain.

The problem is how Windows does all this, with its dynamic disks. What happens is that you go from a partitioning like this:

   /dev/sda1 *         2048    206847    204800   100M  7 HPFS/NTFS/exFAT
   /dev/sda2         206848 312580095 312373248   149G  7 HPFS/NTFS/exFAT
   /dev/sda3      312580096 313165823    585728   286M 83 Linux
   /dev/sda4      313165824 957698047 644532224 307,3G fd Linux raid autodetect

To something like this:

   /dev/sda1             63      2047      1985 992,5K 42 SFS
   /dev/sda2 *         2048    206847    204800   100M 42 SFS
   /dev/sda3         206848 312580095 312373248   149G 42 SFS
   /dev/sda4      312580096 976769006 664188911 316,7G 42 SFS

These are the physical partitions as seen by fdisk; logical partitions are still like before, of course, so there is no problem accessing them under Linux or Windows. But what happens here is that Windows is using the first sectors for its dynamic disks stuff, so... you cannot use those to write grub info there :-(

So... the solution I found here was to install Debian's mbr and make it boot grub, but then... where do I store grub's info? Well, to do this I'm using a btrfs /boot which is on partition 3, as btrfs has room for embedding grub's info, and I set up the software raid with ext4 on partition 4, like you can see in my first partition dump. Of course, you can have just btrfs with its own software raid; then you don't need the fourth partition or anything.

There are however some caveats in doing all this. What I found was that I had to install grub manually using grub-install --no-floppy on /dev/sda3 and /dev/sdb3, as Debian's grub refused to give me the option to install there. Several warnings came as a result, but things work ok anyway.

One more warning: I did all this on Buster, but it looks like Grub 2.04, which is included in Bullseye, has gotten a bit bigger, so at least on my partitions there was no room for it and I had to leave the old Buster grub around for now. If anybody has any ideas on how to solve this... they are welcome.

02 May, 2021 10:51PM by Santiago García Mantiñán (noreply@blogger.com)

May 01, 2021

Ingo Juergensmann

The Fediverse – What About Resources?

Today is May 1st. In about two weeks, on May 15th, WhatsApp will put its changed Terms of Service into action, and if you don’t accept the new rules you won’t be able to use WhatsApp any longer.

Early this year there was already a strong movement away from WhatsApp towards other solutions. Mainly to Signal, but also some other services like the Fediverse gained some new users. And also XMPP got their fair share of new users.

So, what to do about the WhatsApp ToS change then? Shall we all go to Signal? Surely not. Signal is another vendor lock-in silo. It’s centralistic, and recent development plans want to implement some crypto payment system. Even Bruce Schneier thinks that this is a bad idea.

Other alternatives often named include Matrix/Element or XMPP. Today, Don di Dislessia in the (German) Fediverse asked about the power consumption of the Fediverse, incl. Matrix and XMPP, and how much renewable energy is being used. Of course there is no easy answer to this question, but I tried my best – at least for my own server.

Here are my findings and conclusions…

Power

[Screenshot: power consumption of the server]

Currently my server in the colocation is using about 93W on average, with a 6-core Xeon E5-2630L, 128 GB RAM, 4x 2 TB WD Red + 1 Samsung 960pro NVMe. The server is 7 years old. When I started with that server the power consumption was about 75W, but back then there were far fewer users on the server. So, about 20W more over the years…

Users

I’m running my Friendica node on Nerdica.net since 2013. Over the years it became one of the largest Friendica servers in the Fediverse, for some time it was the largest one. It has currently like 700 total users and 180 monthly active users. My Mastodon instance on Nerdculture.de has about 1000 total users and about 300 monthly active users.

Since last year I also run a Matrix-Synapse server. Although I invited my family, I’m in fact the only active user on that server, though I have joined some channels.

My XMPP server is even older than my Friendica node. For a long time I had maybe 20 users. Since I set up a new website and added some domains like hookipa.net and xmpp.social, the user count increased; currently I have about 130 users on those two domains and maybe 50 monthly active users. Also note that all my Friendica and Mastodon users can use XMPP with their accounts, but they won’t be counted the same way as “native” users on ejabberd, because the auth backend is different.

So, let’s assume I do have like 2000 total users and 500 monthly active users.

CPU, Database Sizes and Disk I/O

Let’s have a look about how many resources are being used by those users.

Database Sizes:

  • Friendica (MariaDB): 31 GB for 700 users
  • Mastodon (PostgreSQL): 15 GB for 1000 users
  • Matrix-Synapse (PostgreSQL): 5 GB for 1 user
  • XMPP (PostgreSQL): 0.4 GB for 200 users

CPU times according to xentop:

  • Webserver VM (Matrix, Friendica & Mastodon): 13410130 s / 130%
  • XMPP VM: 944275 s / 5.4%

Friendica does use the largest database and causes the most disk I/O on the NVMe, but it’s difficult to differentiate the load between the web apps on the webserver. So, let’s have a quick look at a simple metric:

Number of lines in webserver logfile:

  • Friendica: 11575 lines
  • Matrix: 8174 lines
  • Mastodon: 3212 lines

These metrics correlate to some degree with the database I/O load, at least for Friendica. If you take into account the number of users, things look quite different.
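Normalised per user, the database sizes above work out to roughly 45 MB per user for Friendica (31 GB / 700), 15 MB per user for Mastodon (15 GB / 1000), 2 MB per user for XMPP (0.4 GB / 200), and a full 5 GB for Matrix-Synapse’s single active user.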

Conclusion

Overall, my personal impression is that Matrix is really bad in regard to resource usage. Given that I’m the only active user, it uses an exceptional amount of resources. When you also consider that Matrix uses a distributed database for its chat rooms, you can assume that this resource usage is multiplied across the network, making things even worse.

Friendica is using a large database and many disk accesses, but has a fairly large user base, so it seems ok, but of course should be improved.

Mastodon seems to be quite good, considering the database size, the number of log lines and the user count.

XMPP turns out to be the most efficient contestant in this comparison: it uses much less CPU cycles and database disk I/O.

Of course, Mastodon/Friendica are different services than XMPP or Matrix. So, coming back to the initial question about alternatives to WhatsApp, the answer for me is: you should prefer XMPP over Matrix, if only to save resources and thus reduce power consumption. Less power consumption also means a smaller ecological footprint and fewer CO2 emissions for your communication with your family and friends.

XMPP is surely not the perfect replacement for WhatsApp, but I think it is the best thing to recommend. As said above, I don’t think that Signal is a viable option. It’s just another proprietary silo with all the problems that come with it. Matrix is a resource hog and not a messenger but an MS Teams replacement. Element as the main Matrix client is laggy and not multi-account/multi-server capable. Other Matrix clients do support multiple accounts but are not as feature-complete as Element. In the end the Matrix ecosystem will suffer from the same issues as XMPP did a decade ago. But XMPP has learned to deal with them.

Also, XMPP has been progressing fast in the last few years, and it has solved many of the problems people are still complaining about. Sure, there are still some open issues. The situation on iOS is still not as good as on Android with Conversations, but it is fairly close.

There are many efforts to improve XMPP. There is Quicksy IM, a service that uses your phone number as Jabber ID/JID and is thus comparable to Signal, which also uses phone numbers as unique identifiers; but Quicksy is compatible with XMPP standards. Snikket is another new XMPP ecosystem, aiming at smaller groups hosting their own server by simply installing a Docker container and setting up some basic SRV records in the DNS. Or there is Mailcow, a Docker based mailserver setup that added an XMPP server to its setup as well, so you can have the same mail and XMPP address. Snikket even got EU based funding for implementing XMPP account portability, which will improve decentralization even further. Additionally, XMPP helps with vaccination in Canada and the USA through vaxbot by Monal.

Be smart and use ecofriendly infrastructure.

01 May, 2021 03:50PM by ij

Petter Reinholdtsen

VLC bittorrent plugin in Bullseye, saved by the bell?

Yesterday morning I got a warning call from the Debian quality control system that the VLC bittorrent plugin was due to be removed because of a release critical bug in one of its dependencies. As you might remember, this plugin makes VLC able to stream videos directly from a bittorrent source using both torrent files and magnet links, similar to using an HTTP source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the almost 7 million movies in the Internet Archive.

The dependency was the unmaintained libtorrent-rasterbar package, and the bug in question blocked its python library from working properly. As I did not want Bullseye to release without bittorrent support in VLC, I set out to check out the status, and track down a fix for the problem. Luckily the issue had already been identified and fixed upstream, providing everything needed. All I needed to do was to fetch the Debian git repository, extract and trim the patch from upstream and apply it to the Debian package for upload.

The fixed library was uploaded yesterday evening. But that is not enough to get it into Bullseye, as Debian is currently in package freeze to prepare for the next stable release. Only non-critical packages with autopkgtest setup included, in other words able to validate automatically that the package is working, are allowed to migrate automatically into the next release at this stage. And the unmaintained libtorrent-rasterbar lacks such testing, and thus needed a manual override. I am happy to report that such a manual override was approved a few minutes ago, thus significantly increasing the chance of VLC bittorrent streaming being available out of the box also for Debian/Bullseye users. A bit too close a shave for my liking, as the Bullseye release is most likely just a few days away, and this did feel like the package was saved by the bell. I am so glad the warning email showed up in time for me to handle the issue, and a big thanks go to the Debian Release team for the quick feedback on #debian-release and their swift unblocking.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

01 May, 2021 09:00AM

April 30, 2021

hackergotchi for Junichi Uekawa

Junichi Uekawa

May.

May. Told my son about the months in English. The numbers are straightforward, but he couldn't remember what the other ones are. He was amused when I told him Septem is seven in Latin, and September is the ninth month. Octo, novem, decem are similar.

30 April, 2021 11:51PM by Junichi Uekawa

Paul Wise

FLOSS Activities April 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian: restart service killed by OOM killer, revert mirror redirection
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The flower/sptag work was sponsored by my employer. All other work was done on a volunteer basis.

30 April, 2021 08:34PM

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in April 2021

Here is my monthly update covering what I have been doing in the free software world during April 2021 (previous month):

§

Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Updated the main Reproducible Builds website and documentation:

    • Highlight our mailing list on the Contribute page. [...]
    • Add a noun (and drop an unnecessary full-stop) on the landing page. [...][...]
    • Correct a reference to the date metadata attribute on reports, restoring the display of months on the homepage. [...]
    • Correct a typo of "instalment" within a previous news entry. [...]
    • Added a conspicuous "draft" banner to unpublished blog posts in order to match the report draft banner. [...]
  • I also made the following changes to diffoscope, including uploading versions 172 and 173 to Debian:

    • Add support for showing annotations in PDF files. (#249)
    • Move to the assert_diff helper in test_pdf.py. [...]


§

Debian

  • redis (5:6.2.2-1) (to experimental) — New upstream release.

  • python-django:

    • 2.2.20-1 — New upstream security release.
    • 3.2-1 (to experimental) — New major upstream release (release notes).
  • hiredis (1.0.0-2) — Build with SSL/TLS support (#987114), and overhaul various aspects of the packaging.

  • mtools (4.0.27-1) — New upstream release.


Debian Long Term Support (LTS)

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project:

  • Investigated and triaged avahi (CVE-2021-3468), exiv2 (CVE-2021-3482), file-roller (CVE-2020-36314), fluidsynth (CVE-2021-28421), gnuchess (CVE-2021-30184), gpac (CVE-2021-28300), imagemagick (CVE-2021-20309, CVE-2021-20243), ircii (CVE-2021-29376), jetty9 (CVE-2021-28163), libcaca (CVE-2021-30498, CVE-2021-30499), libjs-handlebars, libpano13, libpodofo (CVE-2021-30469, CVE-2021-30470, CVE-2021-30471, CVE-2021-30472), mediawiki, mpv (CVE-2021-30145), nettle (CVE-2021-20305), nginx (CVE-2020-36309), nim (CVE-2021-21372, CVE-2021-21373, CVE-2021-21374), node-glob-parent (CVE-2020-28469), openexr (CVE-2021-3474), python-django-registration (CVE-2021-21416), qt4-x11 (CVE-2021-3481), qtsvg-opensource-src (CVE-2021-3481), ruby-kramdown, scrollz (CVE-2021-29376), syncthing (CVE-2021-21404), thunderbird (CVE-2021-23991, CVE-2021-23992, CVE-2021-23993) & wordpress (CVE-2021-29447).

  • Issued DLA 2620-1 to address a cross-site scripting (XSS) vulnerability in python-bleach, a whitelist-based HTML sanitisation library.

  • Issued DLA 2622-1 and ELA 402-1 as it was discovered that there was a potential directory traversal issue in Django, the popular Python-based web development framework. The vulnerability could have been exploited by maliciously crafted filenames. However, the upload handlers built into Django itself were not affected. (#986447)

  • Jan-Niklas Sohn discovered that there was an input validation failure in the X.Org display server. Insufficient checks on the lengths of the XInput extension's ChangeFeedbackControl request could have led to out of bounds memory accesses in the X server. These issues could have led to privilege escalation for authorised clients, particularly on systems where the X server is running as a privileged user. I, therefore, issued both DLA 2627-1 and ELA 405-1 to address this problem.

  • Frontdesk duties, reviewing others' packages, participating in mailing list discussions, etc., as well as attending our monthly meeting.

You can find out more about the project via the following video:

30 April, 2021 05:24PM

hackergotchi for Bastian Venthur

Bastian Venthur

Getting the Function keys of a Keychron working on Linux

Having destroyed the third Cherry Stream keyboard in 4 years, I wanted to try a more substantial keyboard for a change. After some research I decided that I want a mechanical, wired, tenkeyless keyboard without any fancy LEDs.

At the end I settled for a Keychron C1 with red switches. It meets all requirements, looks very nice and the price is reasonable.

Surprise!

After the keyboard was delivered, I connected it to my Debian machine and was unpleasantly surprised to notice that the Function keys did not work at all. The keyboard shares the Multimedia keys with the F-keys, and you have an fn key that supposedly switches between the two modes, like you’re used to on a laptop. On Linux, however, you cannot access the F-keys at all: pressing fn + F1 or F1 makes no difference, you’ll always get the Multimedia key. Switching the keyboard between “Windows” and “Mac” mode makes no difference either; in both modes the F-keys are not accessible.

Apparently Keychron is aware of the problem, because the quick start manual tells you:

“We have a Linux user group on facebook. Please search “Keychron Linux Group” on facebook. So you can better experience with our keyboard.”

Customer support at its finest!

So at this point you should probably just send the keyboard back, get the refund and buy a proper one with functioning F-keys.

The fix

Test whether this command fixes the issue and enables the fn + F-key combos:

# as root:
echo 2 > /sys/module/hid_apple/parameters/fnmode
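Note that if you are not already in a root shell, prefixing the echo with sudo will not help, because the redirection is performed by your own unprivileged shell; piping through tee works instead:

echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode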

Depending on the mode the keyboard is in, you should now be able to use the F-keys by simply pressing them, and the Multimedia keys by pressing fn + F-key (or the other way round). To switch the default mode of the F-keys to Function- or Multimedia-mode, press and hold fn + X + L for 4 seconds.

If everything works as expected, you can make the change permanent by creating the file /etc/modprobe.d/hid_apple.conf and adding the line:

options hid_apple fnmode=2
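One extra hint that is not from the original post: if the hid_apple module happens to be loaded from the initramfs on your system, the new option only takes effect there after rebuilding it:

# as root:
update-initramfs -u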

The fix works regardless of whether the keyboard is in Windows- or Mac-mode, and that might hint at the problem: in both cases Linux thinks you’re connecting a Mac keyboard.

The rant

Although the fix was not very hard to find and apply, this experience still leaves a foul taste. I naively assumed that the problem of having a properly functioning keyboard that simply works when you plug it in had been thoroughly solved by 2021.

To make it worse, I assume Keychron must be aware of the problem, because the other Keychron models have the same issue! But instead of fixing it on their end, they forward you to a facebook “community” and expect you to somehow fix it yourself.

So dear Keychron, you do make really beautiful keyboards! But before you release your next model with the same bug, maybe invest a bit in fixing the basics? I see that your keyboards support firmware updates for Windows and Mac – maybe you can talk to the folks over at the Linux Vendor Firmware Service and support Linux as well? Maybe you can even fix this annoying bug for the keyboards you’ve already sold? I found it really cute that you sent different keycaps for Windows and Mac setups – a few disappointed Linux users might accept an apology in the form of a Linux cap…

30 April, 2021 03:30PM by Bastian Venthur

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, March 2021

A Debian LTS logo

Like every month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In March, we put aside 3225 EUR to fund Debian projects, but sadly nobody picked up anything. This is one of the many reasons why Raphaël posted a series of blog posts titled “Challenging times for Freexian”, in 4 parts over the last two days of March and the first two of April. [Part one, two, three and four]

So we’re still looking forward to receiving more project proposals from the various Debian teams! Learn more about the rationale behind this initiative in this article!

Debian LTS contributors

In March, 11 contributors were paid to work on Debian LTS; their reports are available:

Evolution of the situation

In March we released 28 DLAs and held our second LTS team meeting for 2021 on IRC, with the next public IRC meeting coming up at the end of May.

At that meeting Holger announced that, after 2.5 years, he wanted to step back from his role helping Raphaël in coordinating/managing the LTS team. We would like to thank Holger for his continuous work on Debian LTS (which goes back to 2014) and are happy to report that we have already found a successor, whom we will introduce in the upcoming April report from Freexian.

Finally, we would like to remark once again that we are constantly looking for new contributors. For one last time, please contact Holger if you are interested!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 28 packages needing an update.

We are also pleased to report that we got 4 new sponsors over the last 2 months: thanks to sipgate GmbH, OVH US LLC, Tilburg University and Observatoire des Sciences de l’Univers de Grenoble!

Thanks to our sponsors

Sponsors that joined recently are in bold.

30 April, 2021 02:22PM by Raphaël Hertzog

Reproducible Builds (diffoscope)

diffoscope 173 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 173. This version includes the following changes:

[ Chris Lamb ]
* Add support for showing annotations in PDF files.
  (Closes: reproducible-builds/diffoscope#249)
* Move to assert_diff in test_pdf.py.

[ Zachary T Welch ]
* Difference.__init__: Demote unified_diff argument to a Python "kwarg".

You can find out more by visiting the project homepage.

30 April, 2021 12:00AM

April 29, 2021

Anton Gladky

2021/04, FLOSS activity

LTS

This is my second month of working for LTS. I was assigned 12 hrs and worked all of them.

Released DLAs

  1. DLA 2619-1 python3.5_3.5.3-1+deb9u4

    CVE-2021-23336 introduced an API change (illustrated after this list). It was a hard decision to upload this fix, because it can potentially break users’ code if that code uses a semicolon as a separator. The only other option was not to fix it at all and leave the security issue open, which would not have been a better solution.

    I also fixed the failing autopkgtest, which had been introduced in one of the latest CVE fixes. The CI pipelines on salsa.d.o now help to detect such mistakes.

  2. DLA 2628-1 python2.7_2.7.13-2+deb9u5

    CVE-2021-23336 introduced the same API change as for python3.5, but the backporting was much harder because porting from Python 3 to Python 2 is not always easy.
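To illustrate the API change behind CVE-2021-23336 (my own example, not taken from the advisory): CPython's urllib.parse query parsing stopped treating ";" as a pair separator by default and gained an explicit separator argument, so code relying on the old behaviour must now opt in:

from urllib.parse import parse_qsl

# with the CVE-2021-23336 fix, only "&" separates pairs by default
print(parse_qsl("a=1;b=2"))                 # [('a', '1;b=2')]
# the old behaviour needs an explicit opt-in
print(parse_qsl("a=1;b=2", separator=";"))  # [('a', '1'), ('b', '2')]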

CI-pipelines

I try to set up CI pipelines on salsa.d.o for all the LTS packages I touch. Setting up the pipelines for python3.5 and python2.7 was much harder than for other packages, due to failing autopkgtests and some other issues. Though it takes more time at the beginning, I believe it improves package quality.

LTS-Meeting

I attended the Debian LTS team Jitsi-meeting.

29 April, 2021 06:00PM

hackergotchi for Norbert Preining

Norbert Preining

In memoriam of Areeb Jamal

We lost one of our friends and core developers of the FOSSASIA community. An extremely sad day.

We miss you.

29 April, 2021 01:13AM by Norbert Preining

April 28, 2021

Antoine Beaupré

Building a status page service with Hugo

The Tor Project now has a status page which shows the state of our major services.

You can check status.torproject.org for news about major outages in Tor services, including v3 and v2 onion services, directory authorities, our website (torproject.org), and the check.torproject.org tool. The status page also displays outages related to Tor internal services, like our GitLab instance.

This post documents why we launched status.torproject.org, how the service was built, and how it works.

Why a status page

The first step in setting up a service page was to realize we needed one in the first place. I surveyed internal users at the end of 2020 to see what could be improved, and one of the suggestions that came up was to "document downtimes of one hour or longer" and generally improve communications around monitoring. The latter is still on the sysadmin roadmap, but a status page seemed like a good solution for the former.

We already have two monitoring tools in the sysadmin team: Icinga (a fork of Nagios) and Prometheus, with Grafana dashboards. But those are hard to understand for users. Worse, they also tend to generate false positives, and don't clearly show users which issues are critical.

In the end, a manually curated dashboard provides important usability benefits over an automated system, and all major organisations have one.

Picking the right tool

It wasn't my first foray into status page design. In another life, I had set up a status page using a tool called Cachet. That was already a great improvement over the previous solutions, which were to use first a wiki and then a blog to post updates. But Cachet is a complex Laravel app, which also requires a web browser to update. It generally requires more maintenance than we'd like, needing silly things like a SQL database and a PHP web server.

So when I found cstate, I was pretty excited. It's basically a theme for the Hugo static site generator, which means that it's a set of HTML, CSS, and a sprinkle of JavaScript. And being based on Hugo means that the site is generated from a set of Markdown files and the result is just plain HTML that can be hosted on any web server on the planet.
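To give an idea of the format: an incident in cstate is a single Markdown file with some YAML front matter, roughly like this (sketched from cstate's documented conventions, so the exact field names may vary between versions):

---
title: GitLab outage
date: 2021-04-20 12:00:00
resolved: true
resolvedWhen: 2021-04-20 14:30:00
severity: down
affected:
  - GitLab
section: issue
---

The GitLab instance was unreachable; rebooting the underlying VM fixed it.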

Deployment

At first, I wanted to deploy the site through GitLab CI, but at that time we didn't have GitLab pages set up. Even though we do have GitLab pages set up now, it's not (yet) integrated with our mirroring infrastructure. So, for now, the source is hosted and built in our legacy git and Jenkins services.

It is nice to have the content hosted in a git repository: sysadmins can just edit Markdown in the git repository and push to deploy changes, no web browser required. And it's trivial to set up a local environment to preview changes:

hugo serve --baseURL=http://localhost/
firefox http://localhost:1313/

Only the sysadmin team and gitolite administrators have access to the repository, at this stage, but that could be improved if necessary. Merge requests can also be issued on the GitLab repository and then pushed by authorized personnel later on, naturally.

Availability

One of the concerns I have is that the site is hosted inside our normal mirror infrastructure. Naturally, if an outage occurs there, the site goes down. But I figured it's a bridge we'll cross when we get there. Because it's so easy to build the site from scratch, it's actually trivial to host a copy of the site on any GitLab server, thanks to the .gitlab-ci.yml file shipped (but not currently used) in the repository. If push comes to shove, we can just publish the site elsewhere and point DNS there.

And, of course, if DNS fails us, then we're in trouble, but that's the situation anyway: we can always register a new domain name for the status page when we need to. It doesn't seem like a priority at the moment.

Comments and feedback are welcome!


This article was first published on the Tor Project Blog.

28 April, 2021 08:05PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

DeskPi Pro update

I wrote previously about my DeskPi Pro + 8GB Pi 4 setup. My main complaint at the time was the fact one of the forward facing USB ports broke off early on in my testing. For day to day use that hasn’t been a problem, but it did mar the whole experience. Last week I received an unexpected email telling me “The new updated PCB Board for your DeskPi order was shipped”. Apparently this was due to problems with identifying SSDs and WiFi/HDMI issues. I wasn’t quite sure how much of the internals they’d be replacing, so I was pleasantly surprised when it turned out to be most of them, including the PCB with the broken USB port on my device.

DeskPi Pro replacement PCB

They also provided a set of feet allowing for vertical mounting of the device, which was a nice touch.

The USB/SATA bridge chip in use has changed; the original was:

usb 2-1: New USB device found, idVendor=152d, idProduct=0562, bcdDevice= 1.09
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: RPi_SSD
usb 2-1: Manufacturer: 52Pi
usb 2-1: SerialNumber: DD5641988389F

and the new one is:

usb 2-1: New USB device found, idVendor=174c, idProduct=1153, bcdDevice= 0.01
usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
usb 2-1: Product: AS2115
usb 2-1: Manufacturer: ASMedia
usb 2-1: SerialNumber: 00000000000000000000

That’s a move from a JMicron 6Gb/s bridge to an ASMedia 3Gb/s bridge. It seems there are compatibility issues with the JMicron that mean the downgrade is the preferred choice. I haven’t retried the original SSD I wanted to use (that wasn’t detected), but I did wonder if this might have resolved that issue too.

Replacing the PCB was easier than the original install; everything was provided pre-assembled and I just had to unscrew the Pi4 and slot it out, then screw it into the new PCB assembly. Everything booted fine without the need for any configuration tweaks. Nice and dull. I’ve tried plugging things into the new USB ports and they seem ok so far as well.

However, I then ended up pulling in a new backports kernel from Debian (upgrading from 5.9 to 5.10), which resulted in a failure to boot. The kernel and initramfs were loaded fine, but no login prompt ever appeared. Some digging led to the discovery that a change in boot ordering meant USB was not being enabled. The solution is to add reset_raspberrypi to the /etc/initramfs-tools/modules file; that way the module is available in the initramfs, the appropriate pre-USB reset can happen, and everything works just fine again.
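Concretely, that is a one-line addition plus an initramfs rebuild (the rebuild step is implied rather than spelled out above, but it is what actually gets the module into the initramfs):

# as root:
echo reset_raspberrypi >> /etc/initramfs-tools/modules
update-initramfs -u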

The other niggle with the new kernel was a regular set of errors in the kernel log:

mmc1: Timeout waiting for hardware cmd interrupt.
mmc1: sdhci: ============ SDHCI REGISTER DUMP ===========

and a set of registers afterwards, roughly every 10s or so. This seems to be fallout from an increase in the core clock due to the VC4 driver now being enabled, the fact I have no SD card in the device, and the lack of a working card-detect line for the MicroSD slot. There’s a GitHub issue, but I solved it by removing the sdhci_iproc module for now; I’m not using the wifi, so loss of MMC isn’t a problem.
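For the record, one way to keep that module from loading at boot (a sketch, assuming nothing else on the system needs it) is a modprobe blacklist, followed by an initramfs rebuild so it also applies early in boot:

# as root:
echo "blacklist sdhci_iproc" > /etc/modprobe.d/no-sdhci-iproc.conf
update-initramfs -u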

Credit to DeskPi for how they handled this. I didn’t have to do anything and didn’t even realise anything was happening until I got the email with my tracking number and a description of what they were sending out in it. Delivery took less than a week. This is a great example of how to handle a product issue - no effort required on the part of the customer.

28 April, 2021 07:27PM

hackergotchi for Martin Michlmayr

Martin Michlmayr

Research on FOSS foundations

I worked on research on FOSS foundations and published two reports:

Growing Open Source Projects with a Stable Foundation

This primer covers non-technical aspects that the majority of projects will have to consider at some point. It also explains how FOSS foundations can help projects grow and succeed.

This primer explains:

  • What issues and areas to consider
  • How other projects and foundations have approached these topics
  • What FOSS foundations bring to the table
  • How to choose a FOSS foundation

You can download Growing Open Source Projects with a Stable Foundation.

Research report

The research report describes the findings of the research and aims to help understand the operations and challenges FOSS foundations face.

This report covers topics such as:

  • Role and activities of foundations
  • Challenges faced and gaps in the service offerings
  • Operational aspects, including reasons for starting an org and choice of jurisdiction
  • Trends, such as the "foundation in a foundation" model
  • Recommendations for different stakeholders

You can download the research report.

Acknowledgments

This research was sponsored by Ford Foundation and Alfred P. Sloan Foundation. The research was part of their Critical Digital Infrastructure Research initiative, which investigates the role of open source in digital infrastructure.

28 April, 2021 08:29AM by Martin Michlmayr

Russell Coker

Russ Allbery

Review: Beyond Shame

Review: Beyond Shame, by Kit Rocha

Series: Beyond #1
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 270

I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Noelle is a child of Eden, the rich and technologically powerful city of a post-apocalyptic world. As the daughter of a councilman, she had everything she wanted except the opportunity to feel. Eden's religious elite embrace a doctrine of strict Puritanism: Even hugging one's children was frowned upon, let alone anything related to sex. Noelle was too rebellious to settle for that, which is why this book opens with her banished from Eden, ejected into Sector Four. The sectors are the city slums, full of gangs and degenerates and violence, only a slight step up from the horrific farming communes. Luckily for her, she literally stumbles into one of the lieutenants of the O'Kane gang, who are just as violent as their reputations but who have surprising sympathy for a helpless city girl.

My shorthand distinction between romance and erotica is that romance mixes some sex into the plot and erotica mixes some plot into the sex. Beyond Shame is erotica, specifically BDSM erotica. The forbidden sensations that Noelle got kicked out of Eden for pursuing run strongly towards humiliation, which is tangled up in the shame she was taught to feel about anything sexual. There is a bit of a plot surrounding the O'Kanes who take her in, their leader, some political skulduggery that eventually involves people she knows, and some inter-sector gang warfare, but it's quite forgettable (and indeed I've already forgotten most of it). The point of the story is Noelle navigating a relationship with Jasper (among others) that involves a lot of very graphic sex.

I was of two minds about reviewing this. Erotica is tricky to review, since to an extent it's not trying to do what most books are doing. The point is less to tell a coherent story (although that can be a bonus) than it is to turn the reader on, and what turns the reader on is absurdly personal and unpredictable. Erotica is arguably more usefully marked with story codes (which in this case would be something like MF, MMFF, FF, Mdom, Fdom, bd, ds, rom, cons, exhib, humil, tattoos) so that the reader has an idea whether the scenarios in the story are the sort of thing they find hot.

This is particularly true of BDSM erotica, since the point is arousal from situations that wouldn't work or might be downright horrifying in a different sort of book. Often the forbidden or taboo nature of the scene is why it's erotic. For example, in another genre I would complain about the exaggerated and quite sexist gender roles, where all the men are hulking cage fighters who want to control the women, but in male-dominant BDSM erotica that's literally the point.

As you can tell, I wrote a review anyway, primarily because of how I came to read this book. Kit Rocha (which is a pseudonym for the writing team of Donna Herren and Bree Bridges) recently published Deal with the Devil, a book about mercenary librarians in a post-apocalyptic future. Like every right-thinking person, I immediately wanted to read a book about mercenary librarians, but discovered that it was set in an existing universe. I hate not starting at the beginning of things, so even though there was probably no need to read the earlier books first, I figured out Beyond Shame was the first in this universe and the bundle of the first three books was only $2.

If any of you are immediately hooked by mercenary librarians but are back-story completionists, now you know what you'll be getting into.

That said, there are a few notable things about this book other than it has a lot of sex. The pivot of the romantic relationship was more interesting and subtle than most erotica. Noelle desperately wants a man to do all sorts of forbidden things to her, but she starts the book unable to explain or analyze why she wants what she wants, and both Jasper and the story are uncomfortable with that and unwilling to leave it alone. Noelle builds up a more coherent theory of herself over the course of the book, and while it's one that's obviously designed to enable lots of erotic scenes, it's not a bad bit of character development.

Even better is Lex, the partner (sort of) of the leader of the O'Kane gang and by far the best character in the book. She takes Noelle under her wing from the start, and while that relationship is sexualized like nearly everything in this book, it also turns into an interesting female friendship that I would have also enjoyed in a different genre. I liked Lex a lot, and the fact she's the protagonist of the next book might keep me reading.

Beyond Shame also has a lot more female gaze descriptions of the men than is often the case in male-dominant BDSM. The eye candy is fairly evenly distributed, although the gender roles are very much not. It even passes the Bechdel test, although it is still erotica and nearly all the conversations end up being about sex partners or sex eventually.

I was less fond of the fact that the men are all dangerous and violent and the O'Kane leader frequently acts like a controlling, abusive psychopath. A lot of that was probably the BDSM setup, but it was not my thing. Be warned that this is the sort of book in which one of the (arguably) good guys tortures someone to death (albeit off camera).

Recommendations are next to impossible for erotica, so I won't try to give one. If you want to read the mercenary librarian novel and are dubious about this one, it sounds (although I can't confirm) like it's a bit more on the romance end of things and involves a lot fewer group orgies. Having read this book, I suspect it was entirely unnecessary to have done so for back-story. If you are looking for male-dominant BDSM, Beyond Shame is competently written, has a more thoughtful story than most, and has a female friendship that I fully enjoyed, which may raise it above the pack.

Rating: 6 out of 10

28 April, 2021 03:10AM

April 26, 2021

hackergotchi for Steve Kemp

Steve Kemp

Writing a text-based adventure game for CP/M

In my previous post I wrote about how I'd been running CP/M on a Z80-based single-board computer.

I've been slowly working my way through a bunch of text-based adventure games:

  • The Hitchhiker's Guide To The Galaxy
  • Zork 1
  • Zork 2
  • Zork 3

Along the way I remembered how much fun I used to have doing this in my early teens, and decided to write my own text-based adventure.

Since I'm not a masochist I figured I'd write something with only three or four locations, and solicited facebook for ideas. Shortly afterwards a "plot" was created and I started work.

I figured that the very last thing I wanted to be doing was to be parsing text-input with Z80 assembly language, so I hacked up a simple adventure game in C. I figured if I could get the design right that would ease the eventual port to assembly.

I realized pretty early that a table-driven approach would be the best way: using structures to contain the name, description, and function pointers appropriate to each object, for example. In my C implementation I have things that look like this:

{name: "generator",
 desc: "A small generator.",
 use: use_generator,
 use_carried: use_generator_carried,
 get_fn: get_generator,
 drop_fn: drop_generator},

A bit noisy, but simple enough. If an object cannot be picked up, or dropped, the corresponding entries are blank:

{name: "desk",
 desc: "",
 edesc: "The desk looks solid, but old."},

Here we see something special: there's no description, so the item isn't displayed when you enter a room or LOOK. Instead, the edesc (extended description) is shown when you type EXAMINE DESK.
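For completeness, the entries above are instances of a struct along these lines. This is my reconstruction: the field names match the snippets, but the exact prototypes in the real code are a guess:

/* One entry in the table of game objects. NULL function
   pointers mean the corresponding verb isn't supported. */
typedef struct object
{
    const char *name;           /* noun the parser matches */
    const char *desc;           /* shown on entering a room or LOOK; "" hides it */
    const char *edesc;          /* shown on EXAMINE <name> */
    void (*use)(void);          /* USE <name> when it is in the room */
    void (*use_carried)(void);  /* USE <name> when carried */
    void (*get_fn)(void);       /* GET <name>; NULL if it can't be taken */
    void (*drop_fn)(void);      /* DROP <name>; NULL if it can't be dropped */
} object;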

Anyway, over a couple of days I hacked up the C game, then started porting it to Z80 assembly. The implementation changed, the easter-eggs were different, but on the whole the two things are the same.

Certainly 99% of the text was recycled across the two implementations.

Anyway, in the unlikely event you've got a craving for a text-based adventure game, I present to you:

26 April, 2021 06:00PM

Vishal Gupta

Ramblings // On Sikkim and Backpacking

What I loved the most about Sikkim can’t be captured on cameras. It can’t be taped since it would be intrusive and it can’t be replicated because it’s unique and impromptu. It could be described, as I attempt to, but more importantly, it’s something that you simply have to experience to know.

Now I first heard about this from a friend who claimed he’d been offered free rides and Tropicanas by locals after finishing the Ladakh Marathon. And then I found Ronnie’s song, whose chorus goes: “Dil hai pahadi, thoda anadi. Par duniya ke maya mein phasta nahi” (My heart belongs to the mountains. Although a little childish, it doesn’t get hindered by materialism). While the song refers to his life in Manali, I think this holds true for most Himalayan states.

Maybe it’s the pleasant weather, the proximity to nature, the sense of safety from Indian Army being round the corner, independence from material pleasures that aren’t available in remote areas or the absence of the pollution, commercialisation, & cutthroat-ness of cities, I don’t know, there’s just something that makes people in the mountains a lot kinder, more generous, more open and just more alive.

Sikkimese people, are honestly some of the nicest people I’ve ever met. The blend of Lepchas, Bhutias and the humility and the truthfulness Buddhism ingrains in its disciples is one that’ll make you fall in love with Sikkim (assuming the views, the snow, the fab weather and food, leave you pining for more).

As a product of Indian parenting, I’ve always been taught to be wary of the unknown and to stick to the safer, more-travelled path but to be honest, I enjoy bonding with strangers. To me, each person is a storybook waiting to be flipped open with the right questions and the further I get from home, the wilder the stories get. Besides there’s something oddly magical about two arbitrary curvilinear lines briefly running parallel until they diverge to move on to their respective paths. And I think our society has been so busy drawing lines and spreading hate that we forget that in the end, we’re all just lines on the universe’s infinite canvas. So the next time you travel, and you’re in a taxi, a hostel, a bar, a supermarket, or on a long walk to a monastery (that you’re secretly wishing is open despite a lockdown), strike up a conversation with a stranger. Small-talk can go a long way.


Header icon made by Freepik from www.flaticon.com

26 April, 2021 11:11AM by Vishal Gupta

April 25, 2021

Dominique Dumont

An improved GUI for cme and Config::Model

I’ve finally found the time to improve the GUI of my pet project: cme (aka Config::Model).

Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of parameters, which means that the GUI displays all of them, so finding a specific parameter can be challenging:

To work around this problem, I added a Filter widget in 2018, which more or less did the job, but it suffered from several bugs that made its behavior confusing.

This is now fixed. The Filter widget is now working in a more consistent way.

In the example below, I’ve typed “IdentityFile” (1) in the Filter widget to show the IdentityFile used for various hosts (2):

Which is quite good, but some hosts use the default identity file, so no value shows up in the GUI. You can then click on the “hide empty value” checkbox to show only the hosts that use a specific identity file:

I hope that this new behavior of the Filter box will make this project more useful.

The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and in Debian/experimental. It will be uploaded to Debian/unstable once the next Debian version is out.

All the best

25 April, 2021 03:15PM by dod

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

JavaScript madness

Yesterday, I had the problem that while socket.io from the browser would work just fine against a given server endpoint (which I do not control), talking to the same server from Node.js would just give hangs and/or inscrutable “7:::1” messages (which I later learned meant “handshake missing”).

To skip six hours of debugging: the server set a cookie in the initial HTTP handshake, and expected to get it back when opening a WebSocket, presumably to steer the connection to the same backend that got the handshake. (Chrome didn't show the cookie in the WS debugging, but Firefox did.) So we need to keep track of those cookies. While still remaining on socket.io 0.9.5 (for stupid reasons). No fear, we add this incredibly elegant bit of code:

var io = require('socket.io-client');
// Hook into XHR to pick out the cookie when we receive it.
var my_cookie;
io.util.request = function() {
        var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
        var xhr = new XMLHttpRequest();
        xhr.setDisableHeaderCheck(true);
        const old_send = xhr.send;
        xhr.send = function() {
                // Add our own readyStateChange hook in front, to get the cookie if we don't have it.
                xhr.old_onreadystatechange = xhr.onreadystatechange;
                xhr.onreadystatechange = function() {
                        if (xhr.readyState == xhr.HEADERS_RECEIVED) {
                                const cookie = xhr.getResponseHeader('set-cookie');
                                if (cookie) {
                                        my_cookie = cookie[0].split(';')[0];
                                }
                        }
                        xhr.old_onreadystatechange.apply(xhr, arguments);
                };
                // Set the cookie if we have it.
                if (my_cookie) {
                        xhr.setRequestHeader("Cookie", my_cookie);
                }
                return old_send.apply(this, arguments);
        };
        return xhr;
};
// Now override the socket.io WebSockets transport to include our header.
io.Transport['websocket'].prototype.open = function() {
        const query = io.util.query(this.socket.options.query);
        const WebSocket = require('ws');
        // Include our cookie.
        let options = {};
        if (my_cookie) {
                options['headers'] = { 'Cookie': my_cookie };
        }
        this.websocket = new WebSocket(this.prepareUrl() + query, options);
        // The rest is just repeated from the existing function.
        const self = this;
        this.websocket.onopen = function () {
                self.onOpen();
                self.socket.setBuffer(false);
        };
        this.websocket.onmessage = function (ev) {
                self.onData(ev.data);
        };
        this.websocket.onclose = function () {
                self.onClose();
                self.socket.setBuffer(true);
        };
        this.websocket.onerror = function (e) {
                self.onError(e);
        };
        return this;
};
// And now, finally!
var socket = io.connect('https://example.com', { transports: ['websocket'] });

It's a reminder that talking HTTP and executing JavaScript does not make you into a (headless) browser. And that you shouldn't let me write JavaScript. :-)

(Apologies for the lack of blank lines; evidently, they confuse Markdown.)

25 April, 2021 10:13AM

Russell Coker

Scanning with a MFC-9120CN on Bullseye

I previously wrote about getting a Brother MFC-9120CN multifunction printer/scanner to print on Linux [1]. I had also got it scanning which I didn’t blog about.

found USB scanner (vendor=0x04f9, product=0x021d) at libusb:003:002

I recently upgraded that Linux system to Debian/Testing (which will soon be released as Debian/Bullseye) and scanning broke. The command sane-find-scanner would find the USB connected scanner (with the above output), but “scanimage -L” didn’t.

It turned out that I had to edit /etc/sane.d/dll.d/hplip, which had a single uncommented line of “hpaio”, and replace that with “brother3” to make SANE load the driver /usr/lib64/sane/libsane-brother3.so from the brscan3 package (which Brother provided from their web site years ago).
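In other words, after the edit the only active line in /etc/sane.d/dll.d/hplip reads:

brother3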

I have the following script to do the scanning (which can run as non-root):

#!/bin/bash
set -e
if [ "$1" == "" ]; then
  echo "specify output filename"
  exit 1
fi

TMP=$(mktemp)

# scanimage writes a PNM image to stdout; convert then infers the
# desired output format from the extension of the target filename
scanimage > "$TMP"
convert "$TMP" "$1"
rm "$TMP"
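Assuming the script is saved as scan and made executable, usage is then simply:

./scan page1.png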

Final Note

This blog post doesn’t describe everything that needs to be done to set up a scanner; I already had part of it set up from 10 years ago. But for anyone who finds this after having trouble, /etc/sane.d/dll.d is one place you should look for important configuration (especially if sane-find-scanner works and “scanimage -L” fails). Also, the Brother drivers are handy to have, although I apparently had it working in the past with the hpaio driver from HP (the Brother device emulates an HP device).

25 April, 2021 06:54AM by etbe

Russ Allbery

Review: Learning React

Review: Learning React, by Alex Banks & Eve Porcello

Publisher: O'Reilly
Copyright: June 2020
ISBN: 1-4920-5172-1
Format: Trade paperback
Pages: 287

My first JavaScript project was a React frontend to a REST service. As part of that project, I read two books: JavaScript: The Definitive Guide to learn the language foundation and this book to learn the framework on top of it. This was an unintentional experiment in the ways programming books can approach the topic.

I commented in my review of JavaScript: the Definitive Guide that it takes the reference manual approach to the language. Learning React is the exact opposite. It's goal-driven, example-heavy, and has a problem and solution structure. The authors present a sample application, describe some desired new feature or deficiency in it, and then introduce the specific React technique that solves that problem. There is some rewriting of previous examples using more sophisticated techniques, but most chapters introduce new toy applications along with new parts of the React framework.

The best part of this book is its narrative momentum, so I think the authors were successful at their primary goal. The first eight chapters of the book (more on the rest of the book in a moment) feel like a whirlwind tour where one concept flows naturally into the next and one's questions while reading about one technique are often answered in the next section. I thought the authors tried too hard in places and overdid the enthusiasm, but it's very readable in a way that I think may appeal to people who normally find programming books dry. Learning React is also firm and definitive about the correct way to use React, which may appeal to readers who only want to learn the preferred way of using the framework. (For example, React class components are mentioned briefly, mostly to tell the reader not to use them, and the rest of the book only uses functional components.)

I had two major problems with this book, however. The first is that this breezy, narrative style turns out to be awful when one tries to use it as a reference. I read through most of this book with both enjoyment and curiosity, sat down to write a React component, and immediately struggled to locate the information I needed. Everything felt logically connected when I was focusing on the problems the authors introduced, but as soon as I started from my own problem, the structure of the book fell apart. I had to page through chapters to locate some nugget buried in the text, or re-read sections of the book to piece together which motivating problem my code was most similar to. It was a frustrating experience.

This may be a matter of learning style, since this is why I prefer programming books with a reference structure. But be warned that I can't recommend this book as a reference while you're programming, nor does it prepare you to use the official React documentation as a reference.

The second problem is less explicable and less defensible. I don't know what happened with O'Reilly's copy-editing for this book, but the code snippets are a train wreck. The Amazon reviews are full of people complaining about typos, syntax errors, omitted code, and glaring logical flaws, and they are entirely correct. It's so bad that I was left wondering if a very early, untested draft of the examples was somehow substituted into the book at the last minute by mistake.

I'm not the sort of person who normally types code in from a book, so I don't care about a few typos or obvious misprints as long as the general shape is correct. The general shape was not correct. In a few places, the code is so completely wrong and incomplete that even combined with the surrounding text I was unable to figure out what it was supposed to be. It's possible this is fixed in a later printing (I read the June 2020 printing of the second edition), but otherwise beware. The authors do include a link to a GitHub repository of the code samples, which are significantly different than what's printed in the book, but that repository is incomplete; many of the later chapter examples are only links to JavaScript web sandboxes, which bodes poorly for the longevity of the example code.

And then there's chapter nine of this book, which I found entirely baffling. This is a direct quote from the start of the chapter:

This is the least important chapter in this book. At least, that's what we've been told by the React team. They didn't specifically say, "this is the least important chapter, don't write it." They've only issued a series of tweets warning educators and evangelists that much of their work in this area will very soon be outdated. All of this will change.

This chapter is on suspense and error boundaries, with a brief mention of Fiber. I have no idea what I'm supposed to do with this material as a reader who is new to React (and thus presumably the target audience). Should I use this feature? When? Why is this material in the book at all when it's so laden with weird stream-of-consciousness disclaimers? It's a thoroughly odd editorial choice.

The testing chapter was similarly disappointing in that it didn't answer any of my concrete questions about testing. My instinct with testing UIs is to break out Selenium and do integration testing with its backend, but the authors are huge fans of unit testing of React applications. Great, I thought, this should be interesting; unit testing seems like a poor fit for UI code because of how artificial the test construction is, but maybe I'm missing some subtlety. Convince me! And then the authors... didn't even attempt to convince me. They just asserted unit testing is great and explained how to write trivial unit tests that serve no useful purpose in a real application. End of chapter. Sigh.

I'm not sure what to say about this book. I feel like it has so many serious problems that I should warn everyone away from it, and yet the narrative introduction to React was truly fun to read and got me excited about writing React code. Even though the book largely fell apart as a reference, I still managed to write a working application using it as my primary reference, so it's not all bad. If you like the problem and solution style and want a highly conversational and informal tone (that errs on the side of weird breeziness), this may still be the book for you. Just be aware that the code examples are a trash fire, so if you learn from examples, you're going to have to chase them down via the GitHub repository and hope that they still exist (or get a later edition of the book where this problem has hopefully been corrected).

Rating: 6 out of 10

25 April, 2021 05:15AM

Antoine Beaupré

Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.

This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because it didn't seem worth it, or my editors rejected it, or I just moved on:

  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brainchild of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles, they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just an empty commit, with the commitlog being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me to write about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)

Stalled

Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would show the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)

25 April, 2021 01:02AM

April 24, 2021

hackergotchi for Gunnar Wolf

Gunnar Wolf

FLISOL • Talking about Jitsi

Every year since 2005 there is a very good, big and interesting Latin American gathering of free-software-minded people. Of course, Latin America is a big, big, big place, and it’s not like we are the most economically buoyant region to meet in something comparable to FOSDEM.

What we have is a distributed free software conference — originally, a distributed Linux install-fest (which I never liked, I am against install-fests), but gradually it morphed into a proper conference: Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Installation Festival)

This FLISOL was hosted by the always great and always interesting Rancho Electrónico, our favorite local hacklab, and has many other interesting talks.

I like talking about projects where I am involved as a developer… but this time I decided to do otherwise: I presented a talk on the Jitsi videoconferencing server. Why? Because of the relevance videoconferences have had over the last year.

So, without further ado — Here is a video I recorded locally from the talk I gave (MKV), as well as the slides (PDF).

24 April, 2021 11:24PM

Antoine Beaupré

A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over 4 releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me, thanks. If I switch to anything other than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, and was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - really old Ruby which made Gameclock rename
  • Ghronos - also really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

24 April, 2021 05:56PM

hackergotchi for Joey Hess

Joey Hess

here's your shot

The nurse releases my shoulder and drops the needle in a sharps bin, slaps on a smiley bandaid. "And we're done!" Her cheeriness seems genuine but a little strained. There was a long line. "You're all boosted, and here's your vaccine card."

Waiting out the 15 minutes in observation, I look at the card.

Moderna COVID-19/22 vaccine booster
3/21/2025              lot #5829126

  🇺🇸 NOT A VACCINE PASSPORT 🇺🇸

(Tear at perforated line.)
- - - - - - - - - - - - - - - - - -

Here's your shot at
$$ ONE HUNDRED MILLION $$

       Scratch
       and win

I bite my nails when I'm not wearing this mask. So I scrub ineffectively at the grainy silver box. Not like the woman across from me, three kids in tow, who's zipping through her sheaf of scratchers.

The message on mine becomes clear: 1 month free Amazon Prime

Ah well.

24 April, 2021 12:21AM

April 23, 2021

hackergotchi for Thomas Goirand

Thomas Goirand

Puppet and OS detection

As you may know, Puppet uses “facter” to get facts about the machine it is about to configure. That’s fine, and a nice concept. One can later use variables in a puppet manifest to do different things depending on what facter reports. For example, the operating system name … oh no! This thing is really stupid … Here’s the code one has to write to be compatible with Puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
  # This works around differences between facter versions
  if $facts['os']['lsb'] != undef {
    $distro_codename = $facts['os']['lsb']['distcodename']
  } else {
    $distro_codename = $facts['os']['distro']['codename']
  }
} else {
  $distro_codename = downcase($::lsbdistcodename)
}

Indeed, the global variable $::lsbdistcodename still existed up to Stretch (and is gone in Buster). The global $::facts wasn’t an array before (but a hash), so in Jessie it breaks with the error message “facts is not a hash or array when accessing it with os”. So one needs the full code above to make this work.
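When debugging this sort of thing, it helps to ask facter directly which shape it provides on a given host (my suggestion, not from the original post):

# facter 3 and later (structured facts):
facter os.lsb.distcodename
# facter 2.x (legacy flat fact):
facter lsbdistcodename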

It’s ok to improve things. It is NOT OK to break OS detection. To me this is a very bad practice from the upstream Puppet authors. I’m publishing this in the hope of helping others avoid falling into the same trap I did.

23 April, 2021 12:56PM by Goirand Thomas

hackergotchi for Matthew Garrett

Matthew Garrett

An accidental bootsplash

Back in 2005 we had Debconf in Helsinki. Earlier in the year I'd ended up invited to Canonical's Ubuntu Down Under event in Sydney, and one of the things we'd tried to design was a reasonable graphical boot environment that could also display status messages. The design constraints were awkward - we wanted it to be entirely in userland (so we didn't need to carry kernel patches), and we didn't want to rely on vesafb[1] (because at the time we needed to reinitialise graphics hardware from userland on suspend/resume[2], and vesa was not super compatible with that). Nothing currently met our requirements, but by the time we'd got to Helsinki there was a general understanding that Paul Sladen was going to implement this.

The Helsinki Debconf ended up being an extremely strange event, involving me having to explain to Mark Shuttleworth what the physics of a bomb exploding on a bus were, many people being traumatised by the whole sauna situation, and the whole unfortunate water balloon incident, but it also involved Sladen spending a bunch of time trying to produce an SVG of a London bus as a D-Bus logo and not really writing our hypothetical userland bootsplash program, so on the last night, fueled by Koff that we'd bought by just collecting all the discarded empty bottles and returning them for the deposits, I started writing one.

I knew that Debian was already using graphics mode for installation despite having a textual installer, because they needed to deal with more complex fonts than VGA could manage. Digging into the code, I found that it used BOGL - a graphics library that made use of the VGA framebuffer to draw things. VGA had a pre-allocated memory range for the framebuffer[3], which meant the firmware probably wouldn't map anything else there, and hitting those addresses probably wouldn't break anything. This seemed safe.

A few hours later, I had some code that could use BOGL to print status messages to the screen of a machine booted with vga16fb. I woke up some time later, somehow found myself in an airport, and while sitting at the departure gate[4] I spent a while staring at VGA documentation and worked out which magical calls I needed to make to have it behave roughly like a linear framebuffer. Shortly before I got on my flight back to the UK, I had something that could also draw a graphical picture.

Usplash shipped shortly afterwards. We hit various issues - vga16fb produced a 640x480 mode, and some laptops were not inclined to do that without a BIOS call first. 640x400 worked basically everywhere, but meant we had to redraw the art because circles don't work the same way if you change the resolution. My brief "UBUNTU BETA" artwork that was me literally writing "UBUNTU BETA" on an HP TC1100 shortly after I'd got the Wacom screen working did not go down well, and thankfully we had better artwork before release.

But 16 colours is somewhat limiting. SVGALib offered a way to get more colours and better resolution in userland, retaining our prerequisites. Unfortunately it relied on VM86, which doesn't exist in 64-bit mode on Intel systems. I ended up hacking the X.org x86emu into a thunk library that exposed the same API as LRMI, so we could run it without needing VM86. Shockingly, it worked - we had support for 256 colour bootsplashes in any supported resolution on 64 bit systems as well as 32 bit ones.

But by now it was obvious that the future was having the kernel manage graphics support, both in terms of native programming and in supporting suspend/resume. Plymouth is much more fully featured than Usplash ever was, but relies on functionality that simply didn't exist when we started this adventure. There's certainly an argument that we'd have been better off making reasonable kernel modesetting support happen faster, but at this point I had literally no idea how to write decent kernel code and everyone should be happy I kept this to userland.

Anyway. The moral of all of this is that sometimes history works out such that you write some software that a huge number of people run without any idea of who you are, and also that this can happen without you having any fucking idea what you're doing.

Write code. Do crimes.

[1] vesafb relied on either the bootloader or the early stage kernel performing a VBE call to set a mode, and then just drawing directly into that framebuffer. When we were doing GPU reinitialisation in userland we couldn't guarantee that we'd run before the kernel tried to draw stuff into that framebuffer, and there was a risk that that was mapped to something dangerous if the GPU hadn't been reprogrammed into the same state. It turns out that having GPU modesetting in the kernel is a Good Thing.

[2] ACPI didn't guarantee that the firmware would reinitialise the graphics hardware, and as a result most machines didn't. At this point Linux didn't have native support for initialising most graphics hardware, so we fell back to doing it from userland. VBEtool was a terrible hack I wrote to try to re-execute the system's graphics hardware through a range of mechanisms, and it worked in a surprising number of cases.

[3] As long as you were willing to deal with 640x480 in 16 colours

[4] Helsinki-Vantaan had astonishingly comfortable seating for the time


23 April, 2021 11:21AM

April 22, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.2.0: Now with ‘docs/’


A new release of drat arrived on CRAN today. This is the first release in a few months (the last release was in July of last year) and it (finally) makes the leap to supporting docs/ in the main branch, as we are all so tired of the gh-pages branch. We also have new (and very shiny) documentation and new and refreshed vignettes!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Or as we may now add: stay away from semi-random universe snapshots too.

Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by (now) six vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.0 (2021-04-21)

  • A documentation website for the package was added at https://eddelbuettel.github.io/drat/ (Dirk)

  • The continuous integration was switched to using ‘r-ci’ (Dirk)

  • The docs/ directory of the main repository branch can now be used instead of gh-pages branch (Dirk in #112)

  • A new repository https://github.com/drat-base/drat can now be used to fork an initial drat repository (Dirk)

  • A new vignette “Drat Step-by-Step” was added (Roman Hornung and Dirk in #117 fixing #115 and #113)

  • The test suite was refactored for docs/ use (Felix Ernst in #118)

  • The minimum R version is now ‘R (>= 3.6)’ (Dirk fixing #119)

  • The vignettes were switched to minidown (Dirk fixing #116)

  • A new test file was added to ensure ‘NEWS.Rd’ is always at the current release version.

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 April, 2021 11:50PM

Russell Coker

HP ML350P Gen8

I’m playing with an HP ProLiant ML350P Gen8 server (part num 646676-011). For HP servers “ML” means tower (see the ProLiant Wikipedia page for more details [1]). For HP servers the “generation” indicates how old the server is; Gen8 was announced in 2012 and Gen10 seems to be the current generation.

Debian Packages from HP

wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP RAID" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ buster/current non-free" >> /etc/apt/sources.list

The above commands will setup the APT repository for Debian/Buster. See the HP Downloads FAQ [2] for more information about their repositories.
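With the repository in place, the utilities described in the following sections can then be installed in the usual way (the package names match the section headings below):

apt update
apt install hponcfg ssacli ssaducli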

hponcfg

This package contains the hponcfg program that configures ILO (the HP remote management system) from Linux. One noteworthy command is “hponcfg -r” to reset the ILO, something you should do before selling an old system.

ssacli

This package contains the ssacli program to configure storage arrays, here are some examples of how to use it:

# list controllers and show slot numbers
ssacli controller all show
# list arrays on controller identified by slot and give array IDs
ssacli controller slot=0 array all show
# show details of one array
ssacli controller slot=0 array A show
# show all disks on one controller
ssacli controller slot=0 physicaldrive all show
# show config of a controller, this gives RAID level etc
ssacli controller slot=0 show config
# delete array B (you can immediately pull the disks from it)
ssacli controller slot=0 array B delete
# create an array type RAID0 with specified drives, do this with one drive per array for BTRFS/ZFS
ssacli controller slot=0 create type=arrayr0 drives=1I:1:1
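Expanding on the comment in the last command above: for BTRFS/ZFS you would create a separate single-drive RAID-0 array per disk. A small sketch (the drive IDs are illustrative, following the 1I:1:1 naming used above):

# one single-disk RAID-0 array per drive, letting the filesystem
# handle redundancy itself
for d in 1I:1:1 1I:1:2 1I:1:3 1I:1:4; do
  ssacli controller slot=0 create type=arrayr0 drives=$d
done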

When a disk is used in JBOD mode just under 33MB will be used at the end of the disk for the RAID metadata. If you have existing disks with a DOS partition table you can put them in an HP array as JBOD and they will work with all data intact (a GPT partition table is more complicated). When all disks are removed from the server the cooling fans run at high speed; this would be annoying if you wanted to have a diskless workstation or server using only external storage.

ssaducli

This package contains the ssaducli diagnostic utility for storage arrays. The SSD “wear gauge report” doesn’t work for the 2 SSDs I tested it on; maybe it only supports SAS SSDs, not SATA SSDs. It doesn’t seem to do anything that I need.

storcli

This package contains both 32-bit and 64-bit versions of the MegaRAID utility and deletes whichever one doesn’t match the installation in the package postinst, so it fails debsums checks etc. The MegaRAID utility is for a different type of RAID controller from the “Smart Storage Array” (AKA SSA) that the other utilities work with. As an aside, there seem to be multiple types of MegaRAID controller; the management program from the storcli package doesn’t work on a Dell server with MegaRAID. They should have made separate 32-bit and 64-bit versions of this package.

Recommendations

Here is the HP page for downloading firmware updates (including security updates) [3]; you have to log in first and have a warranty. This is legal but poor service. Dell servers have comparable prices (on the second-hand market) and comparable features, but give free firmware updates to everyone. Dell’s Debian packages for supporting utilities are of overall lower quality, but Dell offers a wider range of support, so generally Dell support seems better in every way. Dell and HP hardware seems of equal quality, so overall I think it’s best to buy Dell.

Suggestions for HP

Finding which of the signing keys to use is unreasonably difficult. You should get some HP employees to sign the HP keys used for repositories with their personal keys and then go to LUG meetings and get their personal keys well connected to the web of trust. Then upload the HP keys to the public key repositories. You should also use the same keys for signing all versions of the repositories. Having different keys for the different versions of Debian wastes people’s time.

Please provide firmware for all users, even if they buy systems second hand. It is in your best interests to have systems used long-term and have them run securely. It is not in your best interests to have older HP servers perform badly.

Having all the fans run at maximum speed when power is turned on is a standard server feature. Some servers can throttle the fans while the BIOS is running; it would be nice if HP servers did that. Having ridiculously loud fans until just before GRUB starts is annoying.

22 April, 2021 08:22AM by etbe

hackergotchi for Shirish Agarwal

Shirish Agarwal

The Great Train Robbery

I had a twitter fight a few days back with a gentleman, and this article is a result of that fight. Sadly, I do not know the name of the gentleman as he goes by a pseudonym, and I’ve not taken his permission to quote him either way. So I will just state the observations I was able to make from the conversations we had. As people who read this blog regularly would know, I am and have been against the railway privatization which is happening in India. I will be sharing some case studies from other countries as to how it panned out for them.

UK Railways


How Privatization Fails : Railways

The above video is by a gentleman called Shaun, who basically shared that privatization, as far as the UK is concerned, is nothing but monopolies. While there are complex reasons for this, the design of railways is such that it will always be a monopoly structure; at most you can have several monopolies, but that is all. The idea of competition just cannot happen. Even the idea that subsidies will be less and/or that trains will run on time is far from fact; both of these facts have been checked and found to be truthful by fullfact.org. It is argued that the UK is small and perhaps doesn’t have the right conditions. That is probably true, but we still deserve a glance at the UK railway map.

UK railway map with operators

The above map is copyrighted to Map Marketing, where you can see it today. As can be seen above, most companies had their own specified areas. Now if you look at the facts, you will see that UK fares have been higher; an oldish article from Metro (a UK publication) shares the same. In fact, the UK effectively nationalized its railways as many large rail operators were running in the red. Even Scotland is set to be nationalised in March 2022. Remember, this is a country which hasn’t seen inflation go upwards of 5% in nearly a decade; the only outlier was 2011, when it indeed breached the 5% mark. So from this, ‘Private Gains, Public Losses’ perhaps seems a fitting description. But then maybe we didn’t use the right example. Perhaps Japan would be better: they have bullet trains while the UK is still thinking about them (HS2).

Japanese Railway

Below is the map of Japanese Railway

Railway map of Japan with ‘private ownership’ – courtesy Wikimedia commons

Japan started privatizing its railways in 1987 and to date they have not been fully privatized. On top of that, as much as ¥24 trillion of the long-term JNR debt was shouldered by the government at the expense of the taxpayers of Japan, while almost a quarter of the workforce was also cut. To add to it, while some parts of the Japanese railways did make profits, many of them did so through large-scale non-railway business, mostly real estate on land adjacent to railway stations; in many cases this went as high as 60% of revenue. The most profitable has been the Shinkansen. And while it has been profitable, it has not been without safety scandals over the years; the biggest in recent years was the 2005 Amagasaki derailment. What was interesting to me was the aftermath: while the Wikipedia page doesn’t share much, I read at the time how a lot of ordinary people stood up to the companies, in a country where it is said that many companies are owned by the Yakuza. And this is a country where people are loyal to their corporation or company no matter what; it is a culture strange to the West and also here in India, where people change jobs at the drop of a hat, although nowadays we have record unemployment. So perhaps Japan too does not meet our standard, as the companies do not compete with each other but are each a set monopoly in their regions. Also, how much subsidy there is or isn’t is not really transparent.

U.S. Railways

Last, but not least, I share the U.S. railway map. This was provided by a Mr. Tom Alison on the reddit channel maporn. As the thread itself is archived and I do not know the gentleman concerned, nor have I taken permission for the map, I am sharing the compressed version –


U.S. Railway lines with the different owners

Now the U.S. railways are, and have always been, peculiar: unlike the above two, the U.S. has always been more of a freight network. Probably much of this has to do with the fact that in the 1960s, when oil was cheap, the U.S. built zillions of roadways and romanticized the ‘road trip’, and has been doing so ever since. The creation of low-cost airlines definitely didn’t help the railways offer more passenger services either; in fact, the opposite.

There have also been smaller services and attempts at privatization in both New Zealand and Australia, and both have been failures; please see the papers in that regard. My simple point is this: as can be seen above, there have been various attempts at privatization of railways and most of them have been a mixed bag. The only one which comes close to what we might consider good is the Japanese one, but that also relied on a lot of public debt, and we don’t know what will happen there next. Also, for higher-speed train services like a bullet train or whatever, you need direct routes with no hairpin bends. In fact, a good talk on the topic is the TBD podcast which, while it talks about hyperloop, raises the same questions that would be asked if we were to do this in India. Another thing to keep in mind is that the Japanese have been exceptional builders, because they have been forced to be: they live in a seismically active zone, which made the Fukushima disaster a reality, but at the same time their buildings are earthquake-resistant.

Standard disclaimer – the above is a simplified version of things. I could have added financial accounts, but those follow no set pattern: some railways use accrual accounting, some use cash, and some use a hybrid. I could also have covered gauge or electrification, but the standards all differ slightly, although unigauge is something that all railways aspire to, and electrification is something all railways want, although in many cases it just isn’t economically feasible.

Indian Railways

Indian Railways itself moved from cash to accrual accounting a couple of years back; in between, for a couple of years, it was hybrid. The sad part is that you can now never measure against past performance in the old way because the systems are so different; hence, whether the Railways are making a loss or a profit, we will only come to know much later. Also, most accountants don’t know the new system well, so it is going to take more time, how much is unknown. Sadly, what the GOI did a few years back was merge the Railway budget into the Union budget. Of course, the excuse given was the pressure of too many new trains, while the truth is that by doing this they decreased transparency about the whole thing. For example, for the last few years the only state which has had significant work done is U.P. (Uttar Pradesh), and a bit in Goa, although that has been protested time and again. I am from the neighbouring state of Maharashtra and have been there several times. Now it all feels like a dream, going to Goa :(.

Covid news

Now before I jump to the news, I should mention the movie ‘Virus’ (2019), made by the talented Aashiq Abu. Even though I am not a Malayalee, I have enjoyed many of his movies, simply because he is a terrific director, and Malayalam movies, at least most of them, have English subtitles and a lot of original content. The first time I saw it, a couple of years back, I couldn’t sleep a wink for a week. Even the next time, it was heavy; I shared the movie with mum, and even she couldn’t watch it in one go. It is and was that powerful. Maybe it hits harder now because we are headlong in the pandemic, and the madness is all around us. There are two terms that helped me understand a great deal of what is happening in the movie: the first was ‘altered sensorium’, which has been defined here; the other is saturation, or to be more precise ‘oxygen saturation’. This term has also entered the Indian twitter lexicon quite a bit as India has started running out of oxygen. Just today the Delhi High Court held an emergency hearing on the subject late at night. Although there is much to share about the mismanagement by the center, the best piece on the subject has been by Miss Priya Ramani. Yup, the same lady who won against M.J. Akbar, and this when Mr. Akbar had 100 lawyers for this specific case. It would be interesting to see what happens ahead.

There are, however, a few things even she forgot in her piece. For example, reverse migration, i.e. from urban to rural, started again; two articles from different entities share a similar outlook. Sadly, the right have no empathy or feeling for either the poor or the sick. Even the labor minister Santosh Gangwar’s statement put the number who walked back home at around 1.04 crores. While there is not much data, some work/research done on migration suggests the number could easily be 10 times as much. And this was in last year’s lockdown. This year the same issue has re-surfaced, and migrants, having learnt their lesson, started leaving the cities. And I’m ashamed to say I think they are doing the right thing. Most state governments have not learned lessons, nor have they done any work to earn the trust of migrants; this is true of almost all state governments. Last year, just before the lockdown was announced, my friend and I spent almost 30k getting a cab all the way from Chennai to Pune, between what we paid for the cab and what we paid in bribes to various people just so we could cross the state borders to return home to our anxious families. Thankfully, unlike the migrants, we were better off, although we did make a loss. I probably wouldn’t be alive if I were in their situation, as many weren’t. That number is still in the air, ‘undocumented deaths’ 😦

Vaccine issues

Currently, though the issue has been the Vaccine and the pricing of the same. A good article to get a summation of the issues outlined has been shared on Economist. Another article that goes to the heart of the issue is at scroll. To buttress the argument, the SII chairman had shared this few weeks back –

Adar Poonawala talking to Vishnu Som on Left, right center, 7th April 2021.

So, a licensee manufacturer wants to make super-profits during the pandemic. And now, as shared above, they can very easily do it. Even the quotes given to nearby countries are smaller than the quotes given to Indian states –

Prices of AstraZeneca among various states and countries.

The situation around beds, vaccines, oxygen, everything, is so dire that people will go to any lengths to save their loved ones, even when they know a certain medicine doesn’t work. For example Remdesivir: WHO trials have concluded that it doesn’t reduce mortality. Heck, even the AIIMS chief said the same. But the desperation of both doctors and relatives to cling to hope has made Remdesivir a black-market drug, with unofficial prices hovering anywhere between INR 14k/- and INR 30k/- per vial. One of the executives of a top firm was also arrested in Gujarat. In Maharashtra, an opposition M.P. came to the ‘rescue‘ of the officials of Bruick pharms in Mumbai.

Sadly, this strange affliction with the party at the center is also there in my extended family. At one end they heap praise on Mr. Modi; at the same time they can’t wait to get out of India fast. Many of them have settled in, horror of horrors, Dubai, as it is the best place to do business and to get international schooling for the young ones at decent prices, cheaper or maybe a tad more than what they paid in Delhi or elsewhere. Being an Agarwal or a Gupta makes it easier to compartmentalize both things. Ease of doing business: 5 days flat to get a business registered, up and running. And the paranoia is still there: they won’t talk about him on the phone because they are afraid they may say something which comes back to bite them. As for their decision to migrate, I can’t really blame them. If I were 20-25 years younger and my mum were in better shape than she is, we probably would have migrated as well, although we would have preferred Europe to anywhere else.

Internet Freedom and Aarogya Setu App.


Internet Freedom has shared the chilling effects of the Aarogya Setu app. This has also been shared by FSCI in the past, who recently had their handle banned on Twitter. It was also apparent in a bail order which a high court judge gave. While I won’t go into the merits and demerits of the bail order, it is astounding for the judge to require that the accused, even while out on bail, install the app so he can be surveilled. And this from a high court judge; such a sad state of affairs. We seem to be setting new lows every day when it comes to judicial jurisprudence. One interesting aspect of the whole case was shared by Aishwarya Iyer. She shared a story that she and her team worked on at The Quint, which raises questions about the quality of the work done by the Delhi Police. It is of course up to the Delhi Police to ascertain the truth of the matter, because unless they are able to tie in a leak from the PMO’s office or from POTUS’s office, it hardly seems possible. For example, the dates when two heads of state can meet each other would be decided by the secretaries of the two. Once the date is known, it would be shared with the press, while at the same time some sort of security apparatus would kick into place. It is incumbent, especially on the host, to take as much care as he can of the guest. We all remember that World War 1 (the war to end all wars) started with the murder of Archduke Ferdinand.

As nobody wants that, the best way is to make sure that a political murder doesn’t happen on your watch. While I won’t comment on what the arrangements would be, it is safe to assume z+ security along with heightened readiness, especially for somebody as important as POTUS. Now, it would be quite a reach for the Delhi Police to connect the two dates. They will either have to get creative with the dates or find some other way; otherwise, with practically no knowledge in the public domain, they can’t work in limbo. In either case, I do hope the case comes up for hearing soon and we see what the Delhi Police says and contends in the High Court. At the very least, it would be awkward for them to talk of the dates unless they can contend some mass conspiracy which involves the PMO (and would bring into question the constant vetting done by the intelligence dept. of all those who work in the PMO). And this whole case seems a kind of shelter for the Delhi riots which happened, in which mainly Muslims died, and whose deaths lie unaccounted for till date 😦

Conclusion

In conclusion, I would like to share a bit of humor, because right now the atmosphere is humorless, between the authoritarian tendencies of the central government and the mass mismanagement of public health, which they have now left to the states to handle as they see fit. The piece I am sharing is from Arre, one of my go-to sites whenever I feel low.

22 April, 2021 04:09AM by shirishag75

April 21, 2021

Enrico Zini

Python output buffering

Here's a little toy program that displays a message like a split-flap display:

#!/usr/bin/python3

import sys
import time

def display(line: str):
    cur = '0' * len(line)
    while True:
        print(cur, end="\r")
        if cur == line:
            break
        time.sleep(0.09)
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
    print()

message = " ".join(sys.argv[1:])
display(message.upper())

This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation.

What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient.

I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file object, like sys.stdout.flush(), to make sure pending buffered output gets sent.

Python's print() function also supports flush=True as an optional argument:

    print(cur, end="\r", flush=True)

If one wants to change the default buffering for a file descriptor, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:

sys.stdout.reconfigure(line_buffering=True)

Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):

import io
# Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
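Buffering can also be controlled from outside the script, without touching the code: CPython's -u option, or a non-empty PYTHONUNBUFFERED environment variable, forces stdout and stderr to be unbuffered. A small sketch, assuming the toy program above is saved as splitflap.py:

# animation is lost through the pipe: stdout is block-buffered
./splitflap.py hello world | cat
# animation survives: buffering is disabled up front
python3 -u splitflap.py hello world | cat
PYTHONUNBUFFERED=1 ./splitflap.py hello world | cat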

If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.

21 April, 2021 06:00PM

Sven Hoexter

bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you

  • use ssl (ok that should be almost everyone)
  • and you would like to execute doveadm as a user who cannot read the ssl cert and keys (quite likely).

I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For me that manifested in the nginx error log, which I use in combination with php-fpm, as:

2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied

You can easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and to create a dummy one which disables ssl and has a !include_try on the real one. It is maybe best explained by showing the modification:

cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf
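Afterwards doveadm works again for unprivileged users, while dovecot itself still picks up the real ssl settings via the include. A quick sanity check could look like this (a sketch; the password is just an example):

# as an unprivileged user: no more "Permission denied"
doveadm pw -p test123
# as root: the real ssl config is still resolved
doveconf ssl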

Discussed upstream here.

21 April, 2021 09:45AM

April 20, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rblpapi 0.3.11: Several Updates

A new version 0.3.11 of Rblpapi is now arriving at CRAN. It comes two years after the release of version 0.3.10 and brings a few updates and extensions.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the eleventh release since the package first appeared on CRAN in 2016. Changes are detailed below. Special thanks to James, Maxime and Michael for sending us pull requests.

Changes in Rblpapi version 0.3.11 (2021-04-20)

  • Support blpAutoAuthenticate and B-PIPE access, refactor and generalise authentication (James Bell in #285)

  • Deprecate excludeterm (John in #306)

  • Correct example in README.md (Maxime Legrand in #314)

  • Correct bds man page (and code) (Michael Kerber, and John, in #320)

  • Add GitHub Actions continuous integration (Dirk in #323)

  • Remove bashisms detected by R CMD check (Dirk #324)

  • Switch vignette to minidown (Dirk in #331)

  • Switch unit tests framework to tinytest (Dirk in #332)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 April, 2021 11:55PM

April 19, 2021

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Catching Up Your Sources

I’ve mostly had the preference of controlling my data rather than depending on someone else. That’s one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked by a single entity. If I had the resources, I’d prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with it, i.e. their data.

Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information shared in return for the free service is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication

So first, we had email and usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently.

But not everything, back then, was that great either. For example, instant messaging was very closed and centralised then too. Things like: ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs.

Not much has changed in the current day either. We now have domination by Facebook Messenger, Google whatever-the-new-marketing-term-they-introduce, WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized.

Over all this time, I’m yet to see a product come up with good (business) intentions, to really empower the end user. In this information age, the most invaluable data is user activity. That’s the one piece of data everyone is after. If you decline to share that bit of free data in exchange for the free services, mind you, that free service, be it Facebook, Google, Instagram, WhatsApp, Truecaller or Twitter, would not come to you at all. Try it out.

So the reality is that while you may not be valuing the data you offer in exchange correctly, there’s a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in for these tech giants and give them their personal bit in return for free services. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data

I’m fond of keeping an archive folder in my mailbox: a folder that holds significant events, usually in the form of an email, if documented. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other format.

The next best would be plain text.

In my lifetime, I have learnt a lot from the internet, so it is natural that my preference lies with it. Mailing lists, IRC, HOWTOs, guides, blog posts; all have helped. And over the years, I’ve come across hundreds of such pieces of content that I’d always like to preserve.

Now there are multiple ways of preserving data. Take, for example, the big tech giants. In most usual cases, your data should be fine with a tech giant for your lifetime. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc, as long as they abide by their policies.

There’s also the catch of alignment. As the user, you have to ensure that you align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos in the near-decade future, just like Google Picasa in the previous (or current) decade.

History what is

On the topic of retaining information, let’s take a small drift. I still admire our ancestors. I don’t know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc, but one thing’s for sure: they were able to leave a fine means of communication. They are all gone, but a large number of those events are evident through the creations they left. Some of those events have been strong enough that later rulers/invaders have had a tough time trying to wipe them out of history. Remember, history is usually not the truth, but the statement to be believed as told by the teller. And the teller is usually the survivor, or the winner you may call them.

But still, the information retention techniques were better.

I haven’t visited, but I admire whosoever built the Kailasa Temple at Ellora, without which we’d be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that have occurred in the past.

Ellora Temple - the majestic carving, believed to be carved out of a single stone

Dominance has the power to rewrite history, and unfortunately that’s true and it has done its part. It is just that in a mere human’s defined lifetime, it is not possible to witness the transition from current to history, and to say: I was there then, I’m here now, and this is not the reality.

And if not dominance, there’s always the other bit, hearsay. With it, you can always put anything up for dispute. Because there’s no way one can go back in time and produce a fine evidence.

There’s also a part about religion. Religion can be highly sentimental, and religion can be a solid way to get an agenda going. For example, in India, a country which today is constitutionally a secular country, there have been multiple attempts to discard the belief that the thing called the Ramayana ever existed; that the Rama Setu, nicely reworded as Adam’s Bridge by whosoever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki aren’t going to come over and prove whether that is true or false. So such subjects serve as a solid base to get an agenda going. And probably we’ve even succeeded in proving and believing that there was never an event like the Ramayana or the Mahabharata, nor was there ever any empire other than the Moghul or the British Empire.

But yes, whosoever made the Ellora Temple, and the many many more such creations, did a fine job of making a dent for the future, so it can know what the history could also possibly be.

Enough of the drift

So, in my opinion, having events documented is important. It’d be nice to have skills documented too, so that they can be passed down the generations, but that’s a debatable topic. But events, I believe, should be documented, and documented in the best possible ways so that their existence is not diminished.

Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests, which can hide behind excuses in the name of compliance and conformance.

So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today’s age. If I really had the resources, I’d not rely on the digital age. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is mostly junk, which is manipulative and changeable, conditionally.

Email and RSS

So for my communication, I prefer email over any other means. That doesn’t mean I don’t use the current trends; I do. But this blog is mostly about penning my desires, and the desire is to have communication in the email format.

Such is the case that for information useful over the internet, I crave to have it formatted in email for archival.

RSS feeds are my most common mode of keeping track of information I care about. Not all that I care for is available as RSS feeds, but hey, such is life. And adaptability is okay.

But my preference is still RSS.

So I consume RSS feeds through a fine piece of software called feed2imap, which fits my bill fairly well.

feed2imap is:

  • An RSS feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can push the converted email over IMAP
  • Can convert all image content to email MIME attachments

In short, it makes online content available to me offline in the most admired email format.
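For illustration, here is a minimal sketch of what a ~/.feed2imaprc could look like. The feed name, URL, credentials and folder are all placeholders, and the exact set of supported options varies between feed2imap versions:

feeds:
  - name: planet-debian
    url: https://planet.debian.org/rss20.xml
    target: imap://user:password@imap.example.org/INBOX.feeds.planet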

In my mailbox, my preferred email client these days is Evolution. It does a good job of dealing with such emails (RSS feed items). An example image of accessing an RSS feed item through it is below.

RSS News Item through Evolution

The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come as a replacement, but my data will still be mine.

Trends have shown that data mobility is now a commodity expectation. As such, I wanted to have something fill that gap for me, so that I could access my information, which I’ve kept in my preferred format, easily in today’s trendy options.

I tried multiple options on my current preferred platform of choice for mobiles, i.e. Android. Finally I came across Aqua Mail, which fits most of my requirements.

Aqua Mail:

  • Connects to my laptop over IMAP
  • Syncs the requested folders
  • Syncs requested data for offline accessibility
  • Presents the synchronized data in a flexible and customizable manner, to match my taste
  • Has a very extensible user interface, allowing me to customize it to my taste

Pictures can do a better job than my English words.

Aqua Mail Message List Interface

Aqua Mail Message View Normal

Aqua Mail Message View Zoomed - Fit to Screen

All of this is done with no dependence on network connectivity after the sync. And all information is stored in the simplest possible format.

19 April, 2021 07:50AM by Ritesh Raj Sarraf (rrs@researchut.com)

Ian Jackson

Otter - a game server for arbitrary board games

One of the things that I found most vexing about lockdown was that I was unable to play some of my favourite board games. There are online systems for many games, but not all. And online systems cannot support games like Mao where the players make up the rules as we go along.

I had an idea for how to solve this problem, and set about implementing it. The result is Otter (the Online Table Top Environment Renderer).

We have played a number of fun games of Penultima with it, and have recently branched out into Mao. The Otter is now ready to be released!

More about Otter

(cribbed shamelessly from the README)

Otter, the Online Table Top Environment Renderer, is an online game system.

But it is not like most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on.

So it’s something like a “tabletop simulator” (but it does not have any 3D, or a physics engine, or anything like that).

This means that with Otter:

  • Supporting a new game, that Otter doesn’t know about yet, would usually not involve writing or modifying any computer programs.

  • If Otter already has the necessary game elements (cards, say) all you need to do is write a spec file saying what should be on the table at the start of the game. For example, most Whist variants that start with a standard pack of 52 cards are already playable.

  • You can play games where the rules change as the game goes along, or are made up by the players, or are too complicated to write as a computer program.

  • House rules are no problem, since the computer isn’t enforcing the rules - you and your friends are.

  • Everyone can interact with different items on the game table, at any time. (Otter doesn’t know about your game’s turn-taking, so doesn’t know whose turn it might be.)

Installation and usage

Otter is fully functional, but the installation and account management arrangements are rather unsophisticated and un-webby. And there is not currently any publicly available instance you can use to try it out.

Users on chiark will find an instance there.

Other people who are interested in hosting games (of Penultima or Mao, or other games we might support) will have to find a Unix host or VM to install Otter on, and will probably want help from a Unix sysadmin.

Otter is distributed via git, and is available on Salsa, Debian's gitlab instance.

There is documentation online.

Future plans

I have a number of ideas for improvement, which go off in many different directions.

Quite high up on my priority list is making it possible for players to upload and share game materials (cards, boards, pieces, and so on), rather than just using the ones which are bundled with Otter itself (or dumping files ad-hoc on the server). This will make it much easier to play new games. One big reason I wrote Otter is that I wanted to liberate boardgame players from the need to implement their game rules as computer code.

The game management and account management is currently done with a command line tool. It would be lovely to improve that, but making a fully-featured management web ui would be a lot of work.

Screenshots!





19 April, 2021 12:35AM

April 18, 2021

Russell Coker

IMA/EVM Certificates

I’ve been experimenting with IMA/EVM. Here is the Sourceforge page for the upstream project [1]. The aim of that project is to check hashes and maybe public key signatures on files before performing read/exec type operations on them. It can be used as the next logical step from booting a signed kernel with TPM. I am a long way from getting that sort of thing going, just getting the kernel to boot and load keys is my current challenge and isn’t helped due to the lack of documentation on error messages. This blog post started as a way of documenting the error messages so future people who google errors can get a useful result. I am not trying to document everything, just help people get through some of the first problems.

I am using Debian for my work, but some of this will apply to other distributions (particularly the kernel error messages). The Debian distribution has the ima-evm-utils package but no other support for IMA/EVM. To get this going in Debian you need to compile your own kernel with IMA support and then boot it with kernel command-line options to enable IMA; in recent kernels that includes “lsm=integrity” as a mandatory requirement to prevent a kernel Oops after mounting the initrd (there is already a patch to fix this).

If you want to just use IMA (not get involved in development) then a good option would be to use RHEL (here is their documentation) [2] or SUSE (here is their documentation) [3]. Note that both RHEL and SUSE use older kernels so their documentation WILL lead you astray if you try and use the latest kernel.org kernel.

The Debian initrd

I created a script named /etc/initramfs-tools/hooks/keys with the following contents to copy the key(s) from /etc/keys to the initrd where the kernel will load it/them. The kernel configuration determines whether x509_evm.der or x509_ima.der (or maybe both) is loaded. I haven’t yet worked out which key is needed when.

#!/bin/bash

mkdir -p ${DESTDIR}/etc/keys
cp /etc/keys/* ${DESTDIR}/etc/keys

Making the Keys

#!/bin/sh

GENKEY=ima.genkey

cat << __EOF__ >$GENKEY
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = v3_usr

[ req_distinguished_name ]
O = `hostname`
CN = `whoami` signing key
emailAddress = `whoami`@`hostname`

[ v3_usr ]
basicConstraints=critical,CA:FALSE
#basicConstraints=CA:FALSE
keyUsage=digitalSignature
#keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
#authorityKeyIdentifier=keyid,issuer
__EOF__

openssl req -new -nodes -utf8 -sha1 -days 365 -batch -config $GENKEY \
                -out csr_ima.pem -keyout privkey_ima.pem
openssl x509 -req -in csr_ima.pem -days 365 -extfile $GENKEY -extensions v3_usr \
                -CA ~/kern/linux-5.11.14/certs/signing_key.pem -CAkey ~/kern/linux-5.11.14/certs/signing_key.pem -CAcreateserial \
                -outform DER -out x509_evm.der

To get the below result I used the above script to generate a key, it is the /usr/share/doc/ima-evm-utils/examples/ima-genkey.sh script from the ima-evm-utils package but changed to use the key generated from kernel compilation to sign it. You can copy the files in the certs directory from one kernel build tree to another to have the same certificate and use the same initrd configuration. After generating the key I copied x509_evm.der to /etc/keys on the target host and built the initrd before rebooting.

[    1.050321] integrity: Loading X.509 certificate: /etc/keys/x509_evm.der
[    1.092560] integrity: Loaded X.509 cert 'xev: etbe signing key: 99d4fa9051e2c178017180df5fcc6e5dbd8bb606'
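As an aside, and a step this post doesn’t cover: once the key loads, the corresponding signing operation uses the private half of the generated key pair. A sketch with evmctl from ima-evm-utils (the file path is illustrative, and option details vary between versions):

# sign the file's hash into its security.ima extended attribute
evmctl ima_sign --key privkey_ima.pem /usr/local/bin/example
# inspect the resulting xattr
getfattr -m security.ima -d /usr/local/bin/example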

Errors

Here are some of the kernel error messages I received along with my best interpretation of what they mean.

[ 1.062031] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[ 1.063689] integrity: Problem loading X.509 certificate -74

Error -74 means -EBADMSG, which means there’s something wrong with the certificate file. I have got that from /etc/keys/x509_ima.der not being in der format and I have got it from a der file that contained a key pair that wasn’t signed.

[    1.049170] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.093092] integrity: Problem loading X.509 certificate -126

Error -126 means -ENOKEY, so the key wasn’t in the file or the key wasn’t signed by the kernel signing key.

[    1.074759] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)

Error -2 means -ENOENT, so the file wasn’t found on the initrd. Note that it does NOT look at the root filesystem.

References

18 April, 2021 11:56AM by etbe

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Leader election 2021, Jonathan Carter re-elected.

The voting period and tally of votes for the Debian Project Leader election have just concluded, and the winner is Jonathan Carter!

455 of 1,018 Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2021 page.

Many thanks to Jonathan Carter and Sruthi Chandran for their campaigns, and to our Developers for voting.

18 April, 2021 08:00AM by Donald Norwood

April 17, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppAPT 0.0.7: Micro Update

A new version of the RcppAPT package interfacing from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN yesterday. This comes a good year after the previous maintenance update for release 0.0.6.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

The maintenance release responds to a call for updates from CRAN desiring that all implicit dependencies on the markdown and rmarkdown packages be made explicit via a Suggests: entry. Two of the many packages I maintain were part of the (large!) list in the CRAN email, and this is one of them. While making the update, we refreshed two other packaging details.

Changes in version 0.0.7 (2021-04-16)

  • Add rmarkdown to Suggests: as an implicit conditional dependency

  • Switch vignette to minidown and its water framework, add minidown to Suggests as well

  • Update two URLs in the README.md file

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 April, 2021 04:33PM

hackergotchi for Chris Lamb

Chris Lamb

Tour d'Orwell: Wallington

Previously in George Orwell travel posts: Sutton Courtenay, Marrakesh, Hampstead, Paris, Southwold & The River Orwell.

§

Wallington is a small village in Hertfordshire, approximately fifty miles north of London and twenty-five miles from the outskirts of Cambridge. George Orwell lived at No. 2 Kits Lane, better known as 'The Stores', on a mostly-permanent basis from 1936 to 1940, but he would continue to journey up from London on occasional weekends until 1947.

His first reference to The Stores can be found in early 1936, where Orwell wrote from Lancashire during research for The Road to Wigan Pier to lament that he would very much like "to do some work again — impossible, of course, in the [current] surroundings":

I am arranging to take a cottage at Wallington near Baldock in Herts, rather a pig in a poke because I have never seen it, but I am trusting the friends who have chosen it for me, and it is very cheap, only 7s. 6d. a week [£20 in 2021].

For those not steeped in English colloquialisms, "a pig in a poke" is an item bought without seeing it in advance. In fact, one general insight that may be drawn from reading Orwell's extant correspondence is just how much he relied on a close network of friends, belying the lazy and hagiographical picture of an independent and solitary figure. (Still, even Orwell cultivated this image at times, such as in a patently autobiographical essay he wrote in 1946. But note the off-hand reference to varicose veins here, for they would shortly re-appear as a symbol of Winston's repressed humanity in Nineteen Eighty-Four.)

Nevertheless, the porcine reference in Orwell's idiom is particularly apt, given that he wrote the bulk of Animal Farm at The Stores — his 1945 novella, of course, portraying a revolution betrayed by allegorical pigs. Orwell even drew inspiration for his 'fairy story' from Wallington itself, principally by naming the novel's farm 'Manor Farm', just as it is in the village. But the allusion to the purchase of goods is just as appropriate, as Orwell returned The Stores to its former status as the village shop, even going so far as to drill peepholes in a door to keep an Orwellian eye on the jars of sweets. (Unfortunately, we cannot complete a tidy circle of references, as whilst it is certainly Napoleon — Animal Farm's substitute for Stalin — who is quoted as describing Britain as "a nation of shopkeepers", it was actually the maraisard Bertrand Barère who first used the phrase).

§

"It isn't what you might call luxurious", he wrote in typical British understatement, but Orwell did warmly emote on his animals. He kept hens in Wallington (perhaps even inspiring the opening line of Animal Farm: "Mr Jones, of the Manor Farm, had locked the hen-houses for the night, but was too drunk to remember to shut the pop-holes.") and a photograph even survives of Orwell feeding his pet goat, Muriel. Orwell's goat was the eponymous inspiration for the white goat in Animal Farm, a decidedly under-analysed character who, to me, serves to represent an intelligentsia that is highly perceptive of the declining political climate but, seemingly content with merely observing it, does not offer any meaningful opposition. Muriel's aesthetic of resistance, particularly in her reporting on the changes made to the Seven Commandments of the farm, thus rehearses the well-meaning (yet functionally ineffective) affinity for 'fact checking' which proliferates today. But I digress.

There is a tendency to "read Orwell backwards", so I must point out that Orwell wrote several other works whilst at The Stores as well. This includes his Homage to Catalonia, his aforementioned The Road to Wigan Pier, not to mention countless indispensable reviews and essays as well. Indeed, another result of focusing exclusively on Orwell's last works is that we only encounter his ideas in their highly-refined forms, whilst in reality, it often took many years for concepts to fully mature — we first see, for instance, the now-infamous idea of "2 + 2 = 5" in an essay written in 1939.

This is important to understand for two reasons. First, although the ostentatiously austere Barnhill might have housed the physical labour of its writing, it is refreshing to reflect that the philosophical heavy-lifting of Nineteen Eighty-Four may have been performed in a relatively undistinguished North Hertfordshire village. But second, and perhaps more importantly, it emphasises that Orwell was just a man, and that any of us is fully capable of equally significant insight, with — to quote Christopher Hitchens — "little except a battered typewriter and a certain resilience."

§

The red commemorative plaque not only limits Orwell's tenure to the time he was permanently in the village, but also omits all reference to his first wife, Eileen O'Shaughnessy, whom he married in the village church in 1936.
Wallington's Manor Farm, the inspiration for the farm in Animal Farm. The lower sign enjoins the public to inform the police "if you see anyone on the [church] roof acting suspiciously". Non-UK-residents may be surprised to learn about the systematic theft of lead.

17 April, 2021 02:56PM

hackergotchi for Steve Kemp

Steve Kemp

Having fun with CP/M on a Z80 single-board computer.

In the past, I've talked about building a Z80-based computer. I made some progress towards that goal, in the sense that I took the initial (trivial) steps towards making something:

  • I built a clock-circuit.
  • I wired up a Z80 processor to the clock.
  • I got the thing running an endless stream of NOP instructions.
    • No RAM/ROM connected, tying all the bus-lines low, meaning every attempted memory-read returned 0x00, which is the Z80 NOP instruction. (A short simulation of this free-run behaviour follows below.)
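
For anyone who hasn't seen the free-run trick before, the behaviour is easy to model in a few lines. This is just a sketch of the concept, with no attempt at real Z80 timing or bus emulation: every fetch returns 0x00, the CPU treats it as NOP, and the program counter simply counts through the whole 64K address space and wraps.

  # Toy model of a free-running Z80 with all bus lines tied low.
  def free_run(steps=5):
      pc = 0x0000
      for _ in range(steps):
          opcode = 0x00            # bus tied low: every memory read is 0x00
          assert opcode == 0x00    # ...which the Z80 decodes as NOP
          print(f"fetch @ {pc:04X} -> NOP")
          pc = (pc + 1) & 0xFFFF   # PC just increments, wrapping at 64K

  free_run()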

But then I stalled, repeatedly, at designing an interface to RAM and ROM, so that it could actually do something useful. Over the lockdown I've been in two minds about getting sucked back down the rabbit-hole, so I compromised. I did a bit of searching on tindie, and similar places, and figured I'd buy a Z80-based single board computer. My requirements were minimal:

  • It must run CP/M.
  • The source-code to "everything" must be available.
  • I want it to run standalone, and connect to a host via a serial-port.

With those goals there were a bunch of boards to choose from; RC2014 is the standard choice - a well-engineered system which uses a common backplane and lets you build mini-boards to add functionality. So first you build the CPU card, then the RAM card, then the flash-disk card, etc. Over-engineered in one sense, extensible in another. (There are some single-board variants to cut down on soldering overhead, at the cost of less flexibility.)

After a while I came across https://8bitstack.co.uk/, which describes a simple board called the Z80 Playground.

The advantage of this design is that it loads code from a USB stick, making it easy to transfer files to/from it, without the need for a compact-flash card or similar. The downside is that the system has only 64K of RAM, meaning it cannot run CP/M 3, only 2.2. (CP/M 3.x requires more RAM, plus a banking/paging setup to swap between pages.)

When the system boots it loads code from an EEPROM, which then fetches the CP/M files from the USB-stick, copies them into RAM and executes them. The memory map can be split so you either have ROM & RAM, or you have just RAM (after the boot the ROM will be switched off). To change the initial stuff you need to reprogram the EEPROM, after that it's just a matter of adding binaries to the stick or transferring them over the serial port.

In only a couple of hours I got the basic stuff working as well as I needed:

  • A z80-assembler on my Linux desktop to build simple binaries.
  • An installation of Turbo Pascal 3.00A on the system itself.
  • An installation of FORTH on the system itself.
    • Which is nice.
  • A couple of simple games compiled from Pascal
    • Snake, Tetris, etc.
  • The Zork trilogy installed, along with Hitchhikers guide.

I had some fun with a CP/M emulator to get my hand back in before the board arrived; using that I tested my first "real" assembly-language program (cls, to clear the screen), and got the hang of the WordStar keyboard shortcuts used within the Turbo Pascal environment.

I have some plans for development:

  • Add command-line history (page-up/page-down) for the CP/M command-processor.
  • Add paging to TYPE, and allow terminating with Q (see the sketch below).
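
The TYPE change is simple enough to sketch in pseudo-code before committing to Z80 assembly. Here's a rough Python model of the intended behaviour, assuming a 24-line screen; the real thing would read the console via BDOS calls rather than input():

  import sys

  PAGE = 24  # lines per screenful on a typical CP/M terminal

  def type_with_paging(path):
      # Print a file one screenful at a time; 'Q' aborts, anything else continues.
      with open(path) as fh:
          for i, line in enumerate(fh, start=1):
              sys.stdout.write(line)
              if i % PAGE == 0:
                  if input("-- more (Q to quit) -- ").strip().upper() == "Q":
                      break

  type_with_paging(sys.argv[1])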

Nothing major, but fun changes that won't be too difficult to implement.

Since CP/M 2.x has no concept of sub-directories, you end up using drives for everything, so I implemented a "search-path": when you type "FOO" it will attempt to run "A:FOO.COM" if there is no matching file on the current drive. That's a nicer user-experience all round.
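
In rough Python terms the lookup order looks like the following sketch; the real implementation is Z80 assembly talking to the BDOS, so the names here are purely illustrative, with drives modelled as directories:

  import os

  def resolve_command(name, current_drive, fallback_drive="A"):
      # Try the current drive first, then fall back to the "search path"
      # drive, mirroring the patched command-processor behaviour.
      for drive in (current_drive, fallback_drive):
          candidate = os.path.join(drive, f"{name.upper()}.COM")
          if os.path.exists(candidate):
              return candidate
      return None  # command not found on either drive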

I also wrote some Z80 assembly code to search all drives for an executable if it isn't found on the current drive and isn't already drive-qualified. (Remember, CP/M doesn't have a concept of sub-directories.) That's actually pretty useful:

  B>LOCATE H*.COM
  P:HELLO   COM
  P:HELLO2  COM
  G:HITCH   COM
  E:HYPHEN  COM

I've also written some other trivial assembly language tools, which was surprisingly relaxing. Especially once I got back into the zen mode of optimizing for size.

I forked the upstream repository, mostly to tidy up the contents, rather than because I want to go into my own direction. I'll keep the contents in sync, because there's no point splitting a community even further - I guess there are fewer than 100 of these boards in the wild, probably far far fewer!

17 April, 2021 02:00PM

April 16, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Announcing ‘Introductions to Emacs Speaks Statistics’

A new website containing introductory videos and slide decks is now available for your perusal at ess-intro.github.io. It provides a series of introductions to the excellent Emacs Speaks Statistics (ESS) mode for the Emacs editor.

This effort started following my little tips, tricks, tools and toys series of short videos and slide decks “for the command-line and R, broadly-speaking”, which I had mentioned to friends curious about Emacs, and on the ess-help mailing list. And lo and behold, over the fall and winter sixteen of us came together in one GitHub org and are now proud to present the initial batch of videos about first steps, installing, using ESS with Spacemacs, customizing, and org-mode with ESS. More will hopefully follow; the group is open and you too can join: see the main repo and its wiki.

Even though this is in fact only the initial announcement post, it is flattering that we have already received over 350 views, four comments and twenty-one likes.

We hope it proves to be a useful starting point for some of you. The Emacs editor is quite uniquely powerful, and coupled with ESS makes for a rather nice environment for programming with data, or analysing, visualising, exploring, … data. But we are not zealots: there are many editors and environments under the sun, and most people are perfectly happy with their choice, which is wonderful. We also like ours, and sometimes someone asks ‘tell me more’ or ‘how do I start’. We hope this series satisfies this initial curiosity and takes it from here.

With that, my thanks to Frédéric, Alex, Tyler and Greg for the initial batch, and for everybody else in the org who chipped in with comments and suggestions. We hope it grows from here, so happy Emacsing with R from us!

16 April, 2021 12:49AM

April 15, 2021

Ian Jackson

Dreamwidth blocking many RSS readers and aggregators

There is a serious problem with Dreamwidth, which is impeding access for many RSS reader tools.

This started at around 0500 UTC on Wednesday morning, according to my own RSS reader cron job. A friend found #43443 in the DW ticket tracker, where a user of a minority web browser found they were blocked.

Local tests demonstrated that Dreamwidth had applied blocking by the HTTP User-Agent header, and was rejecting all user-agents not specifically permitted. Today, this rule has been relaxed and unknown user-agents are permitted, but user-agents for general HTTP client libraries are still blocked.

I'm aware of three unresolved tickets about this: #43444 #43445 #43447

We're told there by a volunteer member of Dreamwidth's support staff that this has been done deliberately for "blocking automated traffic". I'm sure the volunteer is just relaying what they've been told by whoever is struggling to deal with what I suppose is probably a spam problem. But it's still rather unsatisfactory.

I have suggested in my own ticket that a good solution might be to apply the new block only to posting and commenting (eg, maybe, by applying it only to HTTP POST requests). If the problem is indeed spam then that ought to be good enough, and would still let RSS readers work properly.
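
To make that suggestion concrete, here is a minimal WSGI-style sketch of the idea; the agent prefixes are invented for illustration, and Dreamwidth's actual setup (AWS rules) would express this differently:

  BLOCKED_AGENT_PREFIXES = ("python-requests", "libwww-perl", "curl")  # illustrative only

  def ua_filter_middleware(app):
      # Reject generic-library user-agents, but only on POST requests,
      # so RSS readers (which only ever GET) remain unaffected.
      def wrapped(environ, start_response):
          method = environ.get("REQUEST_METHOD", "GET")
          agent = environ.get("HTTP_USER_AGENT", "").lower()
          if method == "POST" and agent.startswith(BLOCKED_AGENT_PREFIXES):
              start_response("403 Forbidden", [("Content-Type", "text/plain")])
              return [b"Automated posting is not permitted.\n"]
          return app(environ, start_response)
      return wrapped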

I'm told that this new blocking has been done by "implementing" (actually, configuring or enabling) "some AWS rules for blocking automated traffic". I don't know what facilities AWS provides. This kind of helplessness is of course precisely the kind of thing that the Free Software movement is against and precisely the kind of thing that proprietary services like AWS produce.

I don't know if this blog entry will appear on planet.debian.org and in other people's readers and aggregators. I think it will at least be seen by other Dreamwidth users. I thought I would post here in the hope that other Dreamwidth users might be able to help get this fixed. At the very least other Dreamwidth blog owners need to know that many of their readers may not be seeing their posts at all.

If this problem is not fixed I will have to move my blog. One of the main points of having a blog is publishing it via RSS. RSS readers are of course based on general HTTP client libraries, and many if not most RSS readers have not bothered to customise their user-agent. Those are currently blocked.
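
In the meantime, the only workaround on the reader side is to send a distinctive User-Agent instead of the library default. A minimal sketch with Python's standard library (the feed URL here is illustrative):

  import urllib.request

  req = urllib.request.Request(
      "https://example.dreamwidth.org/data/rss",  # illustrative feed URL
      headers={"User-Agent": "my-rss-reader/1.0 (+https://example.org/contact)"},
  )
  with urllib.request.urlopen(req) as resp:
      feed = resp.read()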




15 April, 2021 11:08PM

hackergotchi for Martin Michlmayr

Martin Michlmayr

ledger2beancount 2.6 released

I released version 2.6 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.6:

  • Round calculated total if needed for price==cost comparison
  • Add narration_tag config variable to set narration from metadata
  • Retain unconsumed payee/payer metadata
  • Ensure UTF-8 output and assume UTF-8 input
  • Document UTF-8 issue on Windows systems
  • Add option to move posting-level tags to the transaction itself
  • Add support for the alias sub-directive of account declarations
  • Add support for the payee sub-directive of account declarations
  • Support configuration file called .ledger2beancount.yaml
  • Fix uninitialised value warning in hledger mode
  • Print warning if account in assertion has sub-accounts
  • Set commodity for commodity-less balance assertion
  • Expand path name of beancount_header config variable
  • Document handling of buckets
  • Document pre- and post-processing examples
  • Add Dockerfile to create Docker image

Thanks to Alexander Baier, Daniele Nicolodi, and GitHub users bratekarate, faaafo and mefromthepast for various bug reports and other input.

Thanks to Dennis Lee for adding a Dockerfile and to Vinod Kurup for fixing a bug.

Thanks to Stefano Zacchiroli for testing.

You can get ledger2beancount from GitHub.

15 April, 2021 05:18AM by Martin Michlmayr

April 14, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.10.4.0.0 on CRAN: New Upstream ‘Plus’

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 852 other packages on CRAN.

This new release brings us the just-released Armadillo 10.4.0. Upstream moves at a speed that is a little faster than the cadence CRAN likes. We released RcppArmadillo 0.10.2.2.0 on March 9, and upstream 10.3.0 came out shortly thereafter. We aim to accommodate CRAN with (roughly) monthly (or less frequent) releases, so by the time we were ready 10.4.0 had just come out.

As it turns out, the full testing had a benefit. Among the (currently) 852 CRAN packages using RcppArmadillo, two were failing tests. This is due to a subtle but important point. Early on we realized that it would be beneficial if the standard R control over random-number creation and seeding affected Armadillo too, which Conrad kindly accommodated with an optional RNG interface, which RcppArmadillo supplies. With recent changes he made, the normally-distributed draws the R side saw (via the Armadillo interface) changed, which led to the two failures. All hail unit tests.

So I mentioned this to Conrad, and with the usual Chicago-Brisbane time difference a fix was in my inbox late in my evening. The CRAN upload was then halted, as I had missed that, due to other changes he had made, random draws from a Gamma would now call std::rand(), which CRAN flags. Another email to Brisbane, another late (one-line) fix back, and all was good. We still encountered one package with an error, but flagged this as internal to that package’s setup, so Uwe let RcppArmadillo onto CRAN. I contacted that package’s maintainer, who was very receptive, and a change should be forthcoming. So with all that we have 0.10.4.0.0 on CRAN, giving us Armadillo 10.4.0.

The full set of changes follows. As Armadillo 10.3.0 was not uploaded to CRAN, its changes are included too.

Changes in RcppArmadillo version 0.10.4.0.0 (2021-04-12)

  • Upgraded to Armadillo release 10.4.0 (Pressure Cooker)

    • faster handling of triangular matrices by log_det()

    • added log_det_sympd() for log determinant of symmetric positive definite matrices

    • added ARMA_WARN_LEVEL configuration option, to control the degree of emitted warning messages

    • reduced the default degree of warning messages, so that failed decompositions, failed saving/loading, etc, no longer emit warnings

  • Applied upstream corrections for arma::randn draws when using the alternative (here: R) generator, and for arma::randg.

Changes in RcppArmadillo version 0.10.3.0.0 (2021-03-10)

  • Upgraded to Armadillo release 10.3 (Sunrise Chaos)

    • faster handling of symmetric positive definite matrices by pinv()

    • expanded .save() / .load() for dense matrices to handle coord_ascii format

    • for out of bounds access, element accessors now throw the more nuanced std::out_of_range exception, instead of only std::logic_error

    • improved quality of random numbers

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 April, 2021 12:22AM

April 13, 2021

François Marier

Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.
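
Should this happen again, the locate-and-forget loop is mechanical enough to script. Here is a minimal sketch, assuming restic is on the PATH and the repository/password environment variables are already exported; it parses snapshot IDs out of the restic find output (in the format shown above) before forgetting them:

  import re
  import subprocess

  def snapshots_for(kind, obj_id):
      # kind is "tree" or "pack"; returns the snapshot IDs referencing it.
      out = subprocess.run(
          ["restic", "find", f"--{kind}", obj_id],
          capture_output=True, text=True, check=True,
      ).stdout
      return set(re.findall(r"in snapshot ([0-9a-f]+)", out))

  corrupt_trees = [
      "4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c",
      "2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6",
  ]
  doomed = set()
  for tree in corrupt_trees:
      doomed |= snapshots_for("tree", tree)

  if doomed:
      subprocess.run(["restic", "forget", *sorted(doomed)], check=True)
      subprocess.run(["restic", "prune"], check=True)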

13 April, 2021 03:19AM

hackergotchi for Shirish Agarwal

Shirish Agarwal

what to write

First up, I am alive and well. I have been receiving calls from friends for quite some time, but now that I have become deaf it is a pain, and the hearing aids aren’t all that useful. But moreover, with us finding ourselves sinking lower and lower each and every day, it feels absurd to decide what to write and not write about India. Thankfully, I ran across this piece, which tells it in far more detail than I ever could. The only interesting and somewhat positive news I had is from the south of India; otherwise these are sad days, especially for the poor. The saddest story is that this time Covid has reached alarming proportions in India and, surprise surprise, this time the villain for many is my state of Maharashtra, even though it hasn’t received its share of GST proceeds for the last two years. And that was Kerala’s perspective: different state, different party, different political ideology altogether.

Kerala Finance Minister Thomas Issac views on GST, October 22, 2020 Indian Express.

I also briefly share the death of somewhat liberal film censorship in India, unlike Italy, which abolished film censorship altogether. I don’t really want to spend too much on how we have become No. 2 in Covid cases in the world, and perhaps in deaths also. Many people still believe in herd immunity but don’t really know what it means. So without taking too much time and effort, I bid adieu. May post when I’m hopefully feeling emotionally better, stronger 😦

13 April, 2021 03:14AM by shirishag75

April 12, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Squirrel!

“All comments on this article will now be moderated. The bar to pass moderation will be high, it's really time to think about something else. Did you all see that we have an exciting article on spinlocks?” Poor LWN <3

SPINLOCK!

12 April, 2021 11:24PM


April 11, 2021

Jelmer Vernooij

The upstream ontologist

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The upstream ontologist is a project that extracts metadata about upstream projects in a consistent format. It does this with a combination of heuristics and reading ecosystem-specific metadata files, such as Python’s setup.py and Rust’s Cargo.toml, as well as by scanning README files.

Supported Data Sources

It will extract information from a wide variety of sources, including:

Supported Fields

Fields that it currently provides include:

  • Homepage: homepage URL
  • Name: name of the upstream project
  • Contact: contact address of some sort of the upstream (e-mail, mailing list URL)
  • Repository: VCS URL
  • Repository-Browse: Web URL for viewing the VCS
  • Bug-Database: Bug database URL (for web viewing, generally)
  • Bug-Submit: URL to use to submit new bugs (either on the web or an e-mail address)
  • Screenshots: List of URLs with screenshots
  • Archive: Archive used - e.g. SourceForge
  • Security-Contact: e-mail or URL with instructions for reporting security issues
  • Documentation: Link to documentation on the web
  • Wiki: Wiki URL
  • Summary: one-line description of the project
  • Description: longer description of the project
  • License: Single line license description (e.g. “GPL 2.0”) as declared in the metadata[1]
  • Copyright: List of copyright holders
  • Version: Current upstream version
  • Security-MD: URL to markdown file with security policy

All data fields have a “certainty” associated with them (“certain”, “confident”, “likely” or “possible”), which gets set depending on how the data was derived or where it was found. If multiple possible values were found for a specific field, then the value with the highest certainty is taken.
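
That selection rule is easy to state in code. Here is a small sketch of the idea, with names that are illustrative rather than the ontologist's actual internals:

  # Certainty levels as described above, strongest first.
  CERTAINTY_ORDER = ["certain", "confident", "likely", "possible"]

  def best_value(candidates):
      # candidates is a list of (value, certainty) pairs collected from
      # the various data sources for a single field.
      return min(candidates, key=lambda vc: CERTAINTY_ORDER.index(vc[1]))[0]

  # e.g. best_value([("https://a.example", "likely"),
  #                  ("https://b.example", "certain")]) -> "https://b.example"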

Interface

The ontologist provides a high-level Python API as well as two command-line tools that can write output in two different formats:

For example, running guess-upstream-metadata on dulwich:

 % guess-upstream-metadata
 <string>:2: (INFO/1) Duplicate implicit target name: "contributing".
 Name: dulwich
 Repository: https://www.dulwich.io/code/
 X-Security-MD: https://github.com/dulwich/dulwich/tree/HEAD/SECURITY.md
 X-Version: 0.20.21
 Bug-Database: https://github.com/dulwich/dulwich/issues
 X-Summary: Python Git Library
 X-Description: |
   This is the Dulwich project.
   It aims to provide an interface to git repos (both local and remote) that
   doesn't call out to git directly but instead uses pure Python.
 X-License: Apache License, version 2 or GNU General Public License, version 2 or later.
 Bug-Submit: https://github.com/dulwich/dulwich/issues/new
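
Since the output is YAML, it is also easy to consume from scripts without touching the Python API. A small sketch, assuming PyYAML is installed, guess-upstream-metadata is on the PATH, and the docutils INFO line seen above goes to stderr rather than stdout:

  import subprocess
  import yaml  # PyYAML

  def guess_metadata(path="."):
      # Run guess-upstream-metadata in `path` and return its output as a dict.
      out = subprocess.run(
          ["guess-upstream-metadata"],
          cwd=path, capture_output=True, text=True, check=True,
      ).stdout
      return yaml.safe_load(out)

  meta = guess_metadata()
  print(meta.get("Repository"), meta.get("Bug-Database"))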

Lintian-Brush

lintian-brush can update DEP-12-style debian/upstream/metadata files that hold information about the upstream project that is packaged as well as the Homepage in the debian/control file based on information provided by the upstream ontologist. By default, it only imports data with the highest certainty - you can override this by specifying the --uncertain command-line flag.

[1]Obviously this won’t be able to describe the full licensing situation for many projects. Projects like scancode-toolkit are more appropriate for that.

11 April, 2021 10:40PM by Jelmer Vernooij

Vishal Gupta

Sikkim 101 for Backpackers

Host to Kanchenjunga, the world’s third-highest mountain peak, and the endangered Red Panda, Sikkim is a state in northeastern India. Nestled between Nepal, Tibet (China), Bhutan and West Bengal (India), the state offers a smorgasbord of cultures and cuisines. It’s hardly surprising, then, that the old spice route meanders through western Sikkim, connecting Lhasa with the ports of Bengal, although the latter could also be attributed to cardamom (kali elaichi), a perennial herb native to Sikkim, of which the state is the second-largest producer globally. Lastly, having been to and lived in India all my life, I can confidently say Sikkim is one of the cleanest & safest regions in India, making it ideal for first-time backpackers.

Brief History

  • 17th century: The Kingdom of Sikkim is founded by the Namgyal dynasty and ruled by Buddhist priest-kings known as the Chogyal.
  • 1890: Sikkim becomes a princely state of British India.
  • 1947: Sikkim continues its protectorate status with the Union of India, post-Indian-independence.
  • 1973: Anti-royalist riots take place in front of the Chogyal's palace, by Nepalis seeking greater representation.
  • 1975: Referendum leads to the deposition of the monarchy and Sikkim joins India as its 22nd state.

Languages

  • Official: English, Nepali, Sikkimese/Bhotia and Lepcha
  • Though Hindi and Nepali share the same script (Devanagari), they are not mutually intelligible. Yet, most people in Sikkim can understand and speak Hindi.

Ethnicity

  • Nepalis: Migrated in large numbers (from Nepal) and soon became the dominant community
  • Bhutias: People of Tibetan origin. Major inhabitants in Northern Sikkim.
  • Lepchas: Original inhabitants of Sikkim

Food

  • Tibetan/Nepali dishes (mostly consumed during winter)
    • Thukpa: Noodle soup, rich in spices and vegetables. Usually contains some form of meat. Common variations: Thenthuk and Gyathuk
    • Momos: Steamed or fried dumplings, usually with a meat filling.
    • Saadheko: Spicy marinated chicken salad.
    • Gundruk Soup: A soup made from Gundruk, a fermented leafy green vegetable.
    • Sinki : A fermented radish tap-root product, traditionally consumed as a base for soup and as a pickle. Eerily similar to Kimchi.
  • While pork and beef are pretty common, finding vegetarian dishes is equally easy.
  • Staple: Dal-Bhat with Subzi. Rice is a lot more common than wheat, possibly due to greater carb content and proximity to West Bengal, India’s largest producer of rice.
  • Good places to eat in Gangtok
    • Hamro Bhansa Ghar, Nimtho (Nepali)
    • Taste of Tibet
    • Dragon Wok (Chinese & Japanese)

Buddhism in Sikkim

  • Bayul Demojong (Sikkim) is the most sacred land in the Himalayas, as per the belief of the Northern Buddhists and various religious texts.
  • Sikkim was blessed by Guru Padmasambhava, the great Buddhist saint who visited Sikkim in the 8th century and consecrated the land.
  • However, Buddhism is said to have reached Sikkim only in the 17th century with the arrival of three Tibetan monks viz. Rigdzin Goedki Demthruchen, Mon Kathok Sonam Gyaltshen & Rigdzin Legden Je at Yuksom. Together, they established a Buddhist monastery.
  • In 1642 they crowned Phuntsog Namgyal as the first monarch of Sikkim and gave him the title of Chogyal, or Dharma Raja.
  • The faith became popular through its royal patronage and soon many villages had their own monastery.
  • Today Sikkim has over 200 monasteries.

Major monasteries

  • Rumtek Monastery, 20Km from Gangtok
  • Lingdum/Ranka Monastery, 17Km from Gangtok
  • Phodong Monastery, 28Km from Gangtok
  • Ralang Monastery, 10Km from Ravangla
  • Tsuklakhang Monastery, Royal Palace, Gangtok
  • Enchey Monastery, Gangtok
  • Tashiding Monastery, 35Km from Ravangla


Reaching Sikkim

  • Gangtok, being the capital, is easiest to reach amongst other regions, by public transport and shared cabs.
  • By Air:
    • Pakyong (PYG) :
      • Nearest airport from Gangtok (about 1 hour away)
      • Tabletop airport
      • Reserved cabs cost around INR 1200.
      • As of Apr 2021, the only flights to PYG are from IGI (Delhi) and CCU (Kolkata).
    • Bagdogra (IXB) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Larger airport with flights to most major Indian cities.
      • Reserved cabs cost about INR 3000. Shared cabs cost about INR 350.
  • By Train:
    • New Jalpaiguri (NJP) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Reserved cabs cost about INR 3000. Shared cabs from INR 350.
  • By Road:
    • NH10 connects Siliguri to Gangtok
    • If you can’t find buses plying to Gangtok directly, reach Siliguri and then take a cab to Gangtok.
  • Sikkim Nationalised Transport Div. also runs hourly buses between Siliguri and Gangtok and daily buses on other common routes. They’re cheaper than shared cabs.
  • Wizzride also operates shared cabs between Siliguri/Bagdogra/NJP, Gangtok and Darjeeling. They cost about the same as shared cabs but pack in half as many people in “luxury cars” (Innova, Xylo, etc.) and are hence more comfortable.

Gangtok

  • Time needed: 1D/1N
  • Places to visit:
    • Hanuman Tok
    • Ganesh Tok
    • Tashi View Point [6,800ft]
    • MG Marg
    • Sikkim Zoo
    • Gangtok Ropeway
    • Enchey Monastery
    • Tsuklakhang Palace & Monastery
  • Hostels: Tagalong Backpackers (would strongly recommend), Zostel Gangtok
  • Places to chill: Travel Cafe, Café Live & Loud and Gangtok Groove
  • Places to shop: Lal Market and MG Marg

Getting Around

  • Taxis operate on a reserved or shared basis. In case of the latter, you pool with other commuters, whom your taxi will pick up and drop off en route.
  • Naturally shared taxis only operate on popular routes. The easiest way to get around Gangtok is to catch a shared cab from MG Marg.
  • Reserved taxis for Gangtok sightseeing cost around INR 1000-1500, depending upon the spots you’d like to see
  • Key taxi/bus stands :
    • Deorali stand: For Darjeeling, Siliguri, Kalimpong
    • Vajra stand: For North & East Sikkim (Tsomgo Lake & Nathula)
    • Rumtek taxi: For Ravangla, Pelling, Namchi, Geyzing, Jorethang and Singtam.

Exploring Gangtok on an MTB


North Sikkim

  • The easiest & most economical way to explore North Sikkim is the 3D/2N package offered by shared-cab drivers.
  • This includes food, permits, cab rides and accommodation (1N in Lachen and 1N in Lachung)
  • The accommodation on both nights is at homestays with bare necessities, so keep your hopes low.
  • In the spirit of sustainable tourism, you’ll be asked to discard single-use plastic bottles, so please carry a bottle that you can refill along the way.
  • Zero Point and Gurdongmer Lake are snow-capped throughout the year

3D/2N Shared-cab Package Itinerary

  • Day 1
    • Gangtok (10am) - Chungthang - Lachung (stay)
  • Day 2
    • Pre-lunch : Lachung (6am) - Yumthang Valley [12,139ft] - Zero Point [15,300ft] - Lachung
    • Post-lunch : Lachung - Chungthang - Lachen (stay)
  • Day 3
    • Pre-lunch : Lachen (5am) - Kala Patthar - Gurdongmer Lake [16,910ft] - Lachen
    • Post-lunch : Lachen - Chungthang - Gangtok (7pm)
  • This itinerary is idealistic and depends on the level of snowfall.
  • Some drivers might switch up Day 2 and 3 itineraries by visiting Lachen and then Lachung, depending upon the weather.
  • Areas beyond Lachen & Lachung are heavily militarized since the Indo-China border is only a few miles away.


East Sikkim

Zuluk and Silk Route

  • Time needed: 2D/1N
  • Zuluk [9,400ft] is a small hamlet with an excellent view of the eastern Himalayan range including the Kanchenjunga.
  • Was once a transit point to the historic Silk Route from Tibet (Lhasa) to India (West Bengal).
  • The drive from Gangtok to Zuluk takes at least four hours. Hence, it makes sense to spend the night at a homestay and space out your trip to Zuluk

Tsomgo Lake and Nathula

  • Time Needed : 1D
  • A Protected Area Permit is required to visit these places, due to their proximity to the Chinese border
  • Tsomgo/Chhangu Lake [12,313ft]
    • Glacial lake, 40 km from Gangtok.
    • Remains frozen during the winter season.
    • You can also ride on the back of a Yak for INR 300
  • Baba Mandir
    • An old temple dedicated to Baba Harbhajan Singh, a Sepoy in the 23rd Regiment, who died in 1962 near the Nathu La during the Indo-China war.
  • Nathula Pass [14,450ft]
    • Located on the Indo-Tibetan border crossing of the Old Silk Route, it is one of the three open trading posts between India and China.
    • Plays a key role in the Sino-Indian Trade and also serves as an official Border Personnel Meeting(BPM) Point.
    • May get cordoned off by the Indian Army in event of heavy snowfall or for other security reasons.


West Sikkim

  • Time needed: 3D/2N
  • Hostels at Pelling : Mochilerro Ostillo

Itinerary

Day 1: Gangtok - Ravangla - Pelling

  • Leave Gangtok early, for Ravangla through the Temi Tea Estate route.
  • Spend some time at the tea garden and then visit Buddha Park at Ravangla
  • Head to Pelling from Ravangla

Day 2: Pelling sightseeing

  • Hire a cab and visit Skywalk, Pemayangtse Monastery, Rabdentse Ruins, Kecheopalri Lake, Kanchenjunga Falls.

Day 3: Pelling - Gangtok/Siliguri

  • Wake up early to catch a glimpse of Kanchenjunga at the Pelling Helipad around sunrise
  • Head back to Gangtok on a shared-cab
  • You could take a bus/taxi back to Siliguri if Pelling is your last stop.

Darjeeling

  • In my opinion, Darjeeling is lovely for a two-day detour on your way back to Bagdogra/Siliguri and not any longer (unless you’re a Bengali couple on a honeymoon)
  • Once a part of Sikkim, Darjeeling was ceded to the East India Company after a series of wars, with Sikkim briefly receiving a grant from EIC for “gifting” Darjeeling to the latter
  • Post-independence, Darjeeling was merged with the state of West Bengal.

Itinerary

Day 1 :

  • Take a cab from Gangtok to Darjeeling (shared-cabs cost INR 300 per seat)
  • Reach Darjeeling by noon and check in to your Hostel. I stayed at Hideout.
  • Spend the evening visiting either a monastery (or the Batasia Loop), Nehru Road and Mall Road.
  • Grab dinner at Glenary whilst listening to live music.

Day 2:

  • Wake up early to catch the sunrise and a glimpse of Kanchenjunga at Tiger Hill. Since Tiger Hill is 10km from Darjeeling and requires a permit, book your taxi in advance.
  • Alternatively, if you don’t want to get up at 4am or shell out INR 1500 on the cab to Tiger Hill, walk to the Kanchenjunga View Point down Mall Road
  • Next, queue up outside Keventers for breakfast with a view in a century-old cafe
  • Get a cab at Gandhi Road and visit a tea garden (Happy Valley is the closest) and the Ropeway. I was lucky to meet 6 other backpackers at my hostel and we ended up pooling the cab at INR 200 per person, with INR 1400 being on the expensive side, but you could bargain.
  • Get lunch, buy some tea at Golden Tips, pack your bags and hop on a shared-cab back to Siliguri. It took us about 4hrs to reach Siliguri, with an hour to spare before my train.
  • If you’ve still got time on your hands, then check out the Peace Pagoda and the Darjeeling Himalayan Railway (Toy Train). At INR 1500, I found the latter to be too expensive and skipped it.


Tips and hacks

  • Download offline maps, especially when you’re exploring Northern Sikkim.
  • Food and booze are the cheapest in Gangtok. Stash up before heading to other regions.
  • Keep your Aadhar/Passport handy since you need permits to travel to North & East Sikkim.
  • In rural areas and some cafes, you may get to try Rhododendron Wine, made from Rhododendron arboreum a.k.a Gurans. Its production is a little hush-hush since the flower is considered holy and is also the National Flower of Nepal.
  • If you don’t want to invest in a new jacket, boots or a pair of gloves, you can always rent them at nominal rates from your hotel or little stores around tourist sites.
  • Check the weather of a region before heading there. Low visibility and precipitation can quite literally dampen your experience.
  • Keep your itinerary flexible to accommodate for rest and impromptu plans.
  • Shops and restaurants close by 8pm in Sikkim and Darjeeling. Plan for the same.

Carry…

  • a couple of extra pairs of socks (woollen, if possible)
  • a pair of slippers to wear indoors
  • a reusable water bottle
  • an umbrella
  • a power bank
  • a couple of tablets of Diamox, to help deal with altitude sickness
  • extra clothes and wet bags since you may not get a chance to wash/dry your clothes
  • a few passport size photographs

Shared-cab hacks

  • Intercity rides can be exhausting. If you can afford it, pay for an additional seat.
  • Call shotgun on the drives beyond Lachen and Lachung. The views are breathtaking.
  • Return cabs tend to be cheaper (WB cabs travelling from SK and vice-versa)

Cost

  • My median daily expenditure (back when I went to Sikkim in early March 2021) was INR 1350.
  • This includes stay (bunk bed), food, wine and transit (shared cabs)
  • In my defence, I splurged on food, wine and extra seats in shared cabs, but if you’re on a budget, you could easily get by on INR 1 - 1.2k per day.
  • For a 9-day trip, I ended up shelling out nearly INR 15k, including 2AC trains to & from Kolkata
  • Note : Summer (March to May) and Autumn (October to December) are peak seasons, and thereby more expensive to travel around.

Souvenirs and things you should buy

Buddhist souvenirs :

  • Colourful Prayer Flags (great for tying on bikes or behind car windshields)
  • Miniature Prayer/Mani Wheels
  • Lucky Charms, Pendants and Key Chains
  • Cham Dance masks and robes
  • Singing Bowls
  • Common symbols: Om mani padme hum, Ashtamangala, Zodiac signs

Handicrafts & Handlooms

  • Tibetan Yak Wool shawls, scarfs and carpets
  • Sikkimese Ceramic cups
  • Thangka Paintings

Edibles

  • Darjeeling Tea (usually brewed and not boiled)
  • Wine (Arucha Peach & Rhododendron)
  • Dalle Khursani (Chilli) Paste and Pickle

Header Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY

11 April, 2021 02:04PM by Vishal Gupta