October 16, 2019

Antoine Beaupré

Theory: average bus factor = 1

Two articles recently made me realize that all my free software projects basically have a bus factor of one. I am the sole maintainer of every piece of software I have ever written that I still maintain. There are projects that I have been the maintainer of which have other maintainers now (most notably AlternC, Aegir and Linkchecker), but I am not the original author of any of those projects.

Now that I have a full time job, I feel the pain. Projects like Gameclock, Monkeysign, Stressant, and (to a lesser extent) Wallabako all need urgent work: the first three need to be ported to Python 3, the first two to GTK 3, and the latter will probably die because I am getting a new e-reader. (For the record, more recent projects like undertime and feed2exec are doing okay, mostly because they were written in Python 3 from the start, and the latter has extensive unit tests. But they do suffer from the occasional bitrot (the latter in particular) and need constant upkeep.)

Now that I barely have time to keep up with just the upkeep, I can't help but think all of my projects will just die if I stop working on them. I have the same feeling about the packages I maintain in Debian.

What does that mean? Does that mean those packages are useless? That no one cares enough to get involved? That I'm not doing a good job at including contributors?

I don't think so. I think I'm a friendly person online, and I try my best at writing good documentation and following up on my projects. What I have come to understand is even more depressing and scary than this being a personal failure: it is the situation with everyone, everywhere. The LWN article is not talking about silly things like a chess clock or a feed reader: we're talking about the Linux input drivers, a very deep, core component of the vast majority of computers running on the planet, which depend on a single maintainer. And I'm not talking about whether those people are paid or not; that's related, but not directly the question here. The same realization occurred with OpenSSL and NTP, GnuPG is in a similar situation, and the list just goes on and on.

A single guy maintains those projects! Is that a fluke? A statistical anomaly? Everything I feel, and read, and know in my decades of experience with free software shows me a reality that I've been trying to deny for all that time: it's the average.

My theory is this: our average bus factor is one. I don't have any hard evidence to back this up, no hard research to rely on. I'd love to be proven wrong. I'd love for this to change.

But unless the economics of technology production change significantly in the coming decades, this problem will remain, and probably worsen, as we keep scaffolding an entire civilization on the shoulders of hobbyists who are barely aware their work is being used to power phones, cars, airplanes and hospitals. A lot has been written on this, but nothing seems to be moving.

And if that doesn't scare you, it damn well should. As a user, one thing you can do is, instead of wondering if you should buy a bit of proprietary software, consider using free software and donating that money to free software projects instead. Lobby governments and research institutions to sponsor only free software projects. Otherwise this civilization will collapse in a crash of spaghetti code before it even has time to get flooded over.

16 October, 2019 07:21PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

PhD Poster

Thumbnail of the poster

Today the School of Computing organised a poster session for stage 2 & 3 PhD candidates. Here's the poster I submitted. jdowland-phd-poster.pdf (692K)

This is the first poster I've prepared for my PhD work. I opted to follow the "BetterPoster" principles established by Mike Morrison. These are best summarized in his #BetterPoster 2 minute YouTube video. I adapted this LaTeX #BetterPoster template. This template is licensed under the GPL v3.0 which requires me to provide the source of the poster, so here it is.

After browsing around other students' posters, two things I would now add to the poster are a mugshot (so people could easily determine whose poster it was, if they wanted to ask questions) and Red Hat's logo, to acknowledge their support of my work.

16 October, 2019 01:53PM

October 15, 2019

hackergotchi for Julien Danjou

Julien Danjou

Sending Emails in Python — Tutorial with Code Examples

What do you need to send an email with Python? Some basic programming and web knowledge, along with elementary Python skills. I assume you’ve already built a web app with this language and now need to extend its functionality with notifications or other email sending. This tutorial will guide you through the most essential steps of sending emails via an SMTP server:

  1. Configuring a server for testing (do you know why it’s important?)
  2. Local SMTP server
  3. Mailtrap test SMTP server
  4. Different types of emails: HTML, with images, and attachments
  5. Sending multiple personalized emails (Python is just invaluable for email automation)
  6. Some popular email sending options like Gmail and transactional email services

Served with numerous code examples written and tested on Python 3.7!

Sending an email using SMTP

The first good news about Python is that it has a built-in module for sending emails via SMTP in its standard library. No extra installations or tricks are required. You can import the module using the following statement:

import smtplib

To make sure that the module has been imported properly and get the full description of its classes and arguments, type in an interactive Python session:

help(smtplib)

Next, we will talk a bit about servers: choosing the right option and configuring it.

An SMTP server for testing emails in Python

When creating a new app or adding any functionality, especially when doing it for the first time, it’s essential to experiment on a test server. Here is a brief list of reasons:

  1. You won’t hit your friends’ and customers’ inboxes. This is vital when you test bulk email sending or work with an email database.
  2. You won’t flood your own inbox with testing emails.
  3. Your domain won’t be blacklisted for spam.

Local SMTP server

If you prefer working in the local environment, the local SMTP debugging server might be an option. For this purpose, Python offers an smtpd module. It has a DebuggingServer feature, which will discard the messages you send out and print them to stdout. It is compatible with all operating systems.

Set your SMTP server to localhost:1025

python -m smtpd -n -c DebuggingServer localhost:1025

In order to run an SMTP server on port 25, you’ll need root permissions:

sudo python -m smtpd -n -c DebuggingServer localhost:25

It will help you verify whether your code is working and point out the possible problems if there are any. However, it won’t give you the opportunity to check how your HTML email template is rendered.
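
To check that the debugging server is picking messages up, a minimal sketch along these lines can be used. The send_local helper is just an illustrative name of ours; it assumes the DebuggingServer above is already running on localhost:1025.

```python
import smtplib

# No authentication is needed: the debugging server accepts everything
# and prints the message to its own stdout.
sender = "from@example.com"
receiver = "to@example.com"

# The blank line separates the headers from the body
message = f"""\
Subject: Local test
To: {receiver}
From: {sender}

This message never leaves your machine."""

def send_local(msg, host="localhost", port=1025):
  # Hand the message to the local debugging server started above
  with smtplib.SMTP(host, port) as server:
    server.sendmail(sender, receiver, msg)

# send_local(message)  # uncomment once the debugging server is running
```

Watch the terminal running the debugging server: the full message, headers included, is printed there instead of being delivered.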

Fake SMTP server

A fake SMTP server imitates the work of a real third-party SMTP server. In the examples below, we will use Mailtrap. Beyond testing email sending, it lets us check how the email will be rendered and displayed, review the raw message data, and it provides a spam report. Mailtrap is very easy to set up: you just need to copy the credentials generated by the app and paste them into your code.

Here is how it looks in practice:

import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # your password generated by Mailtrap

Mailtrap makes things even easier. Go to the Integrations section in the SMTP settings tab and get a ready-to-use template of a simple message, with your Mailtrap credentials in it. The most basic way of instructing your Python script on who sends what to whom is the sendmail() instance method:

The code looks pretty straightforward, right? Let’s take a closer look at it and add some error handling (see the comments in between). To catch errors, we use the try and except blocks.

# The first step is always the same: import all necessary components:
import smtplib
from socket import gaierror

# Now you can play with your code. Let’s define the SMTP server separately here:
port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

# Specify the sender’s and receiver’s email addresses:
sender = "from@example.com"
receiver = "mailtrap@example.com"

# Type your message: use a blank line to separate the headers from the message body, and use 'f' to automatically insert variables in the text
message = f"""\
Subject: Hi Mailtrap
To: {receiver}
From: {sender}

This is my first message with Python."""

try:
  # Send your message with credentials specified above
  with smtplib.SMTP(smtp_server, port) as server:
    server.login(login, password)
    server.sendmail(sender, receiver, message)
except (gaierror, ConnectionRefusedError):
  # tell the script to report if your message was sent or which errors need to be fixed
  print('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
  print('Failed to connect to the server. Wrong user/password?')
except smtplib.SMTPException as e:
  print('SMTP error occurred: ' + str(e))
else:
  print('Sent')

Once you get the Sent result in your shell, you should see your message in your Mailtrap inbox:

Sending emails with HTML content

In most cases, you need to add some formatting, links, or images to your email notifications. All of these can be included in HTML content. For this purpose, Python has an email package.

We will deal with the MIME message type, which is able to combine HTML and plain text. In Python, it is handled by the email.mime module.

It is better to write a text version and an HTML version separately, and then merge them with the MIMEMultipart("alternative") instance. Such a message then has two rendering options: if the HTML isn’t rendered successfully for some reason, the text version will still be available.

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "multipart test"
message["From"] = sender_email
message["To"] = receiver_email
# Write the plain text part
text = """\
Hi,
Check out the new post on the Mailtrap blog:
SMTP Server for Testing: Cloud-based or Local?
https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server/
Feel free to let us know what content would be useful for you!"""

# write the HTML part
html = """\
<html>
  <body>
    <p>Hi,<br>
    Check out the new post on the Mailtrap blog:</p>
    <p><a href="https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server">SMTP Server for Testing: Cloud-based or Local?</a></p>
    <p>Feel free to <strong>let us</strong> know what content would be useful for you!</p>
  </body>
</html>
"""

# convert both parts to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message.attach(part1)
message.attach(part2)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')

The resulting output

Sending Emails with Attachments in Python

The next step in mastering sending emails with Python is attaching files. Attachments are still MIME objects, but we need to encode them with the base64 module. A couple of important points about attachments:

  1. Python lets you attach text files, images, audio files, and even applications. You just need to use the appropriate email class like email.mime.audio.MIMEAudio or email.mime.image.MIMEImage. For the full information, refer to this section of the Python documentation.
  2. Keep the file size in mind: sending files over 20MB is bad practice.

In transactional emails, PDF files are the most frequently used: we usually get receipts, tickets, boarding passes, order confirmations, etc. So let’s review how to send a boarding pass as a PDF file.

import smtplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

subject = "An example of boarding pass"
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart()
message["From"] = sender_email
message["To"] = receiver_email
message["Subject"] = subject

# Add body to email
body = "This is an example of how you can send a boarding pass in attachment with Python"
message.attach(MIMEText(body, "plain"))

filename = "yourBP.pdf"
# Open PDF file in binary mode
# We assume that the file is in the directory where you run your Python script from
with open(filename, "rb") as attachment:
  # The content type "application/octet-stream" means that a MIME attachment is a binary file
  part = MIMEBase("application", "octet-stream")
  part.set_payload(attachment.read())

# Encode to base64
encoders.encode_base64(part)
# Add a header naming the attachment after the file
part.add_header("Content-Disposition", f"attachment; filename= {filename}")
# Add attachment to your message and convert it to string
message.attach(part)

text = message.as_string()
# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, text)

print('Sent')

The received email with your PDF

To attach several files, you can call the message.attach() method several times.
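
That repetition can be wrapped in a small loop. Here is a rough, self-contained sketch: the attach_file helper and the file names are ours for illustration, and the two tiny placeholder files are created on the spot only so the snippet runs as-is.

```python
import os
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

# Create two tiny placeholder files so this sketch is self-contained;
# in practice you would already have the real files on disk.
for name in ("ticket.pdf", "invoice.pdf"):
  with open(name, "wb") as fh:
    fh.write(b"%PDF-1.4 placeholder")

def attach_file(message, path):
  # Read the file, wrap it in a generic binary MIME part,
  # base64-encode it, and label it with its own file name
  with open(path, "rb") as fh:
    part = MIMEBase("application", "octet-stream")
    part.set_payload(fh.read())
  encoders.encode_base64(part)
  part.add_header("Content-Disposition",
                  f"attachment; filename={os.path.basename(path)}")
  message.attach(part)

message = MIMEMultipart()
for path in ("ticket.pdf", "invoice.pdf"):
  attach_file(message, path)

print(len(message.get_payload()))  # → 2
```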

How to send an email with image attachment

Images, even if they are part of the message body, are attachments as well. There are three types: CID attachments (embedded as a MIME object), base64 images (inline embedding), and linked images.

For adding a CID attachment, we will create a MIME multipart message with a MIMEImage component:

import smtplib
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "CID image test"
message["From"] = sender_email
message["To"] = receiver_email

# write the HTML part
html = """\
<html>
<body>
<img src="cid:myimage">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# We assume that the image file is in the same directory that you run your Python script from
with open('mailtrap.jpg', 'rb') as img:
  image = MIMEImage(img.read())
# Specify the ID according to the img src in the HTML part
image.add_header('Content-ID', '<myimage>')
message.attach(image)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')

The received email with CID image

The CID image is shown both as part of the HTML message and as an attachment. Messages with this image type are often considered spam: check the Analytics tab in Mailtrap to see the spam rate and recommendations on how to improve it. Many email clients, Gmail in particular, don’t display CID images in most cases. So let’s review how to embed a base64 encoded image instead.

Here we will use the base64 module and experiment with the same image file:

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import base64

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "inline embedding"
message["From"] = sender_email
message["To"] = receiver_email

# We assume that the image file is in the same directory that you run your Python script from
with open("image.jpg", "rb") as image:
  encoded = base64.b64encode(image.read()).decode()

html = f"""\
<html>
<body>
<img src="data:image/jpeg;base64,{encoded}">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')

A base64 encoded image

Now the image is embedded into the HTML message and is not available as an attached file. Python has encoded our JPEG image, and if we go to the HTML Source tab, we will see the long image data string in the img src attribute.

How to Send Multiple Emails

Sending multiple emails to different recipients and personalizing them is where email automation in Python really shines.

To add several more recipients, you can just type their addresses in separated by a comma, or add Cc and Bcc fields. But if you work with bulk email sending, Python will save you with loops.
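
As a sketch of how To, Cc, and Bcc interact (the addresses here are hypothetical): To and Cc are written into the message headers, while Bcc recipients are only passed to sendmail(), so they never appear in the delivered message.

```python
from email.mime.text import MIMEText

to = ["john@example.com", "jane@example.com"]
cc = ["boss@example.com"]
bcc = ["archive@example.com"]

message = MIMEText("The same message for several recipients.", "plain")
message["Subject"] = "Team update"
message["From"] = "sender@example.com"
# To and Cc become visible headers; Bcc is deliberately left out
message["To"] = ", ".join(to)
message["Cc"] = ", ".join(cc)

# sendmail() delivers to every address in this list, Bcc included:
all_recipients = to + cc + bcc
# server.sendmail("sender@example.com", all_recipients, message.as_string())
```

The sendmail() call is commented out here because it needs a live SMTP connection; in the examples above it would go inside the usual `with smtplib.SMTP(...)` block.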

One of the options is to create a database in a CSV format (we assume it is saved to the same folder as your Python script).

We often see our own names in transactional or even promotional emails. Here is how we can do it with Python.

Let’s organize the list in a simple table with just two columns: name and email address. It should look like the following example:

#name,email
John Johnson,john@johnson.com
Peter Peterson,peter@peterson.com

The code below will open the file and loop over its rows line by line, replacing the {name} with the value from the “name” column.

import csv
import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

message = """Subject: Order confirmation
To: {recipient}
From: {sender}

Hi {name}, thanks for your order! We are processing it now and will contact you soon"""
sender = "new@example.com"
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  with open("contacts.csv") as file:
    reader = csv.reader(file)
    next(reader)  # skip the header row
    for name, email in reader:
      server.sendmail(
        sender,
        email,
        message.format(name=name, recipient=email, sender=sender),
      )
      print(f'Sent to {name}')

In our Mailtrap inbox, we see two messages: one for John Johnson and another for Peter Peterson, delivered simultaneously:

Sending emails with Python via Gmail

When you are ready to send emails to real recipients, you can configure a production server. The choice depends on your needs, goals, and preferences: your localhost or any external SMTP server.

One of the most popular options is Gmail, so let’s take a closer look at it.

We can often see titles like “How to set up a Gmail account for development”. In fact, it means that you will create a new Gmail account and will use it for a particular purpose.

To be able to send emails via your Gmail account, you need to provide your application access to it. You can Allow less secure apps or take advantage of the OAuth2 authorization protocol. The latter is considerably more difficult, but is recommended for security reasons.

Further, to use a Gmail server, you need to know:

  • the server name = smtp.gmail.com
  • port = 465 for SSL/TLS connection (preferred)
  • or port = 587 for STARTTLS connection
  • username = your Gmail email address
  • password = your password

import smtplib
import ssl

port = 465
password = input("your password")
context = ssl.create_default_context()

with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
  server.login("my@gmail.com", password)
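
As the port list above mentions, port 587 with STARTTLS is the alternative: you connect in plain text and then upgrade the connection before logging in. A minimal sketch, with a helper name of our own choosing:

```python
import smtplib
import ssl

port = 587
context = ssl.create_default_context()

def send_via_starttls(sender, password, receiver, message):
  with smtplib.SMTP("smtp.gmail.com", port) as server:
    server.starttls(context=context)  # upgrade to an encrypted connection
    server.login(sender, password)
    server.sendmail(sender, receiver, message)
```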

If you prefer simplicity, you can use Yagmail, a dedicated Gmail/SMTP client. It makes email sending really easy. Just compare the above examples with these several lines of code:

import yagmail

yag = yagmail.SMTP()
contents = [
  "This is the body, and here is just text http://somedomain/image.png",
  "You can find an audio file attached.",
  '/local/path/to/song.mp3',
]
yag.send('to@someone.com', 'subject', contents)

Next steps with Python

Those are just the basic options for sending emails with Python. To get great results, review the Python documentation and experiment with your own code!

There are a bunch of Python frameworks and libraries that make creating apps more elegant and focused. In particular, some of them can help improve your experience of building email sending functionality. The most popular frameworks are:

  1. Flask, which offers a simple interface for email sending: Flask Mail.
  2. Django, which can be a great option for building HTML templates.
  3. Zope comes in handy for website development.
  4. Marrow Mailer is a dedicated mail delivery framework adding various helpful configurations.
  5. Plotly and its Dash can help with mailing graphs and reports.

Also, here is a handy list of Python resources sorted by their functionality.

Good luck and don’t forget to stay on the safe side when sending your emails!

This article was originally published at Mailtrap’s blog: Sending emails with Python

15 October, 2019 10:33AM by Julien Danjou

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, September 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).

Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver level sponsor! With the reduced involvement of another long term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

15 October, 2019 07:20AM by Raphaël Hertzog

hackergotchi for Norbert Preining

Norbert Preining

State of Calibre in Debian

To counter some recent FUD spread about Calibre in general and Calibre in Debian in particular, here is a concise explanation of the current state.

Many might have read my previous post on Calibre as a moratorium, but that was not my intention. Development of Calibre in Debian is continuing, despite the current stall.

Since it seems to be unclear what the current blockers are: there are two orthogonal problems regarding recent Calibre in Debian. One is the update to version 4 and the switch to qtwebengine; the other is the purge of Python 2 from Debian.

Current state

Debian sid and testing currently hold Calibre 3.48, based on Python 2. Due to the ongoing purge, necessary modules (in particular python-cherrypy3) have been removed from Debian/sid, making the current Calibre package RC-buggy (see this bug report). That means that, within a reasonable time frame, Calibre will be removed from testing.


Now for the two orthogonal problems we are facing:

Calibre 4 packaging

Calibre 4 is already packaged for Debian (see the master-4.0 branch in the git repository). Uploading was first blocked due to a disappearing python-pyqt5.qwebengine, which was extracted from the PyQt5 package into its own. Thanks to the maintainers we now have a Python 2 version built from the qtwebengine-opensource-src package.

But that still doesn’t cut it for Calibre 4, because it requires Qt 5.12, but Debian still carries 5.11 (released 1.5 years ago).

So the above mentioned branch is ready for upload as soon as Qt 5.12 is included in Debian.

Python 3

The other big problem is the purge of Python 2 from Debian. Upstream Calibre has supported building Python 3 versions for some months now, with ongoing bug fixes. But including this in Debian poses some problems: the first stumbling block was a missing Python 3 version of mechanize, which I have adopted after a 7-year hiatus, updated to the newest version, and provided Python 3 modules for.

Packaging for Debian is done in the experimental branch of the git repository, and is again ready to be uploaded to unstable.

But the much bigger concern here is that practically none of Calibre’s external plugins are ready for Python 3. Paired with the fact that probably most users of Calibre use one or another external plugin (just to mention the Kepub plugin, DeDRM, …), uploading a Python 3 based version of Calibre would break usage for practically all users.


Since I put our (Debian’s) users first, I have thus decided to keep Calibre based on Python 2 as long as Debian allows. Unfortunately the overzealous purge spree has already introduced RC bugs, which means I am now forced to decide whether I upload a version of Calibre that breaks most users, or don’t upload and see Calibre removed from testing. Not an easy decision.

Thus, my original plan was to keep Calibre based on Python 2 as long as possible, and hope that upstream switches to Python 3 in time for the next Debian release. This would trigger a continuous update of most plugins and would allow Debian users a seamless transition without complete breakage. Unfortunately, this plan does not seem to be executable.

Now let us return to the FUD being spread:

  • Calibre is actively developed upstream
  • Calibre in Debian is actively maintained
  • Calibre is Python 3 ready, but the plugins are not
  • Calibre 4 is ready for Debian as soon as the dependencies are updated
  • Calibre/Python3 is ready for upload to Debian, but breaks practically all users

I hope that helps everyone gain some understanding of the current state of Calibre in Debian.

15 October, 2019 05:25AM by Norbert Preining

Sergio Durigan Junior

Installing Gerrit and Keycloak for GDB

Back in September, we had the GNU Tools Cauldron in the gorgeous city of Montréal (perhaps I should write a post specifically about it...). One of the sessions we had was the GDB BoF, where we discussed, among other things, how to improve our patch review system.

I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I didn’t feel very confident expressing them during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and that we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I’m already running the Buildbot master for GDB; what’s the problem with managing yet another service? Oh, well.

Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don’t totally support the idea of migrating from the mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don’t necessarily agree with the choice.

Ah, and I’m writing this post mostly because I want to be able to close the 300+ tabs I had open in my Firefox during these last weeks, while searching for ways to solve the myriad of problems I faced during the setup!

The initial plan

My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It was set up back in 2016, runs an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn’t support authentication or user registration, barely supports projects, etc. It’s basically what you get from a pristine installation of the gerrit RPM package in RHEL 6.

I won't go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.

The OSCI folks

Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they've been able to meet every request I made! So, kudos to them! They're the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).

So, it shouldn’t come as a surprise that when I decided to look for another place to host gerrit, they were my first choice. And again, they delivered :-).

Now, it was time to start thinking about the gerrit set up.

User registration?

Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn’t handle user registration by itself. It is possible to have a very rudimentary user registration "system", but it relies on the site administrator manually registering users (via htpasswd) and managing everything by him/herself.

It was quite obvious to me that we would need some kind of access control (we're talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began...

Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense to use it. Unfortunately, the project's community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn't really offer self-service user registration.

After exchanging a few emails with Marc, he told me about Keycloak. It's a full-fledged Identity Management and Access Management software; it supports OAuth2 and LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBoss, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP...

Oh, man. Where do I start? Actually, I think it's enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I'm afraid this is true. I just didn't feel like wasting a lot of time trying to understand how it works, only to have to solve the "user registration" problem later (because of course, OpenLDAP is just an LDAP server).

OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!

Setting up Keycloak

It's pretty easy to set Keycloak up. The official website provides a .tar.gz file which contains the whole directory tree for the project, along with helper scripts, .jar files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.
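As a rough sketch of those first steps (assuming the WildFly-based distribution that was current at the time; the version number below is just an example):

```shell
# Unpack the release tarball and create the initial admin account
tar xzf keycloak-7.0.0.tar.gz
cd keycloak-7.0.0
bin/add-user-keycloak.sh -u admin

# Start the server; the admin console then lives under /auth on port 8080
bin/standalone.sh
```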

For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won't go into details on how to do this here, because it's easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).
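To give an idea of the "strange way": with the WildFly-based distribution, the JDBC driver is not dropped into a lib/ directory but registered as a module, i.e. the downloaded .jar goes into a directory such as modules/system/layers/keycloak/org/postgresql/main/, next to a module.xml roughly like this (the driver version is an example):

```xml
<!-- Sketch of modules/system/layers/keycloak/org/postgresql/main/module.xml;
     the datasource in standalone.xml then refers to the org.postgresql module -->
<module xmlns="urn:jboss:module:1.3" name="org.postgresql">
    <resources>
        <resource-root path="postgresql-42.2.8.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```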

Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., https://example.org instead of https://example.org/keycloak). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn't make it fully work.
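With the WildFly-based distribution, moving to another port can be done with a single socket-binding offset rather than editing the configuration; the offset value here is just an example:

```shell
# Shift all of Keycloak's ports by 100 (so 8080 becomes 8180, and so on)
bin/standalone.sh -Djboss.socket.binding.port-offset=100
```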

A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I've been using OpenJDK 11 without problems so far.

Setting up Gerrit

The fun begins now!

The gerrit project also offers a .war file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it's called). Gerrit will create a directory full of interesting stuff; the most important for us is the etc/ subdirectory, which contains all of the configuration files for the application.
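The initialization step can be sketched like this (the .war file name and target directory are examples):

```shell
# Initialize a new Gerrit site from the downloaded .war
java -jar gerrit-3.0.3.war init -d ~/gerrit-site

# The interesting configuration files end up under etc/
ls ~/gerrit-site/etc/
```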

After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn't work out of the box with OpenJDK 11. I had to make a small but important addition in the file etc/gerrit.config:

[container]
    ...
    javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
    ...

After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn't sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add AllowEncodedSlashes On to the Apache configuration file! This was causing a very strange error in Gerrit (a java.lang.StringIndexOutOfBoundsException!), which didn't make sense. In the end, my Apache config file looks like this:

<VirtualHost *:80>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
</VirtualHost>

<VirtualHost *:443>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / /r/

    SSLEngine On
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/privkey.pem
    SSLCertificateChainFile /path/to/chain.pem

    # Good practices for SSL
    # taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>

    # intermediate configuration, tweak to your needs
    SSLProtocol             all -SSLv3
    SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    SSLHonorCipherOrder     on
    SSLCompression          off
    SSLSessionTickets       off

    # OCSP Stapling, only in httpd 2.3.3 and later
    #SSLUseStapling          on
    #SSLStaplingResponderTimeout 5
    #SSLStaplingReturnResponderErrors off
    #SSLStaplingCache        shmcb:/var/run/ocsp(128000)

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"

    ProxyRequests Off
    ProxyVia Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>

    AllowEncodedSlashes On
    ProxyPass /r/ http://127.0.0.1:8081/ nocanon
    #ProxyPassReverse /r/ http://127.0.0.1:8081/r/
</VirtualHost>

I confess I was almost giving up on Keycloak when I finally found the problem...

Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak's user registration feature also worked OK...

Ah, one interesting thing: the user logout wasn't really working as expected. The user was able to log out from gerrit, but not from Keycloak, so when the user clicked on "Sign in", Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak's logout page, like this:

[auth]
    ...
    logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
    ...

After that, it was already possible to start worrying about configuring gerrit itself. I don't know if I'll write a post about that, but let me know if you want me to.

Conclusion

If you ask me if I'm totally comfortable with the way things are set up now, I can't say that I am 100%. I mean, the setup seems robust enough that it won't cause problems in the long run, but what bothers me is that I'm using technologies that are alien to me. I'm used to setting up things written in Python, C and C++, with very simple yet powerful configuration mechanisms, and an easy way to discover what's wrong when something bad happens.

I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. It's really tough to debug something when you don't even know where the code is or how to modify it!

All in all, I'm happy that this whole adventure has come to an end, and now all that's left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.

My final take is that this is all worth it as long as Free Software and user freedom are the ones who benefit.

P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.

15 October, 2019 04:00AM by Sergio Durigan Junior

hackergotchi for Chris Lamb

Chris Lamb

Tour d'Orwell: The River Orwell

Continuing my Orwell-themed peregrination, a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden-centre-cum-food-hall.

15 October, 2019 12:19AM

hackergotchi for Martin Pitt

Martin Pitt

Hardening Cockpit with systemd (socket activation)³

Background A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us to achieve our goal rather elegantly and robustly.

15 October, 2019 12:00AM

October 14, 2019

Arturo Borrero González

What to expect in Debian 11 Bullseye for nftables/iptables

Logo

Debian 11, codename Bullseye, is already in the works. It is interesting to make decisions early in the development cycle to give people time to accommodate and integrate accordingly, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released somewhere in 2021, so there is still plenty of time ahead.

The situation at the release of Debian 10 Buster was that iptables used the -nft backend by default, and one had to explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.
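For reference, that backend selection goes through the alternatives system; switching looks something like this (as root):

```shell
# Switch to the legacy x_tables-based binaries...
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

# ...and back to the default nf_tables-based backend
update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
```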

Back in July 2019, I started an email thread in the debian-devel@lists.debian.org mailing lists looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, having such priority level seems pointless and inconsistent. There was agreement, and I already made the changes to both packages.

This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you will need to use aptitude install iptables to download and install it from the package repository.

The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewaling in Debian. I think this is in line with the direction other distros are moving. It turns out firewalld integrates pretty well with the system, includes a DBus interface and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.
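For those who have never touched firewalld, its day-to-day interface is the firewall-cmd tool; a minimal taste (requires the firewalld daemon running):

```shell
# Show the zone new interfaces land in
firewall-cmd --get-default-zone

# Persistently open a service port, then apply the permanent config
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
```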

14 October, 2019 05:00PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Bpfcc New Release

BPF Compiler Collection 0.11.0

bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. After the 0.8.0 release, this has been the next one uploaded to Debian.

Multiple source repositories

This release brought in dependencies on another set of sources, from the libbpf project. In the upstream repo, it is still a topic of discussion how to release tools where one depends on another, in unison. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.

Multiple source archive for a Debian package

So I had read in the past about multiple source tarballs for a single package in Debian, but never tried it because I wasn't maintaining anything in Debian that needed it. With bpfcc, it was now a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of what has to be done. The article is very clear and concise on the topic.
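The core of the mechanism is a naming convention: besides the usual pkg_version.orig.tar.gz, each extra upstream component gets a pkg_version.orig-component.tar.gz that dpkg-source and gbp match by name. A throwaway sketch of what those files look like for this package (the contents here are fake, just to show the naming):

```shell
# Build two dummy upstream trees and pack them with the component
# tarball naming convention: <pkg>_<ver>.orig.tar.gz for the main
# source, <pkg>_<ver>.orig-<component>.tar.gz for each component.
mkdir -p demo/bpfcc-0.11.0 demo/libbpf
echo main > demo/bpfcc-0.11.0/README
echo component > demo/libbpf/README
tar -C demo -czf demo/bpfcc_0.11.0.orig.tar.gz bpfcc-0.11.0
tar -C demo -czf demo/bpfcc_0.11.0.orig-libbpf.tar.gz libbpf
ls demo/*.tar.gz
```

At build time, dpkg-source unpacks the component tarball into a directory named after the component inside the source tree, which matches the `building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz` lines in the log below.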

Git Buildpackage

gbp is my tool of choice for packaging in Debian, so I did a quick look to check how gbp would take care of it. Everything was in place, and it Just Worked:

rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
gbp:info: Performing the build
dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
W: Unmet build-dependency in source
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
dh clean --buildsystem=cmake --with python3 --no-parallel
   dh_auto_clean -O--buildsystem=cmake -O--no-parallel
   dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
   dh_clean -O--buildsystem=cmake -O--no-parallel
dpkg-source: info: using source format '3.0 (quilt)'
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
I: Generating source changes file for original dsc
dpkg-genchanges: info: including full source code in upload
dpkg-source: info: unapplying fix-install-path.patch
ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
W: cgroups are not available on the host, not using them.
I: pbuilder: network access will be disabled during build
I: Current time: Sun Oct 13 19:53:57 IST 2019
I: pbuilder-time-stamp: 1570976637
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: Mounting /var/cache/apt/archives/
I: policy-rc.d already exists
W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
I: using eatmydata during job
I: Using pkgname logfile
I: Current time: Sun Oct 13 19:54:04 IST 2019
I: pbuilder-time-stamp: 1570976644
I: Setting up ccache
I: Copying source file
I: copying [../bpfcc_0.11.0-1.dsc]
I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
I: copying [../bpfcc_0.11.0.orig.tar.gz]
I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
I: Extracting source
dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
I: Not using root during the build.

14 October, 2019 09:24AM by Ritesh Raj Sarraf (rrs@researchut.com)

Utkarsh Gupta

Joining Debian LTS!

Hey,

(DPL Style):
TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D


Here’s something interesting that happened last weekend!
Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person he is, it was an absolute pleasure meeting him and also, I got a chance to talk about Debian LTS in more detail.

I was introduced to Debian LTS by Abhijith during his talk in MiniDebConf Delhi. And since then, I’ve been kinda interested in that project!
But finally it was here that things got a little “official” and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

I had almost no idea what to do next, so the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
And finally in September, I started triaging and fixing the CVEs for Jessie and Stretch (mostly the former).

Thanks to Abhijith, who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

With that, I could fix a couple of CVEs and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

I mostly worked (as a trainee) on:

  • CVE-2019-10751, affecting httpie, and
  • CVE-2019-16680, affecting file-roller.

And finally this happened:
Commit that mattered!
(Though there’s a little hiccup that happened there, but that’s something we can ignore!)

So finally, I’ll be working with the team from this month on!
As Holger says, very much yay! :D

Until next time.
:wq for today.

14 October, 2019 12:20AM

October 13, 2019

hackergotchi for Debian XMPP Team

Debian XMPP Team

New Dino in Debian

Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice new features. Users of Debian testing (bullseye) might like to try them:

  • XEP-0391: Jingle Encrypted Transports (explained here)
  • XEP-0402: Bookmarks 2 (explained here)

Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1, because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).

There have been other XMPP related updates in Debian since release of buster, among them:

You might be interested in October's XMPP newsletter, also available in German.

13 October, 2019 10:00PM by Martin

Iustin Pop

Actually fixing a bug

One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work is almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

So even my travis dashboard had been red for a while, but I didn't look into it until today. Since I hadn't changed anything recently and the travis builds just started to fail, I was sure it was just environment changes that needed to be taken into account.

And indeed it was so, for two out of three projects. For the third one… I actually got to fix a bug, introduced at the beginning of the year, which gcc (the same gcc that originally passed it) started to trip on a while back. I even had to read the man page of snprintf! It was fun ☺; too bad I don’t have enough time to do this more often…

My travis dashboard is green again, and the “test suite” (if you can call it that) has been expanded to explicitly catch this specific problem in the future.

13 October, 2019 06:43PM

October 12, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

Social media, knowledge and some history of Banking

First of all, Happy Dussehra to everybody. While Dussehra in India is a symbol of many things, it is a symbol of forgiveness and new beginnings. While I don’t know about new beginnings, I do feel there is still a lot of baggage that needs to be left behind. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

First of all, thank you to the Debian GNOME team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working on it. Hopefully those bugs and others will be fixed soon, so we can install GNOME without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and Telegram channels, at least those related to Debian, become echo chambers one way or the other, as you do not get any serious opposition. On Twitter, while it’s highly toxic, you also get the urge to fight the good fight when, either due to principles or for some other reason (usually paid trolls), people fight. While I follow my own rules on Twitter apart from their TOS, here are a few rules I feel new people going on social media in India, or perhaps elsewhere as well, could use –

  1. It is difficult to remain neutral and stick to the facts. If you just stick to the facts, you will be branded as urban naxal or some such names.
  2. I find that many times, if you are calm and don’t react, the other side turns out to be curious, and displays a lack of knowledge about things you thought everyone knew. Whether that is due to lack of education, lack of knowledge or pretension; if it is pretension, they are caught sooner or later.
  3. Be civil at all times; if somebody harasses you or calls you names, report them and block them, although Twitter still needs to fix the reporting process a whole lot more. When even somebody like me (with a bit of understanding of law, technology, language etc.) had a hard time figuring out Twitter’s reporting ways, I don’t know how many people would be able to use it successfully. Maybe they make it so unhelpful so that the traffic flows no matter what. I do realize they still haven’t figured out their business model, but that’s a question for another day. In short, they need to make it far simpler than it is today.
  4. You always have an option to block people but it has its own consequences.
  5. Be passive-aggressive if the situation demands it.
  6. Most importantly though, if somebody starts making jokes about you or starts abusing you, it is a sure sign that the person on the other side doesn’t have any more arguments, and you have won.

Banking

Before I start, let me share why I am putting up a blog post on the topic. The reason is pretty simple. It seems a huge number of Indians don’t know the history of how banking started, the various turns it took and so on and so forth. In fact, nowadays history is being hotly contested and perhaps even re-written. Hence for some things I will be sharing some sources, but even within them there is a possibility of contestation. One of the contestations for a long time is when ancient coinage and the techniques of smelting and flattening came to India. Depending on whom you ask, you get different answers. A lot of people are waiting to get more insight from the Keezhadi excavation, which may also shed some light on the topic. There are rumors that the funding is being stopped, but I hope that isn’t true and we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity and attraction towards the site. It is possible that the next time I get a chance to see South India, I may try to visit this unique location, if a museum gets built somewhere nearby. Sorry for deviating from the topic, but it seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so it could be anywhere between 1500 – 2000 years old in India. While we can’t say anything for sure, it’s possible that there was barter before that. There has also been some history of sharing tokens in different parts of the world as well. The various timelines get all jumbled up, hence I would suggest people use the Wikipedia page on the History of Money as a starting point. While it may not give a complete picture, it would probably broaden the understanding a little bit. One of the reasons why history is so hotly contested could also perhaps lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients if it had not been destroyed 😦

Hundi (16th Century)

I am jumping to the 16th century as it is closer to today’s modern banking; otherwise the blog post would be too long. The Hundi was a financial instrument in use from the 16th century onwards, acting as either a forebear of the cheque or of the traveller’s cheque. There doesn’t seem to be much information on whether it was introduced by the Britishers, or earlier by the Portuguese when they came to India, or whether it was in prevalence before that. There is, though, a fascinating in-depth study of the Hundi between 1858 and 1978, done by Marina Bernadette for the London School of Economics as her dissertation paper.

Banias and Sarafs

As I had shared before, in India history is intertwined with mythology. While it is possible that a lot of the history behind this is documented somewhere, I haven’t been able to find it. As I come from the Bania caste, I had learnt a lot of stories about both the migratory strain that Banias had, as well as how Banias used to split their children across adjoining states. Before the Britishers ruled over India, popular history tells us that it was the Mughal empire that ruled over us. What it doesn’t tell us, though, is that both during the Mughal empire and under the Britishers, the Banias and Sarafs, who were moneylenders and bullion traders respectively, hedged their bets; more so if they were in royal service or close to the power of administration of the state or mini-kingdoms. What they used to do is make sure that one of the sons would serve the king here, while the other son might serve the Muslim ruler. The idea was that irrespective of whoever won, the Banias or Sarafs would be able to continue their traditions; it was very much possible that the other brother would not be killed, and even if he was, any or all wealth would pass to the victorious brother and the family name would live on. If I were to look into it, I’m sure I would find the same not only among Banias and Sarafs but perhaps other castes and communities as well. Modern history also tells of the Rothschilds, who were and continue to be an influence on the world today.

As to why I shared how people acted in their self-interest: nowadays on Indian social media, many people choose to believe a very simplistic black-and-white narrative, and they are being fed that by today’s dominant political party in power. What I’m simply trying to say is that history is much more complex than that. While you may choose to believe either of the narratives, it might open a window in at least some Indians’ minds that there is a possibility of another way things were done, and of other ways in which people acted, than what is perceived today. It is also possible this may be contested today, as a lot of people would like to appear on the ‘right side’ of history as it seems today.

Banking in British Raj till nationalization

When the Britishers came, they brought the modern banking system with them. This led to the creation of various banks like the Bank of Bengal, the Bank of Bombay and the Bank of Madras, which were later subsumed under the Imperial Bank of India, which in turn became the State Bank of India in 1955. While I will not go into details, I am curious, so if life permits, I would surely want to visit either the Banca Monte dei Paschi di Siena S.p.A. of Italy or the Berenberg Bank, both of which probably have a lot more history than what is written on their Wikipedia pages. Soon there was a whole clutch of banks which continued to facilitate the British till independence, and leaked money overseas even afterwards, till the banks were nationalized in 1956 on the recommendation of the ‘Gorwala Committee’. Apart from the opaqueness of private banking and the leakages, there was non-provision of loans to the priority sector, i.e. farming in India; A.D. Gorwala recommended nationalization to weed out both issues in a single solution. One could debate the efficacy of the same, but history has shown us that privatization in the financial sector has many a time been costly to depositors. The financial crisis of 2008 and its aftermath in many of the financial markets, more so for private banks, is a testament to it. Even the documentary Plenary’s Men gives a whole lot of insight into the corruption that private banks engage in today.

The plenary’s men on Youtube at least to my mind is evidence enough that at least India should be cautious in dealings with private banks.

Co-operative banks and their Rise

The rise of co-operative banks in India was largely due to the rise of co-operative societies, the Co-operative Societies Act having been passed back in 1904. While there were quite a few co-operative societies and banks, arguably the real fillip to co-operative banking was given by Amul when it started in 1946, along with the milk credit society that started with it. I don’t know how many people saw ‘Manthan‘, which chronicled the story and brought both co-operative societies and co-operative banks to millions of Indians. It is a classic movie which a lot of today’s youth probably don’t know, and even if they did, would take time to identify with, although people of my generation and earlier generations do get it. One of the things that many people don’t get is that for a lot of people even today, especially marginal farmers and such in rural areas, co-operative banks are still the only solution. While in recent times the Govt. of the day has tried something called the Jan Dhan Yojana, it hasn’t been as much of a success as they were hoping for. While reams of paper have been written about it, like most policies it didn’t deliver to the last person, which such inclusion programs aim to do. The issues, from design to implementation, are many, but perhaps some other time. I am sharing about co-operative banks because a recent scam took place in one of them, probably one of the most respected and widely held co-operative banks. I would rather share Sucheta Dalal’s excellent analysis of the PMC bank crisis, which is unfolding and will perhaps continue to unfold in days to come.

Conclusion

At the end I have to admit I took a lot of short-cuts to get here. It is possible there are details people might want me to incorporate; if so, please let me know and I will try to add them. I did try to compress as much as possible while still trying to be exhaustive. I also haven’t used any data, as I wanted to keep the explanations as simple as possible, and I tried to keep politics out as much as possible, even though biases, which are there, are there.

12 October, 2019 10:58PM by shirishag75

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

GitHub Streak: Round Six

Five years ago I referenced the Seinfeld Streak, used in an earlier post about regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking:

github activity october 2013 to october 2014

And four years ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And three years ago we had a follow-up:

github activity october 2015 to october 2016

And two years ago we had another one:

github activity october 2016 to october 2017

And last year, another one:

github activity october 2017 to october 2018

As today is October 12, here is the newest one from 2018 to 2019:

github activity october 2018 to october 2019

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 October, 2019 03:53PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Alpine MusicSafe Classic Hearing Protection Review

Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here are a few links if you enjoy Oi! and Ska:

Sadly, those kinds of concerts are always waaaaayyyyy too loud. I mostly go to small venue concerts, and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I've tried out, but:

  • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

  • They came with two sets of plastic tips you insert in the silicone earbuds. I tried the -17 dB ones but decided to go with the -18 dB inserts as it was still freaking loud.

  • They fit very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing and never once did I fear losing them.

  • My ears weren't ringing when I came back home so I guess they work.

  • The earplugs didn't distort sound, only reduce the volume.

  • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

  • They only cost me ~25 CAD with taxes.

The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice, I was able to sing true, but it wasn't great :(

All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

12 October, 2019 04:00AM by Louis-Philippe Véronneau

October 11, 2019

Molly de Blanc

Conferences

I think there are too many conferences.

I conducted this very scientific Twitter poll and out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more in specific countries, and more “generic” FLOSS events.

Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others more community focused (e.g. SeaGL).

There are just a lot of conferences.

I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

So that's 9 so far this year, and somewhere between 1 and 4 more, depending on some details.

There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

When you’re nose deep in it, when you think what you’re doing is important, you keep going to them as long as you’re invited. I really believe in the messages I share during my talks, and I know by speaking I am reaching audiences I wouldn’t otherwise. As long as I keep getting invited places, I’ll probably keep going.

Finding sponsors is hard(er).

It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and what I’ve heard from speaking with others about it: lower response rates to requests, and people choosing lower sponsorship levels than in past years.

CFP responses are not increasing.

I sort of think the Tweet says it all. Some conferences aren’t having this experience, but the ones I’ve been involved with, or whose organizers I’ve spoken to, are needing to extend their deadlines and are generally seeing lower response rates.

Do I think we need fewer conferences?

Yes and no. I think smaller, regional conferences are really important to reaching communities and individuals who don’t have the resources to travel. I think it gives new speakers opportunities to share what they have to say, which is important for the growth and robustness of FOSS.

Project specific conferences are useful for those projects. It gives us a time to have meetings and sprints, to work and plan, and to learn specifically about our project and feel more connected to our collaborators.

On the other hand, I do think we have more conferences than even a global movement can actively support in terms of speakers, organizer energy, and sponsorship dollars.

What do I think we can do?

Not all of these are great ideas, and not all of them would work for every event. However, I think some combination of them might make a difference for the ecosystem of conferences.

More single-track or two-track conferences. All Things Open has 27 sessions occurring concurrently. Twenty-seven! It’s a huge event that caters to many people, but seriously, that’s too much going on at once. More 3-4 track conferences should consider dropping to 1-2 tracks, and conferences with more should consider dropping their numbers down as well. This means fewer speakers at a time.

Stop trying to grow your conference. Growth feels like a sign of success, but it’s not. It’s a sign of getting more people to show up. It helps you make arguments to sponsors, because more attendees means more people being reached when a company sponsors.

Decrease sponsorship levels. I’ve seen conferences increasing their sponsorship levels. I think we should all agree to decrease those numbers. While we’ll all have to try harder to get more sponsors, companies will be able to sponsor more events.

Stop serving meals. I appreciate a free meal. It makes it easier to attend events, but meals are expensive and difficult to organize logistically. I know meals make it easier for some people, especially students, to attend. Consider offering special RSVP lunches for students, recent grads, and people looking for work.

Ditch the fancy parties. Okay, I also love a good conference party. They’re loads of fun, but they can be quite expensive. They also encourage drinking, which I think is bad for the culture.

Ditch the speaker dinners. Okay, I also love a good speaker dinner. It’s fun to relax, see my friends, and have a nice meal that isn’t too loud or overwhelming. But these are really expensive. I’ve been trying to donate to local food banks/food insecurity charities an amount equal to the per-person cost of dinner, but people are rarely willing to share that information! Paying for a nice dinner out of pocket, with multiple bottles of wine, usually runs $50-80 with tip. I know one dinner I went to was $150 a person. I think the community would be better served if we spent that money on travel grants. If you want to be nice to speakers, I enjoy a box of chocolates I can take home and share with my roommates.

Give preference to local speakers. One of the things conferences do is bring in speakers from around the world to share their ideas with your community, or with an equally global community. This is cool. But by giving preference to local speakers, you’re building expertise in your geography.

Consider combining your community conferences. Rather than having many conferences for smaller communities, consider co-locating conferences and sharing resources (and attendees). This requires even more coordination to organize, but could work out well.

Volunteer for your favorite non-profit or project. A lot of us have booths at conferences, and send people around the world in order to let you know about the work we’re doing. Consider volunteering to staff a booth, so that your favorite non-profits and projects have to send fewer people.

While most of those things are not “have fewer conferences,” I think they would help solve the problems conference saturation is causing: it’s expensive for sponsors, it’s expensive for speakers, it creates a large carbon footprint, and it increases burnout among organizers and speakers.

I must enjoy traveling because I do it so much. I enjoy talking about FOSS, user rights, and digital rights. I like meeting people and sharing with them and learning from them. I think what I have to say is important. At the same time, I think I am part of an unhealthy culture in FOSS, that encourages burnout, excessive travel, and unnecessary spending of money, that could be used for better things.

One last thing you can do, to help me, is submit talks to your local conference(s). This will help with some of these problems as well, can be a great experience, and is good for your conference and your community!

11 October, 2019 11:23PM by mollydb

October 10, 2019

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in September 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.

Debian Java

Misc

  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.

Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVE in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE.

10 October, 2019 08:49PM by apo

hackergotchi for Norbert Preining

Norbert Preining

R with TensorFlow 2.0 on Debian/sid

I recently posted on getting TensorFlow 2.0 with GPU support running on Debian/sid. At that time I didn’t manage to get the tensorflow package for R running properly. It didn’t need much to get it running, though.

The biggest problem I faced was that the R/TensorFlow package recommends using install_tensorflow, which can use either auto, conda, virtualenv, or system (at least according to the linked web page). I didn't want to set up either a conda or a virtualenv environment, since TensorFlow was already installed, so I thought system would be the right choice. Anyway, the system option is gone and no longer accepted, but I still got errors, in particular because the code mentioned on the installation page is incorrect for TF2.0!

It turned out to be a simple error on my side: the default is to use the program python, which in Debian is still Python 2, while I have TF installed only for Python 3. The magic incantation to fix that is use_python("/usr/bin/python3") and everything is set.

So here is a full list of commands to get R/TensorFlow running on top of an already installed TensorFlow for Python3 (as usual either as root to be installed into /usr/local or as user to have a local installation):

devtools::install_github("rstudio/tensorflow")

And if you want to run some TF program:

library(tensorflow)
use_python("/usr/bin/python3")
tf$math$cumprod(1:5)

This gives lots of output, including a mention that it is running on my GPU.

At least for the (probably very short) time being this looks like a workable system. Now off to convert my TF1.N code to TF2.0.

10 October, 2019 06:15AM by Norbert Preining

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Trying out Sourcehut

Last month, I decided it was finally time to move a project I maintain from Github1 to another git hosting platform.

While polling other contributors (I proposed moving to gitlab.com), someone suggested moving to Sourcehut, a newish git hosting platform written and maintained by Drew DeVault. I've been following Drew's work for a while now and although I had read a few blog posts on Sourcehut's development, I had never really considered giving it a try. So I did!

Sourcehut is still in alpha and I'm expecting a lot of things to change in the future, but here's my quick review.

Things I like

Sustainable FOSS

Sourcehut is 100% Free Software. Github is proprietary and I dislike Gitlab's Open Core business model.

Sourcehut's business model also seems sustainable to me, as it relies on people paying a monthly fee for the service. You'll need to pay if you want your code hosted on https://sr.ht once Sourcehut moves into beta. As I've written previously, I like that a lot.

In comparison, Gitlab is mainly funded by venture capital and I'm afraid of the long term repercussions this choice will have.

Continuous Integration

Continuous Integration is very important to me and I'm happy to say Sourcehut's CI is pretty good! Like Travis and Gitlab CI, you declare what needs to happen in a YAML file. The CI uses real virtual machines backed by QEMU, so you can run many different distros and CPU archs!
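For illustration, a build manifest for such a CI job might look something like this. This is a sketch from my reading of the sr.ht documentation at the time; treat the exact image name, fields, and repository URL as assumptions that may have changed:

```yaml
# .build.yml: hypothetical example manifest for builds.sr.ht
image: debian/stable          # the VM image the job boots
packages:
  - python3                   # distro packages installed before the tasks run
sources:
  - https://git.sr.ht/~user/project   # repositories cloned into the build VM
tasks:
  - test: |
      cd project
      python3 -m unittest discover
```

Each named task runs as a separate shell script, and a failing task is what triggers the SSH-debugging offer described below.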

Even nicer, you can actually SSH into a failed CI job to debug things. In comparison, Gitlab CI's Interactive Web Terminal is ... web based and thus not as nice. Worse, it seems it's still somewhat buggy as Gitlab still hasn't enabled it on their gitlab.com instance.

Here's what the instructions to SSH into the CI look like when a job fails:

This build job failed. You may log into the failed build environment within 10
minutes to examine the results with the following command:

ssh -t builds@foo.bar connect NUMBER

Sourcehut's CI is not as feature-rich or as flexible as Gitlab CI, but I feel it is more powerful than Gitlab CI's default docker executor. Folks that run integration tests or more complicated setups where Docker fails should definitely give it a try.

From the few tests I did, Sourcehut's CI is also pretty quick (it's definitely faster than Travis or Gitlab CI).

No JS

Although Sourcehut's web interface does bundle some Javascript, all features work without it. Three cheers for that!

Things I dislike

Features division

I'm not sure I like the way features (the issue tracker, the CI builds, the git repository, the wikis, etc.) are subdivided into different subdomains.

For example, when you create a git repository on git.sr.ht, you only get a git repository. If you want an issue tracker for that git repository, you have to create one at todo.sr.ht with the same name. That issue tracker isn't visible from the git repository web interface.

That's the same for all the features. For example, you don't see the build status of a merged commit when you look at it. This design choice makes you feel like the different features aren't integrated with one another.

In comparison, Gitlab and Github use a more "centralised" approach: everything is centered around a central interface (your git repository) and it feels more natural to me.

Discoverability

I haven't seen a way to search sr.ht for things hosted there. That makes it hard to find repositories, issues or even the Sourcehut source code!

Merge Request workflow

I'm a sucker for the Merge Request workflow. I really like to have a big green button I can click on to merge things. I know some people prefer a more manual workflow that uses git merge and stuff, but I find that tiresome.

Sourcehut chose a workflow based on sending patches by email. It's neat since you can submit code without having an account. Sourcehut also provides mailing lists for projects, so people can send patches to a central place.

I find that workflow harder to work with, since to me it makes it more difficult to see what patches have been submitted. It also makes the review process more tedious, since the CI isn't run automatically on email patches.

Summary

All in all, I don't think I'll be moving ISBG to Sourcehut (yet?). At the moment it doesn't quite feel as ready as I'd want it to be, and that's OK. Most of the things I disliked about the service can be fixed by some UI work and I'm sure people are already working on it.

Github was bought by MS for 7.5 billion USD and Gitlab is currently valued at 2.7 billion USD. It's not really fair to ask Sourcehut to fully compete just yet :)

With Sourcehut, Drew DeVault is fighting the good fight and I wish him the most resounding success. Who knows, maybe I'll really migrate to it in a few years!


  1. Github is a proprietary service, has been bought by Microsoft and gosh darn do I hate Travis CI. 

10 October, 2019 04:00AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.9.800.1.0

armadillo image

Another month, another Armadillo upstream release! Hence a new RcppArmadillo release arrived on CRAN earlier today, and was just shipped to Debian as well. It brings a faster solve() method and other goodies. We also switched to the (awesome) tinytest unit test framework, and Min Kim made the configure.ac script more portable for the benefit of NetBSD and other non-bash users; see below for more details. Once again we ran two full sets of reverse-depends checks, no issues were found, and the package was auto-admitted at CRAN after less than two hours, despite there being 665 reverse depends. Impressive stuff, so a big Thank You! as always to the CRAN team.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 665 other packages on CRAN.

Changes in RcppArmadillo version 0.9.800.1.0 (2019-10-09)

  • Upgraded to Armadillo release 9.800 (Horizon Scraper)

    • faster solve() in default operation; iterative refinement is no longer applied by default; use solve_opts::refine to explicitly enable refinement

    • faster expmat()

    • faster handling of triangular matrices by rcond()

    • added .front() and .back()

    • added .is_trimatu() and .is_trimatl()

    • added .is_diagmat()

  • The package now uses tinytest for unit tests (Dirk in #269).

  • The configure.ac script is now more careful about shell portability (Min Kim in #270).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 October, 2019 12:59AM

October 09, 2019

hackergotchi for Adnan Hodzic

Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

09 October, 2019 03:55PM by admin

Enrico Zini

Fixed XSS issue on debtags.debian.org

Thanks to Moritz Naumann who found the issues and wrote a very useful report, I fixed a number of Cross Site Scripting vulnerabilities on https://debtags.debian.org.

The core of the issue was code like this in a Django view:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package %s was not found" % name)
    # …

The default content-type of HttpResponseNotFound is text/html, and the string passed is raw HTML with clearly no escaping, so this allows injection of arbitrary HTML/<script> code via the name variable.

I was so used to Django doing proper auto-escaping that I missed this place in which it can't do that.
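To see concretely what that escaping does, here is a quick illustration using Python's standard-library html.escape (Django's django.utils.html.escape behaves similarly for these characters):

```python
from html import escape

# A malicious package "name" as it might arrive in the URL.
name = '<script>alert(1)</script>'

# Unescaped, this lands verbatim in the text/html 404 body
# and the browser executes the script.
unsafe = "Package %s was not found" % name

# Escaped, the markup is neutralised into character entities
# and rendered as harmless text.
safe = "Package %s was not found" % escape(name)
```
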

There are various things that can be improved in that code.

One could introduce escaping (and while one's at it, migrate the old % to format):

from django.utils.html import escape

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(escape(name)))
    # …

Alternatively, set content_type to text/plain:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(name), content_type="text/plain")
    # …

Even better, raise Http404:

from django.http import Http404

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        raise Http404(f"Package {name} was not found")
    # …

Even better, use standard shortcuts and model functions if possible:

from django.shortcuts import get_object_or_404

def pkginfo_view(request, name):
    pkg = get_object_or_404(bmodels.Package, name=name)
    # …

And finally, though not security related, it's about time to switch to class-based views:

class PkgInfo(TemplateView):
    template_name = "reports/package.html"

    def get_context_data(self, **kw):
        ctx = super().get_context_data(**kw)
        ctx["pkg"] = get_object_or_404(bmodels.Package, name=self.kwargs["name"])
        # …
        return ctx

I proceeded with a review of the other Django sites I maintain, in case I had reproduced this mistake there as well.

09 October, 2019 08:51AM

hackergotchi for Chris Lamb

Chris Lamb

Tour d'Orwell: Southwold

I recently read that during 1929 George Orwell returned to his family home in the Suffolk town of Southwold, but when I further learned that he had acquired a motorbike during this time to explore the surrounding villages, I could not resist visiting myself by the same mode of transport.

Orwell would end up writing his first novel here ("Burmese Days"), followed by his first passable one ("A Clergyman's Daughter"), but unfortunately the local bookshop only had the former in stock. He moved back to London in 1934 to work in a bookshop in Hampstead, now a «Le Pain Quotidien».

If you are thinking of visiting, Southwold has some lovely quaint beach huts and a brewery, but the officially signposted A1120 "Scenic Route" I took on the way out was neither as picturesque nor as fun to ride as the A1066.

09 October, 2019 12:29AM

October 08, 2019

hackergotchi for Steve Kemp

Steve Kemp

A blog overhaul

When this post becomes public I'll have successfully redeployed my blog!

My blog originally started in 2005 as a Wordpress installation, at some point I used Mephisto, and then I wrote my own solution.

My project was pretty cool; I'd parse a directory of text-files, one file for each post, and insert them into an SQLite database. From there I'd initiate a series of plugins, each one to generate something specific:

  • One plugin would output an archive page.
  • Another would generate a tag cloud.
  • Yet another would generate the actual search-results for a particular month/year, or tag-name.

All in all the solution was flexible, and it wasn't too slow because finding posts via the SQLite database was pretty quick.
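The pipeline described above can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration, not the actual code; all names are made up:

```python
import pathlib
import sqlite3
import tempfile

# One text file per post, with the first line taken as the title
# (an assumed convention for this sketch).
posts_dir = pathlib.Path(tempfile.mkdtemp())
(posts_dir / "blog-overhaul.txt").write_text(
    "A blog overhaul\nWhen this post becomes public..."
)

# Parse the directory of text files and insert each post into SQLite.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (slug TEXT, title TEXT, body TEXT)")
for path in sorted(posts_dir.glob("*.txt")):
    title, _, body = path.read_text().partition("\n")
    db.execute("INSERT INTO posts VALUES (?, ?, ?)", (path.stem, title, body))

# An "archive page" plugin is then just a SELECT over the table;
# a tag cloud or per-month page would be another query.
archive = db.execute("SELECT slug, title FROM posts").fetchall()
```
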

Anyway, I've come to realize that that freedom and architecture were overkill. I don't need to do fancy presentation, and I don't need a loosely-coupled set of plugins.

So now I have a simpler solution which uses my existing template, uses my existing posts - with only a few cleanups - and generates the site from scratch, including all the comments, in less than 2 seconds.

After running make clean a complete rebuild via make upload (which deploys the generated site to the remote host via rsync) takes 6 seconds.

I've lost the ability to be flexible in some areas, but I've gained all the speed. The old project took somewhere between 20-60 seconds to build, depending on what had changed.

In terms of simplifying my life I've dropped the remote installation of a site-search, which means I can now host this as a static site with only a single handler to receive any post-comments. (I was 50/50 on keeping comments. I didn't want to lose those I'd already received, and I do often find valuable and interesting contributions from readers, but being 100% static had its appeal too. I guess they stay for the next few years!)

08 October, 2019 06:00PM

Antoine Beaupré

Tip of the day: batch PDF conversion with LibreOffice

Someone asked me today why they couldn't write on the DOCX document they received from a student using the pen in their Onyx Note Pro reader. The answer, of course, is that while the Onyx can read those files, it can't annotate them: that only works with PDFs.

Next question then, is of course: do I really need to open each file separately and save them as PDF? That's going to take forever, I have 30 students per class!

Fear not, shell scripting and headless mode flies in to the rescue!

As it turns out, one of the LibreOffice parameters allows you to run batch operations on files. By calling:

libreoffice --headless --convert-to pdf *.docx

LibreOffice will happily convert all the *.docx files in the current directory to PDF. But because navigating the commandline can be hard, I figured I could push this a tiny little bit further and wrote the following script:

#!/bin/sh

exec libreoffice --headless --convert-to pdf "$@"

Drop this in ~/.local/share/nautilus/scripts/libreoffice-pdf, mark it executable, and voilà! You can batch-convert basically any text file (or anything supported by LibreOffice, really) into PDF.

Now I wonder if this would be a useful addition to the Debian package, anyone?

08 October, 2019 04:28PM

Jonas Meurer

debian lts report 2019.09

Debian LTS report for September 2019

This month I was allocated 10 hours and carried over 9.5 hours from August. Unfortunately, again I didn't find much time to work on LTS issues, partially because I was travelling. I spent 5 hours on the task listed below. That means that I carry over 14.5 hours to October.

08 October, 2019 02:34PM

Jamie McClelland

Editing video without a GUI? Really?

It seems counter intuitive - if ever there was a program in need of a graphical user interface, it's a non-linear video editing program.

However, as part of the May First board elections, I discovered otherwise.

We asked each board candidate to submit a 1 - 2 minute video introduction about why they want to be on the board. My job was to connect them all into a single video.

I had an unrealistic thought that I could find some simple tool that could concatenate them all together (like mkvmerge) but I soon realized that this approach requires that everyone use the exact same format, codec, bit rate, sample rate and blah blah blah.

I soon realized that I needed to actually make a video, not compile one. I create videos so infrequently that I often forget the name of the video editing software I used last time, so it takes some searching. This time I found that I had openshot-qt installed, but when I tried to run it, I got a backtrace (which someone else has already reported).

I considered looking for another GUI editor, but I wasn't that interested in learning what might be a complicated user interface when what I need is so simple.

So I kept searching and found melt. Wow.

I ran:

melt originals/* -consumer avformat:all.webm acodec=libopus vcodec=libvpx

And a while later I had a video. Impressive. It handled people who submitted their videos in portrait mode on their cell phones in mp4, as well as web cam submissions using webm/vp9 in landscape mode.

Thank you melt developers!

08 October, 2019 01:19PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Ada Lovelace Day: 5 Amazing Women in Tech

Ada Lovelace portrait

It’s Ada Lovelace day and I’ve been lax in previous years about celebrating some of the talented women in technology I know or follow on the interwebs. So, to make up for it, here are 5 amazing technologists.

Allison Randal

I was initially aware of Allison through her work on Perl, was vaguely aware of the fact she was working on Ubuntu, briefly overlapped with her at HPE (and thought it was impressive HP were hiring such high calibre of Free Software folk) when she was working on OpenStack, and have had the pleasure of meeting her in person due to the fact we both work on Debian. In the continuing theme of being able to do all things tech she’s currently studying a PhD at Cambridge (the real one), and has already written a fascinating paper about the security misconceptions around virtual machines and containers. She’s also been doing things with home automation, properly, with local speech recognition rather than relying on any external assistant service (I will, eventually, find the time to follow her advice and try this out for myself).

Alyssa Rosenzweig

Graphics are one of the many things I just can’t do. I’m not artistic and I’m in awe of anyone who is capable of wrangling bits to make computers do graphical magic. People who can reverse engineer graphics hardware that would otherwise only be supported by icky binary blobs impress me even more. Alyssa is such a person, working on the Panfrost driver for ARM’s Mali Midgard + Bifrost GPUs. The lack of a Free driver stack for this hardware is a real problem for the ARM ecosystem and she has been tirelessly working to bring this to many ARM based platforms. I was delighted when I saw one of my favourite Free Software consultancies, Collabora, had given her an internship over the summer. (Selfishly I’m hoping it means the Gemini PDA will eventually be able to run an upstream kernel with accelerated graphics.)

Angie McKeown

The first time I saw Angie talk it was about the user experience of Virtual Reality, and how it had an entirely different set of guidelines to conventional user interfaces. In particular the premise of not trying to shock or surprise the user while they’re in what can be a very immersive environment. Obvious once someone explains it to you! Turns out she was also involved in the early days of custom PC builds and internet cafes in Northern Ireland, and has interesting stories to tell. These days she’s concentrating on cyber security - I’ve her to thank for convincing me to persevere with Ghidra - having looked at Bluetooth security as part of her Masters. She’s also deeply aware of the implications of the GDPR and has done some interesting work on thinking about how it affects the computer gaming industry - both from the perspective of the author, and the player.

Claire Wilgar

I’m not particularly fond of modern web design. That’s unfair of me, but web designers seem happy to load megabytes of Javascript from all over the internet just to display the most basic of holding pages. Indeed it seems that such things now require all the includes rather than being simply a matter of HTML, CSS and some graphics, all from the same server. Claire talked at Women Techmakers Belfast about moving away from all of this bloat and back to a minimalistic approach with improved performance, responsiveness and usability, without sacrificing functionality or presentation. She said all the things I want to say to web designers, but from a position of authority, being a front end developer as her day job. It’s great to see someone passionate about front-end development who wants to do things the right way, and talks about it in a way that even people without direct experience of the technologies involved (like me) can understand and appreciate.

Karen Sandler

There aren’t enough people out there who understand law and technology well. Karen is one of the few I’ve encountered who do, and not only that, but really, really gets Free software and the impact of the four freedoms on users in a way many pure technologists do not. She’s had a successful legal career that’s transitioned into being the general counsel for the Software Freedom Law Center, been the executive director of GNOME and is now the executive director of the Software Freedom Conservancy. As someone who likes to think he knows a little bit about law and technology I found Karen’s wealth of knowledge and eloquence slightly intimidating the first time I saw her speak (I think at some event in San Francisco), but I’ve subsequently (gratefully) discovered she has an incredible amount of patience (and ability) when trying to explain the nuances of free software legal issues.

08 October, 2019 07:00AM

October 07, 2019

hackergotchi for Julien Danjou

Julien Danjou

Python and fast HTTP clients

Nowadays, it is more than likely that you will have to write an HTTP client for your application that will have to talk to another HTTP server. The ubiquity of REST APIs makes HTTP a first-class citizen. That's why knowing optimization patterns is a prerequisite.

There are many HTTP clients in Python; the most widely used and easiest to work with is requests. It is the de facto standard nowadays.

Persistent Connections

The first optimization to take into account is the use of a persistent connection to the Web server. Persistent connections have been standard since HTTP 1.1, though many applications do not leverage them. This lack of optimization is simple to explain if you know that when using requests in its simple mode (e.g. with the get function), the connection is closed on return. To avoid that, an application needs to use a Session object, which allows reusing an already opened connection.

import requests

session = requests.Session()
session.get("http://example.com")
# Connection is re-used
session.get("http://example.com")
Using Session with requests

Each connection is stored in a pool of connections (10 by default), the size of
which is also configurable:

import requests


session = requests.Session()
adapter = requests.adapters.HTTPAdapter(
    pool_connections=100,
    pool_maxsize=100)
session.mount('http://', adapter)
response = session.get("http://example.org")
Changing pool size

Reusing the TCP connection to send out several HTTP requests offers a number of performance advantages:

  • Lower CPU and memory usage (fewer connections opened simultaneously).
  • Reduced latency in subsequent requests (no TCP handshaking).
  • Exceptions can be raised without the penalty of closing the TCP connection.

The HTTP protocol also provides pipelining, which allows sending several requests on the same connection without waiting for the replies to come (think batch). Unfortunately, this is not supported by the requests library. However, pipelining requests may not be as fast as sending them in parallel. Indeed, the HTTP 1.1 protocol forces the replies to be sent in the same order as the requests were sent – first-in first-out.
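Since requests does not expose pipelining, the mechanics are easiest to see with a raw socket: write several requests back-to-back, then read the replies, which HTTP 1.1 guarantees arrive in request order. A minimal sketch, with a tiny local server invented purely to make it self-contained:

```python
import http.server
import socket
import threading

# Tiny local HTTP/1.1 server that echoes the request path, only so
# this sketch is self-contained.
class EchoPathHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is required for pipelining

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoPathHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    # Pipelining: write both requests before reading any reply.
    for path in (b"/first", b"/second"):
        sock.sendall(b"GET %s HTTP/1.1\r\nHost: localhost\r\n\r\n" % path)
    # Read until the reply to the second request has arrived.
    data = b""
    while b"/second" not in data:
        data += sock.recv(65536)

# HTTP 1.1 forces FIFO ordering: the reply to /first comes back first.
print(data.index(b"/first") < data.index(b"/second"))

server.shutdown()
```

This shows both the appeal (one round of writes, no per-request wait) and the limitation: the second reply cannot arrive before the first, so one slow response stalls everything behind it.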

Parallelism

requests also has one major drawback: it is synchronous. Calling requests.get("http://example.org") blocks the program until the HTTP server replies completely. Having the application wait and do nothing is a drawback: the program could do something else rather than sitting idle.

A smart application can mitigate this problem by using a pool of threads like the ones provided by concurrent.futures. It allows parallelizing the HTTP requests in a very rapid way.

from concurrent import futures

import requests


with futures.ThreadPoolExecutor(max_workers=4) as executor:
    # submit the callable and its argument directly, and avoid
    # shadowing the imported `futures` module
    futures_list = [
        executor.submit(requests.get, "http://example.org")
        for _ in range(8)
    ]

results = [
    f.result().status_code
    for f in futures_list
]

print("Results: %s" % results)
Using futures with requests

Since this pattern is quite useful, it has been packaged into a library named requests-futures. The use of Session objects is made transparent to the developer:

from requests_futures import sessions


session = sessions.FuturesSession()

futures = [
    session.get("http://example.org")
    for _ in range(8)
]

results = [
    f.result().status_code
    for f in futures
]

print("Results: %s" % results)
Using requests-futures

By default, an executor with two workers is created, but a program can easily customize this value by passing the max_workers argument, or even its own executor, to the FuturesSession object – for example: FuturesSession(executor=ThreadPoolExecutor(max_workers=10)).

Asynchronicity

As explained earlier, requests is entirely synchronous. That blocks the application while waiting for the server to reply, slowing down the program. Making HTTP requests in threads is one solution, but threads do have their own overhead and this implies parallelism, which is not something everyone is always glad to see in a program.

Starting with version 3.5, Python offers asynchronicity in its core using asyncio. The aiohttp library provides an asynchronous HTTP client built on top of asyncio. This library allows sending requests in series without waiting for the first reply to come back before sending the next one. In contrast to HTTP pipelining, aiohttp sends the requests over multiple connections in parallel, avoiding the ordering issue explained earlier.

import aiohttp
import asyncio


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return response


loop = asyncio.get_event_loop()

coroutines = [get("http://example.com") for _ in range(8)]

results = loop.run_until_complete(asyncio.gather(*coroutines))

print("Results: %s" % results)
Using aiohttp

All those solutions (using Session, threads, futures or asyncio) offer different approaches to making HTTP clients faster.

Performance

The snippet below is an HTTP client sending requests to httpbin.org, an HTTP API that provides (among other things) an endpoint simulating a long request (a second here). This example implements all the techniques listed above and times them.

import contextlib
import time

import aiohttp
import asyncio
import requests
from requests_futures import sessions

URL = "http://httpbin.org/delay/1"
TRIES = 10


@contextlib.contextmanager
def report_time(test):
    t0 = time.time()
    yield
    print("Time needed for `%s' called: %.2fs"
          % (test, time.time() - t0))


with report_time("serialized"):
    for i in range(TRIES):
        requests.get(URL)


session = requests.Session()
with report_time("Session"):
    for i in range(TRIES):
        session.get(URL)


session = sessions.FuturesSession(max_workers=2)
with report_time("FuturesSession w/ 2 workers"):
    futures = [session.get(URL)
               for i in range(TRIES)]
    for f in futures:
        f.result()


session = sessions.FuturesSession(max_workers=TRIES)
with report_time("FuturesSession w/ max workers"):
    futures = [session.get(URL)
               for i in range(TRIES)]
    for f in futures:
        f.result()


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            await response.read()

loop = asyncio.get_event_loop()
with report_time("aiohttp"):
    loop.run_until_complete(
        asyncio.gather(*[get(URL)
                         for i in range(TRIES)]))
Program to compare the performances of different requests usage

Running this program gives the following output:

Time needed for `serialized' called: 12.12s
Time needed for `Session' called: 11.22s
Time needed for `FuturesSession w/ 2 workers' called: 5.65s
Time needed for `FuturesSession w/ max workers' called: 1.25s
Time needed for `aiohttp' called: 1.19s

Without any surprise, the slowest result comes from the dumb serialized version, since all the requests are made one after another without reusing the connection: 12 seconds to make 10 requests.

Using a Session object and therefore reusing the connection means saving 8% in terms of time, which is already a big and easy win. Minimally, you should always use a session.

If your system and program allow the usage of threads, it is a good call to use them to parallelize the requests. However, threads have some overhead and they are not weightless: they need to be created, started and then joined.

Unless you are still using old versions of Python, without a doubt aiohttp should be the way to go nowadays if you want to write a fast and asynchronous HTTP client. It is the fastest and the most scalable solution, as it can handle hundreds of parallel requests. The alternative, managing hundreds of threads in parallel, is not a great option.
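When scaling up to hundreds of parallel requests, it is usually wise to cap how many are in flight at once so as not to exhaust file descriptors or hammer the server. Here is a minimal sketch of that pattern using asyncio.Semaphore; the fetch coroutine is a placeholder standing in for a real aiohttp call, and the URL and counts are made up for illustration:

```python
import asyncio

CONCURRENCY = 10   # at most 10 requests in flight at any time
TOTAL = 100        # total number of requests to make

async def fetch(url):
    # Stand-in for a real HTTP call; with aiohttp this would be
    # an `async with session.get(url)` block returning the status.
    await asyncio.sleep(0.01)
    return 200

async def bounded_fetch(sem, url):
    async with sem:            # wait for a free slot before starting
        return await fetch(url)

async def main():
    sem = asyncio.Semaphore(CONCURRENCY)
    coroutines = [bounded_fetch(sem, "http://example.org/%d" % i)
                  for i in range(TOTAL)]
    return await asyncio.gather(*coroutines)

results = asyncio.run(main())
print("Fetched %d results" % len(results))
```

All 100 coroutines are created up front, but the semaphore ensures only 10 run their request body at any moment, which keeps resource usage bounded while still being far faster than a serialized loop.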

Streaming

Another speed optimization that can be efficient is streaming the requests. When making a request, by default the body of the response is downloaded immediately. The stream parameter provided by the requests library or the content attribute for aiohttp both provide a way to not load the full content in memory as soon as the request is executed.

import requests


# Use `with` to make sure the response stream is closed and the connection can
# be returned back to the pool.
with requests.get('http://example.org', stream=True) as r:
    print(list(r.iter_content()))
Streaming with requests
import aiohttp
import asyncio


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.content.read()

loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(get("http://example.com"))]
loop.run_until_complete(asyncio.wait(tasks))
print("Results: %s" % [task.result() for task in tasks])
Streaming with aiohttp

Not loading the full content is extremely important in order to avoid allocating potentially hundreds of megabytes of memory for nothing. If your program does not need to access the entire content as a whole but can work on chunks, it is probably better to just use those methods. For example, if you're going to save and write the content to a file, reading only a chunk and writing it at the same time is going to be much more memory-efficient than reading the whole HTTP body, allocating a giant pile of memory, and then writing it to disk.
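The save-to-file case can be sketched with the standard library alone; requests' stream=True with iter_content follows the same shape. The local server and the 1MB body below are made up purely to keep the example self-contained:

```python
import http.server
import shutil
import tempfile
import threading
import urllib.request

# A local server with a predictable 1MB body, for illustration only;
# in practice any URL would do.
BODY = b"x" * 1_000_000

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/big-file" % server.server_address[1]

# Copy the response to disk in 64 KiB chunks: only one chunk is ever
# held in memory, no matter how large the download is.
with urllib.request.urlopen(url) as response, \
        tempfile.NamedTemporaryFile(delete=False) as out:
    shutil.copyfileobj(response, out, length=64 * 1024)
    path = out.name

with open(path, "rb") as f:
    size = len(f.read())
print("Downloaded %d bytes" % size)

server.shutdown()
```

The peak memory use of the copy loop is one 64 KiB buffer, regardless of whether the body is 1MB or 1GB.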

I hope that'll make it easier for you to write proper HTTP clients and requests. If you know any other useful technique or method, feel free to write it down in the comments section below!

07 October, 2019 09:30AM by Julien Danjou

Antoine Beaupré

This is why native apps matter

I was just looking a web stream on Youtube today and was wondering why my CPU was so busy. So I fired up top and saw my web browser (Firefox) took up around 70% of a CPU to play the stream.

I thought, "this must be some high resolution crazy stream! how modern! such wow!" Then I thought, wait, this is the web, there must be something insane going on.

So I did a little experiment: I started chromium --temp-profile on the stream, alongside vlc (which can also play Youtube streams!). Then I took a snapshot of the top(1) command after 5 minutes. Here are the results:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
16332 anarcat   20   0 1805160 269684 102660 S  60,2   1,7   3:34.96 chromium
16288 anarcat   20   0  974872 119752  87532 S  33,2   0,7   1:47.51 chromium
16410 anarcat   20   0 2321152 176668  80808 S  22,0   1,1   1:15.83 vlc
 6641 anarcat   20   0   21,1g 520060 137580 S  13,8   3,2  55:36.70 x-www-browser
16292 anarcat   20   0  940340  83980  67080 S  13,2   0,5   0:41.28 chromium
 1656 anarcat   20   0 1970912  18736  14576 S  10,9   0,1   4:47.08 pulseaudio
 2256 anarcat   20   0  435696  93468  78120 S   7,6   0,6  16:03.57 Xorg
16262 anarcat   20   0 3240272 165664 127328 S   6,2   1,0   0:31.06 chromium
  920 message+  20   0   11052   5104   2948 S   1,3   0,0   2:43.37 dbus-daemon
17915 anarcat   20   0   16664   4164   3276 R   1,3   0,0   0:02.07 top

To deconstruct this, you can see my Firefox process (masquerading as x-www-browser) which has been started for a long time. It's taken 55 hours of CPU time, but let's ignore that for now as it's not in the benchmark. What I find fascinating is there are at least 4 chromium processes running here, and they collectively take up over 7 minutes of CPU time.

Compare this to a little over one (1!!!11!!!) minute of CPU time for VLC, and you realize why people are so ranty about everything being packaged as web apps these days. It's basically using up an order of magnitude more processing power (and therefore electric power and slave labor) to watch those silly movies in your web browser than in a proper video player.

Keep that in mind next time you let Youtube go on a "autoplay Donald Drumpf" playlist...

07 October, 2019 12:15AM

October 06, 2019

Iustin Pop

IronBike Einsiedeln 2019

The previous race I did at the end of August was soo much fun—I forgot how much fun this is—that I did the unthinkable and registered to the IronBike as well, at a short (but not the shortest) distance.

Why unthinkable? Last time I was here, three years ago, I apparently told my wife I will never, ever go to this race again, since it’s not a fun one. Steep up, steep down, or flat, but no flowing sections.

Well, it seems I forgot I said that, I only remembered “it’s a hard race”, so I registered at the 53km distance, where there are no age categories. Just “Herren Fun” ☺

September

The month of September was a pretty good one, overall. I even managed to lose 2kg (of fat, I’m 100% sure), and continued to increase my overall fitness - 23→34 CTL in Training Peaks.

I also did the Strava Escape challenge and the Sufferfest Yoga challenge, so core fitness improved as well. And I was more rested: Training Peaks form was -8, up from -18 for previous race.

Overall, I felt reasonably confident to complete (compete?) the shorter distance; official distance 53km, 1’400m altitude.

Race (day)

The race for this distance starts very late (around half past ten), so it was an easy morning, breakfast, drive to place, park, get dressed, etc.

The weather was actually better than I feared, it was plain shorts and t-shirt, no need for arm warmers, or anything like that. And it got sunnier as the race went on, which reminded me why on my race prep list it says “Sunscreen (always)”, and not just sunscreen.

And yes, this race as well, even at this “fun” distance, started very strong. The first 8.7km are flat (total altitude gain 28m, i.e. nothing), and I did them at an average speed of 35.5km/h! Again, this is on a full-suspension mountain bike, with thick tires. I actually drafted, a lot, and it was the first time I realised even with MTB, knowing how to ride in a group is useful.

And then, the first climb starts. About 8.5km, gaining 522m altitude (around a third only of the goal…), and which was tiring. On the few places where the climb flattened I saw again that one of the few ways in which I can gain on people is by timing my effort well enough such that when it flattens, I can start strongly and thus gain, sometimes significantly.

I don’t know if this is related to the fact that I’m on the heavier but also stronger side (this is comparing peak 5s power with other people on Sufferfest/etc.), or just I was fresh enough, but I was able to apply this repeatedly over the first two thirds of the race.

At this moment I was still well, and the only negative aspect—my lower back pain, which started very early—was on and off, not continuous, so all good.

First descent, 4.7km, average almost -12%, thus losing almost all the gain, ~450m. And this was an interesting descent: a lot of it was on a quite wide trail, but “paved” with square wood logs. And the distance between the wood logs, about 3-4cm, was not filled well enough with soil, so it was a very jittery ride. I’ve seen a lot of people not able to ride there (to my surprise), and, an even bigger surprise, probably 10 or 15 people with flats. I guess they went low enough on pressure to get bites (not sure if correct term) and thus a flat. I had no issues, I was running a bit high pressure (lucky guess), which I took off later. But here it served me very well.

Another short uphill, 1.8km/230m altitude, but I gave up and pushed the bike. Not fancy, but when I got off the bike it was 16%!! And that’s not my favourite place in the world to be. Pushing the bike was not that bad; I was going 4.3km/h versus 5.8km/h beforehand, so I wasn’t losing much time. But yes, I was losing time.

Then another downhill, thankfully not so steep so I could enjoy going down for longer, and then a flat section. Overall almost 8km flat section, which I remembered quite well. Actually I was remembering quite a bit of things from the previous race, despite different route, but especially this flat section I did remember in two ways:

  • that I was already very, very tired back then
  • and by the fact that I didn’t remember at all what was following

So I start on this flat section, expecting good progress, only to find myself going slower and slower. I couldn’t understand why it was so tiring to go straight, until I started approaching the group ahead. I realised then it was a “false flat”; I knew the term before, but didn’t understand well what it means, but now I got it ☺ About 1km, at a mean grade of 4.4%, which is not high when you know it’s a climb, but it is not flat. Once I realised this, I felt better. The other ~6.5km I spent pursuing a large group ahead, which I managed to catch about half a kilometre before the end of the flat.

I enjoyed this section; then the road turned and I saw what my brain didn’t want to remember: the last climb, 4km long, only ~180m altitude, but a mean grade of 10%, and I didn’t have the energy for all of it. Which is kind of hilarious: 180m altitude is only twice and a bit my daily one-way commute, so peanuts, but I was quite tired from both previous climbs and from chasing that group, so after 11 minutes I got off the bike and started pushing.

And the pushing was the same as before, actually even better: I was going 5.2km/h on average here, versus 6.1km/h before, so less than 1km/h difference. This is more than usual walking speed; as it was not very steep, I was walking fast. Since I was able to walk fast and felt recovered, I got back on the bike before the climb finished, only to feel a solid cramp in the right leg as soon as I tried to start pedalling. OK, let me pedal with the left, ouch, cramp as well. So I was going almost as slow as walking, just managing the cramps and keeping them below critical, and after about 5 minutes they went away.

It was the first time I got such cramps, or any cramps, during a race. I’m not sure I understand why; I did drink energy drinks, not just water (so I assume no acute lack of some electrolytes). Reading on the internet, it seems cramps are not a solved issue, and are most likely just due to hard efforts at high intensity. It was scary for about ten seconds as I feared a full cramping of one or both legs (and being clipped in… not funny), but once I realised I could keep it under control, it was just interesting.

I remembered now the rest of the route (ahem, see below), so I knew I was looking at a long (7km), flat-ish section (100m gain), before the last, relatively steep descent. Steep as in -24.4% max gradient, with average -15%. This done, I was on relatively flat roads (paved, even), and my Garmin was showing 1’365m altitude gain since the start of the race. Between the official number of 1’400m and this was a difference of only 35m, which I was sure was due to simple errors; distance now was 50.5km, so I was looking at the rest of 2.5km being a fast run towards the end.

Then in a village we exit the main road and take a small road going up a bit, and then up a bit, and then—I was pretty annoyed here already—the road opens to a climb. A CLIMB!!!! I could see the top of the hill, people going up, but it was not a small climb! It was not 35m, nowhere near that: it was a short but real climb!

I was swearing with all my power in various languages. I completely and totally forgot this last climb, I did not plan for it, and I was pissed off ☺ It turned out to be 1.3km long, mean grade of 11.3% (!), with 123m of elevation gain. It was almost 100m more than I planned… I got off the bike again, especially as it was a bit tricky to pedal and the gradient was 20% (!!!), so I pushed, then I got back on, then I got back off, and then back on. At the end of the climb there was a photographer again, so I got on the bike and hoped I might get a smiling picture, which almost happened:

Pretending I’m enjoying this, and showing how much weight I should get rid of :/

And then from here on it was clean, easy downhill, at most -20% grade, back in Einsiedeln, and back to the start with a 10m climb, which I pushed hard, overtook a few people (lol), passed the finish, and promptly found first possible place to get off the bike and lie down. This was all behind me now:

Thanks VeloViewer!

And, my Garmin said 1’514m altitude, more than 100m more than the official number. Boo!

Aftermath

It was the first race after which I actually had to lie down. First race I got cramps as well ☺ Third-highest 1h30m heart rate value ever, and a lot of other 2019 peaks too. And I was dead tired.

I learned it’s very important to know well the route altitude profile, in order to be able to time your efforts well. I got reminded I suck at climbing (so I need to train this over and over), but I learned that I can push better than other people at the end of a climb, so I need to train this strength as well.

I also learned that you can’t rely on the official race data, since it can be off by more than 100m. Or maybe I can’t rely on my Garmin(s), but with a barometric altimeter, I don’t think the problem was here.

I think I ate a bit too much during the race, which was not optimal, but I was very hungry already after the first climb…

But the biggest thing overall was that, despite no age group, I got a better placement than I thought. I finished in 3h:37m.14s, 1h:21m.56s behind the first place, but a good enough time that I was 325th place out of 490 finishers.

By my calculations, 490 finishers means the thirds are 1-163, 164-326, and 327-490. Which means, I was in the 2nd third, not in the last one!!! Hence the subtitle of this post, moving up, since I usually am in the bottom third, not the middle one. And there were also 11 DNF people, so I’m well in the middle third ☺

Joke aside, this made me very happy, as it’s the first time I feel like efforts (in a bad year like this, even) do amount to something. So good.

Speaking of DNFs, I was shocked at the number of people I saw either trying to fix their bike on the side, or walking their bike (to the next repair post), or just shouting angrily at a broken component. I counted probably close to 20 such people, many after that first descent, but given only 11 DNFs, I guess many people do carry spare tubes. I didn’t; hope is a strategy sometimes ☺

Now, with the season really closed, time to start thinking about next year. And how to lose those 10kg of fat I still need to lose…

06 October, 2019 09:00PM

Antoine Beaupré

Calibre replacement considerations

Summary

TL;DR: I'm considering replacing those various Calibre components with...

My biggest blockers, which don't really have good alternatives, are:

  • collection browser
  • metadata editor
  • device sync

See below why and a deeper discussion on all the features.

Problems with Calibre

Calibre is an amazing piece of software: it allows users to manage ebooks on their desktop and on a multitude of ebook readers. It's used by Linux geeks as well as Windows power-users and vastly surpasses any native app shipped by ebook manufacturers. I know almost exactly zero people who have an ebook reader and do not use Calibre.

However, it has had many problems over the years:

Update: a previous version of that post claimed that all of Calibre had been removed from Debian. This was inaccurate, as the Debian Calibre maintainer pointed out. What happened was Calibre 4.0 was uploaded to Debian unstable, then broke because of missing Python 2 dependencies, and an older version (3.48) was uploaded in its place. So Calibre will stay around in Debian for the foreseeable future, hopefully, but the current latest version (4.0) cannot get in because it depends on older Python 2 libraries.

The latest issue (Python 3) is the last straw, for me. While Calibre is an awesome piece of software, I can't help but think it's doing too much, and the wrong way. It's one of those tools that looks amazing on the surface, but when you look underneath, it's a monster that is impossible to maintain, a liability that is just bound to cause more problems in the future.

What does Calibre do anyways

So let's say I wanted to get rid of Calibre, what would that mean exactly? What do I actually use Calibre for anyways?

Calibre is...

  • an ebook viewer: Calibre ships with the ebook-viewer command, which allows one to browse a vast variety of ebook formats. I rarely use this feature, since I read my ebooks on an e-reader, on purpose. There is, besides, a good variety of ebook readers, on different platforms, that can replace Calibre here:

    • Atril, MATE's version of Evince, supports ePUBs (Evince doesn't seem to), but fails to load certain ebooks (book #1459 for example)
    • Bookworm looks very promising. It is not in Debian (Debian bug #883867), but is on Flathub. It scans books on exit, which can take a long time for an entire library (24+ hours here, and I had to kill pdftohtml a few times), without a progress bar. It does have a nice library browser, although covers seem to be sorted randomly; search works okay, however. It is unclear what happens when you add a book: it doesn't end up in the chosen on-disk library.
    • Buka is another "ebook" manager written in Javascript, but only supports PDFs for now.
    • coolreader is another alternative, not yet in Debian (#715470)
    • Emacs (of course) supports ebooks through nov.el
    • fbreader also supports ePUBs, but is much slower than all those others, and turned proprietary so is unmaintained
    • Foliate looks gorgeous and is built on top of the ePUB.js library, not in Debian, but Flathub.
    • GNOME Books is interesting, but relies on the GNOME search engine and doesn't find my books (and instead lots of other garbage). it's been described as "basic" and "the least mature" in this OMG Ubuntu review
    • koreader is a good alternative reader for the Kobo devices and now also has builds for Debian, but no Debian package
    • lucidor is a Firefox extension that can read and organize books, but is not packaged in Debian either (although upstream provides a .deb). It depends on older Firefox releases (or "Pale moon", a Firefox fork); see also the Firefox XULocalypse for details
    • MuPDF also reads ePUBs and is really fast, but the user interface is extremely minimal, and copy-paste doesn't work so well (think "Xpdf"). it also failed to load certain books (e.g. 1359) and warns about 3.0 ePUBs (e.g. book 1162)
    • Okular supports ePUBs when okular-extra-backends is installed
    • plato is another alternative reader for Kobo readers, not in Debian
  • an ebook editor: Calibre also ships with an ebook-edit command, which allows you to do all sorts of nasty things to your ebooks. I have rarely used this tool, having found it hard to use and not giving me the results I needed, in my use case (which was to reformat ePUBs before publication). For this purpose, Sigil is a much better option, now packaged in Debian. There are also various tools that render to ePUB: I often use the Sphinx documentation system for that purpose, and have been able to produce ePUBs from LaTeX for some projects.

  • a file converter: Calibre can convert between many ebook formats, to accommodate the various readers. In my experience, this doesn't work very well: the layout is often broken and I have found it's much better to find pristine copies of ePUB books than fight with the converter. There are, however, very few alternatives to this functionality, unfortunately.

  • a collection browser: this is the main functionality I would miss from Calibre. I am constantly adding books to my library, and Calibre does have this incredibly nice functionality of just hitting "add book" and Just Do The Right Thing™ after that. Specifically, what I like is that it:

    • sorts, views, and searches books in folders, per author, date, editor, etc.
    • quick search is especially powerful
    • allows downloading and editing metadata (like covers) easily
    • tracks read/unread status (although that's a custom field I had to add)

    Calibre is, as far as I know, the only tool that goes so deep in solving that problem. The Liber web server, however, does provide similar search and metadata functionality. It also supports migrating from an existing Calibre database as it can read the Calibre metadata stores. When no metadata is found, it fetches some from online sources (currently Google Books).

    One major limitation of Liber in this context is that it's solely search-driven: it will not let you see (for example) the "latest books added" or browse by author. It also doesn't support "uploading" books, although it will incrementally pick up new books added by hand to the library. In a way, it assumes Calibre already exists to properly curate the library; it is designed more as a search engine and book-sharing system between Liber instances.

    This also connects with the more general "book inventory" problem I have, which involves an inventory of physical books and a directory of online articles. See also firefox (Zotero section) and ?bookmarks for a longer discussion of that problem.

  • a metadata editor: the "collection browser" is based on a lot of metadata that Calibre indexes from the books. It can magically find a lot of stuff in the multitude of file formats it supports, something that is pretty awesome and impressive. For example, I just added a PDF file, and it found the book cover, author, publication date, publisher, language and the original mobi book id (!). It also added the book in the right directory and dumped that metadata and the cover in a file next to the book. And if that's not good enough, it can pull that data from various online sources like Amazon and Google Books.

    Maybe the work Peter Keel did could be useful in creating some tool which would do this automatically? Or maybe Sigil could help? Liber can also fetch metadata from Google Books, but not interactively.

    I still use Calibre mostly for this.

  • a device synchronization tool: I mostly use Calibre to synchronize books with an ebook-reader. It can also automatically update the database on the ebook with relevant metadata (e.g. collection or "shelves"), although I do not really use that feature. I do like to use Calibre to quickly search and prune books from my ebook reader, however. I might be able to use git-annex for this instead, given that I already use it to synchronize and back up my ebook collection in the first place...

  • an RSS reader: I used this for a while to read RSS feeds on my ebook-reader, but it was pretty clunky. Calibre would continuously generate new ebooks based on those feeds and I would never read them, because I would never find the time to transfer them to my ebook viewer in the first place. Instead, I use a regular RSS feed reader (I ended up writing my own, feed2exec), and when I find an article I like, I add it to Wallabag, which gets synced to my reader using wallabako, another tool I wrote.

  • an ebook web server: Calibre can also act as a web server, presenting your entire ebook collection as a website. It also supports acting as an OPDS directory, which is kind of neat. There are, as far as I know, no alternatives for such a system, although there are servers to share and store ebooks, like Trantor or Liber.
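An OPDS directory, incidentally, is just an Atom feed with "acquisition" links pointing at the book files. As a rough sketch of what such a catalogue entry looks like (the book data and the exact feed structure here are simplified inventions for illustration, not Calibre's actual output):

```python
# Minimal sketch of an OPDS-style (Atom-based) catalogue feed.
# Book data is made up for illustration.
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def opds_feed(title, books):
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = title
    for book in books:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = book["title"]
        author = ET.SubElement(entry, f"{{{ATOM}}}author")
        ET.SubElement(author, f"{{{ATOM}}}name").text = book["author"]
        # The acquisition link tells the reader where to download the book.
        ET.SubElement(entry, f"{{{ATOM}}}link", href=book["href"],
                      type="application/epub+zip")
    return ET.tostring(feed, encoding="unicode")

xml = opds_feed("My library", [
    {"title": "Example Book", "author": "A. Author", "href": "/books/1.epub"},
])
```

A reader app pointed at such a feed lists the entries and fetches each book through its acquisition link.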

Note that I might have forgotten functionality in Calibre in the above list: I'm only listing the things I have used or am using on a regular basis. For example, you can have a USB stick with Calibre on it to carry the actual software, along with the book library, around on different computers, but I never used that feature.

So there you go. It's a colossal task! And while it's great that Calibre does all those things, I can't help but think it would be better if Calibre was split up into multiple components, each maintained separately. I would love to use only the document converter, for example. It's possible to do that on the commandline, but it still means having the entire Calibre package installed.

Maybe a simple solution, from Debian's point of view, would be to split the package into multiple components, with the GUI and web servers packaged separately from the commandline converter. This way I would be able to install only the parts of Calibre I need and have limited exposure to other security issues. It would also make it easier to run Calibre headless, in a virtual machine or remote server for extra isolation, for example.

Update: this post generated some activity on Mastodon, follow the conversation here or on your favorite Mastodon instance.

06 October, 2019 07:27PM

hackergotchi for Norbert Preining

Norbert Preining

RIP (for now) Calibre in Debian

The current purge of all Python2 related packages has a direct impact on Calibre. The latest version of Calibre requires Python modules that are no longer available for Python 2, which means that Calibre >= 4.0 will for the foreseeable future not be available in Debian.

I have just uploaded version 3.48, which is the last version that can run on Debian. Until upstream Calibre switches to Python 3, this will remain the last version of Calibre in Debian.

In case you need newer features (including the occasional security fixes), I recommend switching to the upstream installer, which is rather clean (it installs into /opt/calibre, creates some links to the startup programs, and installs completions for zsh and bash). It also prepares an uninstaller that reverts these changes.

Enjoy.

06 October, 2019 02:14AM by Norbert Preining

October 05, 2019

Reproducible Builds

Reproducible Builds in September 2019

Welcome to the September 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick refresher of what our project is about, whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
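The consensus idea can be sketched in a few lines: if the build is deterministic, independent rebuilders only need to compare checksums of their artifacts rather than trust a single binary. (The `build` function below is a toy stand-in for a real, deterministic compilation step.)

```python
# Sketch of the consensus idea: independent rebuilders compare checksums
# instead of trusting one precompiled binary.
import hashlib

def build(source):
    # Toy stand-in for a deterministic compilation step.
    return source.upper().encode()

def checksum(artifact):
    return hashlib.sha256(artifact).hexdigest()

source = "int main(void) { return 0; }"
builder_a = checksum(build(source))   # one party builds...
builder_b = checksum(build(source))   # ...another rebuilds independently
assert builder_a == builder_b  # identical hashes => no tampering detected
```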

In September’s report, we cover:

  • Media coverage & events: more presentations, preventing Stuxnet, etc.
  • Upstream news: kernel reproducibility, grafana, systemd, etc.
  • Distribution work: reproducible images in Arch Linux, policy changes in Debian, etc.
  • Software development: yet more work on diffoscope, upstream patches, etc.
  • Misc news & getting in touch: from our mailing list, how to contribute, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage & events

This month Vagrant Cascadian attended the 2019 GNU Tools Cauldron in Montréal, Canada and gave a presentation entitled Reproducible Toolchains for the Win (video).

In addition, our project was highlighted as part of a presentation by Andrew Martin at the All Systems Go conference in Berlin titled Rootless, Reproducible & Hermetic: Secure Container Build Showdown, and Björn Michaelsen from the Document Foundation presented at the 2019 LibreOffice Conference in Almería in Spain on the status of reproducible builds in the LibreOffice office suite.

In academia, Anastasis Keliris and Michail Maniatakos from the New York University Tandon School of Engineering published a paper titled ICSREF: A Framework for Automated Reverse Engineering of Industrial Control Systems Binaries (PDF) that speaks to concerns regarding the security of Industrial Control Systems (ICS) such as those attacked via Stuxnet. The paper outlines their ICSREF tool for reverse-engineering binaries from such systems and furthermore demonstrates a scenario whereby a commercial smartphone equipped with ICSREF could be easily used to compromise such infrastructure.

Lastly, it was announced that Vagrant Cascadian will present a talk at SeaGL in Seattle, Washington during November titled There and Back Again, Reproducibly.


2019 Summit

Registration for our fifth annual Reproducible Builds summit that will take place between 1st → 8th December in Marrakesh, Morocco has opened and personal invitations have been sent out.

Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. If you would like to learn more about the event and how to register, please visit our dedicated event page.


Upstream news

Ben Hutchings added documentation to the Linux kernel regarding how to make the build reproducible. As he mentioned in the commit message, the kernel is “actually” reproducible but the end-to-end process was not previously documented in one place and thus Ben describes the workflow and environment needed to ensure a reproducible build.

Daniel Edgecumbe submitted a pull request which was subsequently merged to the logging/journaling component of systemd in order that the output of e.g. journalctl --update-catalog does not differ between subsequent runs despite there being no changes in the input files.

Jelle van der Waa noticed that if the grafana monitoring tool was built within a source tree devoid of Git metadata then the current timestamp was used instead, leading to an unreproducible build. To avoid this, Jelle submitted a pull request in order that it use SOURCE_DATE_EPOCH if available.
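The fix follows the standard SOURCE_DATE_EPOCH pattern: prefer the externally supplied timestamp, and fall back to the current time only when the variable is unset. Sketched here in Python for illustration (not grafana's actual code):

```python
# The usual SOURCE_DATE_EPOCH pattern: a build embeds a caller-supplied
# timestamp when one is provided, so rebuilds can be made deterministic.
import os
import time

def build_timestamp():
    try:
        return int(os.environ["SOURCE_DATE_EPOCH"])
    except KeyError:
        # Unreproducible fallback: only used when no epoch is supplied.
        return int(time.time())

os.environ["SOURCE_DATE_EPOCH"] = "1569888000"
assert build_timestamp() == 1569888000  # honours the supplied epoch
```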

Mes (a Scheme-based compiler for our “sister” bootstrappable builds effort) announced their 0.20 release.


Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Thunderbird and kernel-vanilla packages will be among the larger ones to become reproducible soon and there were additional Python patches to help reproducibility issues of modules written in this language that have C bindings.

OpenWrt is a Linux-based operating system targeting embedded devices such as wireless network routers. This month, Paul Spooren (aparcar) switched the toolchain to use GCC version 8 by default in order to support the -ffile-prefix-map= option, which permits a varying build path without affecting the binary result of the build []. In addition, Paul updated the kernel-defaults package to ensure that the SOURCE_DATE_EPOCH environment variable is considered when creating the /init directory.

Alexander “lynxis” Couzens began working on a set of build scripts for creating firmware and operating system artifacts in the coreboot distribution.

Lukas Pühringer prepared an upload which was sponsored by Holger Levsen of python-securesystemslib version 0.11.3-1 to Debian unstable. python-securesystemslib is a dependency of in-toto, a framework to protect the integrity of software supply chains.

Arch Linux

The mkinitcpio component of Arch Linux was updated by Daniel Edgecumbe in order that it generates reproducible initramfs images by default, meaning that two subsequent runs of mkinitcpio produce two files that are identical at the binary level. The commit message elaborates on its methodology:

Timestamps within the initramfs are set to the Unix epoch of 1970-01-01. Note that in order for the build to be fully reproducible, the compressor specified (e.g. gzip, xz) must also produce reproducible archives. At the time of writing, as an inexhaustive example, the lzop compressor is incapable of producing reproducible archives due to the insertion of a runtime timestamp.
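The same idea can be shown in miniature: clamp every timestamp in the archive to the epoch, keep the file ordering fixed, and use a compressor that embeds no wall-clock time, and two runs become bit-for-bit identical. (An illustrative Python sketch, not mkinitcpio's actual shell code.)

```python
# Build a tiny gzipped tar "image" twice and verify the outputs are
# byte-identical, by clamping all timestamps to the Unix epoch.
import gzip
import io
import tarfile

def build_image(files):
    buf = io.BytesIO()
    # mtime=0 keeps the gzip header free of the current time.
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for name in sorted(files):  # fixed member ordering, too
                info = tarfile.TarInfo(name)
                info.size = len(files[name])
                info.mtime = 0  # clamp to the Unix epoch (1970-01-01)
                tar.addfile(info, io.BytesIO(files[name]))
    return buf.getvalue()

files = {"init": b"#!/bin/sh\n", "etc/fstab": b""}
assert build_image(files) == build_image(files)  # bit-for-bit identical
```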

In addition, a bug was created to track progress on making the Arch Linux ISO images reproducible.

Debian

In July, Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated in the case that packages had to be manually approved in “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a patch for the issue that was merged and deployed this month.

Aurélien Jarno filed a bug against the Debian Policy (#940234) to request a section be added regarding the reproducibility of source packages. Whilst there is already a section about reproducibility in the Policy, it only mentions binary packages. Aurélien suggests that it:

… might be a good idea to add a new requirement that repeatedly building the source package in the same environment produces identical .dsc files.

In addition, 51 reviews of Debian packages were added, 22 were updated and 47 were removed this month, adding to our knowledge about identified issues. Many issue types were added by Chris Lamb including buildpath_in_code_generated_by_bison, buildpath_in_postgres_opcodes and ghc_captures_build_path_via_tempdir.

Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb uploaded versions 123, 124 and 125 and made the following changes:

  • New features:

    • Add /srv/diffoscope/bin to the Docker image path. (#70)
    • When skipping tests due to the lack of an installed tool, print the package that might provide it. []
    • Update the “no progressbar” logging message to match the parallel missing tlsh module warnings. []
    • Update “requires foo” messages to clarify that they are referring to Python modules. []
  • Testsuite updates:

    • The test_libmix_differences ELF binary test requires the xxd tool. (#940645)
    • Build the OCaml test input files on-demand rather than shipping them with the package in order to prevent test failures with OCaml 4.08. (#67)
    • Also conditionally skip the identification and “no differences” tests as we require the OCaml compiler to be present when building the test files themselves. (#940471)
    • Rebuild our test squashfs images to exclude the character device, as extracting it requires root or fakeroot. (#65)
  • Many code cleanups, including dropping some unnecessary control flow [], dropping unnecessary pass statements [] and dropping explicit inheritance from the object class as it is unnecessary in Python 3 [].

In addition, Marc Herbert completely overhauled the handling of ELF binaries, particularly around many assumptions that were previously being made via file extensions, etc. [][][], and updated the testsuite to support a newer version of the coreboot utilities []. Mattia Rizzolo then ensured that diffoscope does not crash when the progress bar module is missing but the functionality was requested [] and made our version checking code more lenient []. Lastly, Vagrant Cascadian not only updated diffoscope to versions 123 and 125, he also enabled a more complete test suite in the GNU Guix distribution. [][][][][][]

Project website

There was yet more effort put into our website this month, including:

In addition, Cindy Kim added in-toto to our “Who is Involved?” page, James Fenn updated our homepage to fix a number of spelling and grammar issues [] and Peter Conrad added BitShares to our list of projects interested in Reproducible Builds [].

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Marc Herbert made a huge number of changes including:

  • GNU ar handler:
    • Don’t corrupt the pseudo file mode of the symbols table.
    • Add test files for “symtab” (/) and long names (//).
    • Don’t corrupt the SystemV/GNU table of long filenames.
  • Add a new $File::StripNondeterminism::verbose global and, if enabled, tell the user that ar(1) could not set the symbol table’s mtime.

In addition, Chris Lamb performed some issue investigation with the Debian Perl Team regarding issues in the Archive::Zip module including a problem with corruption of members that use bzip compression as well as a regression whereby various metadata fields were not being updated that was reported in/around Debian bug #940973.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

  • Alexander “lynxis” Couzens:
    • Fix missing .xcompile in the build system. []
    • Install the GNAT Ada compiler on all builders. []
    • Don’t install the iasl ACPI power management compiler/decompiler. []
  • Holger Levsen:
    • Correctly handle the $DEBUG variable in OpenWrt builds. []
    • Refactor the code that notifies the #archlinux-reproducible IRC channel about problems in this distribution. []
    • Ensure that only one mail is sent when rebooting nodes. []
    • Unclutter the output of a Debian maintenance job. []
    • Drop a “todo” entry as we have been varying on a merged /usr for some time now. []

In addition, Paul Spooren added an OpenWrt snapshot build script which downloads .buildinfo and related checksums from the relevant download server and attempts to rebuild and then validate them for reproducibility. []

The usual node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [][].

reprotest

reprotest is our end-user tool to build the same source code twice in different environments and then check the binaries produced by each build for differences. This month, a change by Dmitry Shachnev was merged to not use the faketime wrapper at all when asked not to vary time [] and Holger Levsen subsequently released this as version 0.7.9 after dramatically overhauling the packaging [][].


Misc news & getting in touch

On our mailing list Rebecca N. Palmer started a thread titled Addresses in IPython output which points out and attempts to find a solution to a problem with Python packages, whereby objects that don’t have an explicit string representation have a default one that includes their memory address. This causes problems with reproducible builds if/when such output appears in generated documentation.
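The issue is easy to demonstrate: CPython's default repr embeds the object's memory address, which differs between runs, while an explicit repr is deterministic. (A small sketch; `Widget` and `FixedWidget` are made-up example classes.)

```python
# Default repr leaks the object's memory address into any captured output,
# e.g. <__main__.Widget object at 0x7f3a2c1d0e80>, which varies run to run.
class Widget:
    pass

w = Widget()
assert "0x" in repr(w)  # the address shows up in the default repr

# Defining an explicit, deterministic repr avoids the problem when such
# output ends up in generated documentation.
class FixedWidget:
    def __repr__(self):
        return "<FixedWidget>"

assert repr(FixedWidget()) == "<FixedWidget>"
```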

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:



This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

05 October, 2019 08:43PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Setting a Lotus Pot

Experiences setting up a Lotus Pond Pot

A novice’s first time experience setting up a Lotus pond and germinating the Lotus seeds to a full plant

The trigger

Our neighbors have a very nice Lotus setup at the front of their garden, with flowers blooming in it. It is a really pleasing sight. With lifestyles limiting us to our own circles, I'm glad to be around like-minded people. So we decided to set up a Lotus Pond Pot in our garden too.

Hunting the pot

About 2/3rd of our garden has been laid out with Mexican Grass, with the exception of some odd spots where there are 2 large concrete tanks in the ground and other small tanks. The large one is fairly big, around 3.5 x 3.5 ft. We wanted to make use of that spot. I checked out some of the available pots and found a good-looking circular pot carved in stone, of around 2 ft, but it was quite expensive and didn't fit my budget.

With the stone pot out of budget, the other options left were

  • One molded in cement
  • Granite

We looked at another pot maker who molds pots in cement. They had some small pot samples of around 1 x 1 ft, but the finished products didn't look very good. From the available samples, we weren't sure how the pot we wanted would turn out. The price difference wasn't proportionate either, and the vendor would take a month's time to make one. With no ready-made sample of that size in place, this wasn't something we were very enthused to explore.

We instead chose to explore the possibility of building one with granite slabs. The first reason was that granite vendors were more easily available. But second, and most important, given the spot I had chosen, I felt a square granite-based setup would be an equally good fit. The granite pot was also budget-friendly. Finally, we settled on a 3 x 3 ft granite pot.

And this is what our initial Lotus Pot looked like.

Granite Lotus Pot

Note: The granite pot was quite heavy, close to 200 kilograms. It took 3 people to place it at the designated spot.

Actual Lotus pot

As you can see from the picture above, there's another pot inside the larger granite pot itself. Lotuses grow in water and sludge, so we needed to provide the same setup. We used one of our Biryani pots to hold the sludge. This is where the Lotus's roots and tuber will live and grow, in the water and sludge.

With an open pot, when you have water stored, you have other challenges to take care of.

  • Aeration of the water
  • Mosquitoes

We bought a solar water fountain to take care of the aeration. It is a nice device that works very well under sunlight. Remember, your Lotus needs a well sunlit area, so the solar water fountain was a perfect fit for this scenario.

But with standing water come mosquitoes. As most people would, we chose to put some fish into the pond to take care of that problem. We put in some Mollies, which we hope will multiply. Also, the waste they produce should work as a supplemental fertilizer for the Lotus plant.

Aeration is very important for the fish and so far our Solar Water Fountain seems to be doing a good job there, keeping the fishes happy and healthy.

So in around a week, we had the Lotus Pot in place along with a water fountain and some fish.

The Lotus Plant itself

With the setup in place, it was now time to head to the nearby nursery and get the Lotus plant. It was an utter surprise not to find a Lotus plant in any of the nurseries. After quite a lot of searching, we came across one nursery that had some lotus plants. They didn't look in good health, but they were the only option we had. The bigger disappointment was the price, which was insanely high for a plant. We returned home without the Lotus, a little disheartened.

Thank you Internet

The internet has connected the world so well. I looked it up and was delighted to find so many people sharing their experiences with Lotus, in articles and YouTube videos. You can find all the necessary information online. With that, we were all charged up for the next step: growing the lotus from seeds instead.

The Lotus seeds

First, we looked online and placed an order for 15 lotus seeds. Soon, they were delivered. And this is what Lotus seeds look like.

Lotus Seeds

I had never in my life seen lotus seeds before. The shell is very, very hard. An impressive remnant of the Lotus plant.

Germinating the seed

Germinating the seed is an experience of its own, given how hard the lotus shell is. There are very good articles and videos on the internet explaining the steps to germinate a seed. In a nutshell, you need to scratch the pointy end of the seed's shell enough to see the inner membrane, and then submerge it in water for around 7-10 days. Every day, you'll be able to witness the germination process.

Lotus seed with the pointy end

Make sure to scratch the pointy end well, while ensuring you do not damage the inner membrane of the seed. You also need to change the water in the container daily and keep it in a decently lit area.

Here’s an interesting bit from my own experience. It turned out that the seeds we purchased online weren't of good quality. Except for one, none of the seeds germinated. And the one that did, did not sprout properly: it popped a single shoot, but the shoot did not grow much. In all, it didn't work.

But while we were waiting for the seeds to be delivered, my wife looked at the pictures of the seeds I was studying online and realized that we already had lotus seeds at home. It turns out these seeds are used in our Hindu puja rituals; they are called कमल गट्टा (kamal gatta, "lotus seeds") in Hindi. We had some of the seeds remaining from a puja, so in parallel we used those for germination, and they sprouted very well.

Unfortunately, I had not taken any pictures of them during the germination phase.

Update: Sat 12 Oct 2019

The sprouting phase should be done in a tall glass. This allows the shoots to grow long, as eventually they need to be set into the actual pot. During the germination phase, the shoot will grow around an inch a day, ultimately aiming to reach the surface of the water. Once it reaches the surface, it will eventually start developing roots at the base.

Now it is time to start preparing to sow the seed into the sub-pot which holds the sludge.

Sowing your seeds

Initial Lotus growing in the sludge

This picture is from a couple of days after I sowed the seeds. When you transfer the shoots into the sludge, use your finger to make a hole in the sludge, creating room for the seed. Gently place the seed into the sludge. You can also cover the sub-pot with some gravel, just to keep the sludge intact and clean.

Once your shoots get accustomed to the new environment, they start to grow again. That is what you see in the picture above, where the shoot reaches the surface of the water and then starts developing into the beautiful lotus leaf.

Lotus leaf

It is a pleasure to see the Lotus leaves floating on the water. The flowering is going to take well over a year, from what I have read on the internet. But the Lotus leaves, their veins and the water droplets on them, are already very soothing to watch.

Lotus Closeup

Lotus leaf veins

Lotus leaf veins and water droplets

Final Result

As of today, this is what the final setup looks like. Hopefully, in a year's time, there'll be flowers.

Lotus pot setup now

Update Sat 12 Oct 2019

So I was able to capture some pictures from the Lotus pot, in which you can see the Lotus shoot while inside the water, the Lotus shoot when it reaches the surface, and finally the Lotus shoot developing its leaf.

Lotus Shoot Inside Water

Lotus Shoot Reaching Surface

Lotus Shoot Developing Leaf

05 October, 2019 04:26PM by Ritesh Raj Sarraf (rrs@researchut.com)

October 04, 2019

John Goerzen

Resurrecting Ancient Operating Systems on Debian, Raspberry Pi, and Docker

I wrote recently about my son playing Zork on a serial terminal hooked up to a PDP-11, and how I eventually bought a vt420 (ok, some vt420s and vt510s, I couldn’t stop at one) and hooked it up to a Raspberry Pi.

This led me down another path: there is a whole set of hardware and software that I’ve never used. For some, it fell out of favor before I could read (and for others, before I was even born).

The thing is – so many of these old systems have a legacy that we live in today. So much so, in fact, that we are now seeing articles about how modern CPUs are fast PDP-11 emulators in a sense. The PDP-11, and its close association with early Unix, lives on in the sense that its design influenced microprocessors and operating systems to this day. The DEC vt100 terminal is, nowadays, known far better as that thing that is emulated, but it was, in fact, a physical thing. Some of this goes back into even mistier times; Emacs, for instance, grew out of the MIT ITS project but was later ported to TOPS-20 before being associated with Unix. vi grew up in 2BSD, and according to Wikipedia, was so large it could barely fit in the memory of a PDP-11/70. Also in 2BSD, a buggy version of Zork appeared — so buggy, in fact, that the save game option was broken. All of this happened in the late 70s.

When we think about the major developments in computing, we often hear of companies like IBM, Microsoft, and Apple. Of course their contributions are undeniable, and emulators for old versions of DOS are easily available for every major operating system, plus many phones and tablets. But as the world is moving heavily towards Unix-based systems, the Unix heritage is far more difficult to access.

My plan with purchasing and setting up an old vt420 wasn’t just to do that and then leave. It was to make it useful for modern software, and also to run some of these old systems under emulation.

To that end, I have released my vintage computing collection – both a script for setting up on a system, and a docker image. You can run Emacs and TECO on TOPS-20, zork and vi on 2BSD, even Unix versions 5, 6, and 7 on a PDP-11. And for something particularly rare, RDOS on a Data General Nova. I threw in some old software compiled for modern systems: Zork, Colossal Cave, and Gopher among them. The bsdgames collection and some others are included as well.

I hope you enjoy playing with the emulated big-iron systems of the 70s and 80s. And in a dramatic turnabout of scale and cost, those machines which used to cost hundreds of thousands of dollars can now be run far faster under emulation on a $35 Raspberry Pi.

04 October, 2019 10:09PM by John Goerzen

hackergotchi for Ben Hutchings

Ben Hutchings

Kernel Recipes 2019, part 2

This conference only has a single track, so I attended almost all the talks. This time I didn't take notes but I've summarised all the talks I attended. This is the second and last part of that; see part 1 if you missed it.

XDP closer integration with network stack

Speaker: Jesper Dangaard Brouer

Details and slides: https://kernel-recipes.org/en/2019/xdp-closer-integration-with-network-stack/

Video: Youtube

The speaker introduced XDP and how it can improve network performance.

The Linux network stack is extremely flexible and configurable, but this comes at some performance cost. The kernel has to generate a lot of metadata about every packet and check many different control hooks while handling it.

The eXpress Data Path (XDP) was introduced a few years ago to provide a standard API for doing some receive packet handling earlier, in a driver or in hardware (where possible). XDP rules can drop unwanted packets, forward them, pass them directly to user-space, or allow them to continue through the network stack as normal.

He went on to talk about how recent and proposed future extensions to XDP allow re-using parts of the standard network stack selectively.

This talk was meant for kernel developers in general, but I don't think it would be understandable without some prior knowledge of the Linux network stack.

Faster IO through io_uring

Speaker: Jens Axboe

Details and slides: https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring/

Video: Youtube. (This is part way through the talk, but the earlier part is missing audio.)

The normal APIs for file I/O, such as read() and write(), are blocking, i.e. they make the calling thread sleep until I/O is complete. There is a separate kernel API and library for asynchronous I/O (AIO), but it is very restricted; in particular it only supports direct (uncached) I/O. It also requires two system calls per operation, whereas blocking I/O only requires one.

Recently the io_uring API was introduced as an entirely new API for asynchronous I/O. It uses ring buffers, similar to hardware DMA rings, to communicate operations and completion status between user-space and the kernel, which is far more efficient. It also removes most of the restrictions of the current AIO API.

The speaker went into the details of this API and showed performance comparisons.

The Next Steps toward Software Freedom for Linux

Speaker: Bradley Kuhn

Details: https://kernel-recipes.org/en/2019/talks/the-next-steps-toward-software-freedom-for-linux/

Slides: http://ebb.org/bkuhn/talks/Kernel-Recipes-2019/kernel-recipes.html

Video: Youtube

The speaker talked about the importance of the GNU GPL to the development of Linux, in particular the ability of individual developers to get complete source code and to modify it to their local needs.

He described how, for a large proportion of devices running Linux, the complete source for the kernel is not made available, even though this is required by the GPL. So there is a need for GPL enforcement—demanding full sources from distributors of Linux and other works covered by GPL, and if necessary suing to obtain them. This is one of the activities of his employer, Software Freedom Conservancy, and has been carried out by others, particularly Harald Welte.

In one notable case, the Linksys WRT54G, the release of source after a lawsuit led to the creation of the OpenWRT project. This is still going many years later and supports a wide range of networking devices. He proposed that the Conservancy's enforcement activity should, in the short term, concentrate on a particular class of device where there would likely be interest in creating a similar project.

Suricata and XDP

Speaker: Eric Leblond

Details and slides: https://kernel-recipes.org/en/2019/talks/suricata-and-xdp/

Video: Youtube

The speaker described briefly how an Intrusion Detection System (IDS) interfaces to a network, and why it's important to be able to receive and inspect all relevant packets.

He then described how the Suricata IDS uses eXpress Data Path (XDP, explained in an earlier talk) to filter and direct packets, improving its ability to handle very high packet rates.

CVEs are dead, long live the CVE!

Speaker: Greg Kroah-Hartman

Details and slides: https://kernel-recipes.org/en/2019/talks/cves-are-dead-long-live-the-cve/

Video: Youtube

Common Vulnerabilities and Exposures Identifiers (CVE IDs) are a standard, compact way to refer to specific software and hardware security flaws.
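As an aside, the ID format itself is easy to check mechanically. This is my own quick sketch (not from the talk): since the 2014 syntax change, the sequence number may have four or more digits.

```python
import re

# CVE IDs look like CVE-YYYY-NNNN..., with a four-digit year and a
# sequence number of at least four digits (more allowed since 2014).
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

for candidate in ["CVE-2019-9511", "CVE-2014-0160", "CVE-19-1", "cve-2019-9511"]:
    print(candidate, bool(CVE_RE.match(candidate)))
```

Only the first two candidates match; officially assigned IDs are uppercase.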

The speaker explained problems with the way CVE IDs are currently assigned and described, including assignments for bugs that don't impact security, lack of assignment for many bugs that do, incorrect severity scores, and missing information about the changes required to fix the issue. (My work on CIP's kernel CVE tracker addresses some of these problems.)

The average time between assignment of a CVE ID and a fix being published is apparently negative for the kernel, because most such IDs are being assigned retrospectively.

He proposed to replace CVE IDs with "change IDs" (i.e. abbreviated git commit hashes) identifying bug fixes.

Driving the industry toward upstream first

Speaker: Enric Balletbo i Serra

Details and slides: https://kernel-recipes.org/en/2019/talks/driving-the-industry-toward-upstream-first/

Video: Youtube

The speaker talked about how the Chrome OS developers have tried to reduce the difference between the kernels running on Chromebooks, and the upstream kernel versions they are based on. This has succeeded to the point that it is possible to run a current mainline kernel on at least some Chromebooks (which he demonstrated).

Formal modeling made easy

Speaker: Daniel Bristot de Oliveira

Details and slides: https://kernel-recipes.org/en/2019/talks/formal-modeling-made-easy/

Video: Youtube

The speaker explained how formal modelling of (parts of) the kernel could be valuable. A formal model will describe how some part of the kernel works, in a way that can be analysed and proven to have certain properties. It is also necessary to verify that the model actually matches the kernel's implementation.

He explained the methodology he used for modelling the real-time scheduler provided by the PREEMPT_RT patch set. The model used a number of finite state machines (automata), with conditions on state transitions that could refer to other state machines. He added (I think) tracepoints for all state transitions in the actual code and a kernel module that verified that at each such transition the model's conditions were met.
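To make the approach concrete, here is a toy sketch of the core idea (mine, not the speaker's actual tooling): encode an automaton as a transition table and replay an observed event trace against it, flagging any transition the model does not allow. The states and events below are invented for illustration.

```python
# Sketch: verify an observed event trace against a finite-state model.
# The model and events are invented; the real work modelled the
# PREEMPT_RT scheduler with many interacting automata.

def verify_trace(transitions, start, trace):
    """Replay trace against the model; return (ok, final_state).

    transitions: dict mapping (state, event) -> next state.
    A missing entry means the event is illegal in that state,
    i.e. the implementation diverged from the model.
    """
    state = start
    for event in trace:
        nxt = transitions.get((state, event))
        if nxt is None:
            return False, state  # model violation at this event
        state = nxt
    return True, state

# Toy model of a task: it may be woken, scheduled in, preempted, or sleep.
MODEL = {
    ("sleeping", "wakeup"): "runnable",
    ("runnable", "schedule_in"): "running",
    ("running", "preempt"): "runnable",
    ("running", "sleep"): "sleeping",
}

print(verify_trace(MODEL, "sleeping",
                   ["wakeup", "schedule_in", "preempt", "schedule_in", "sleep"]))
print(verify_trace(MODEL, "sleeping", ["schedule_in"]))  # illegal transition
```

In the kernel case the "trace" came from tracepoints at the real transitions, and the checker ran as a kernel module rather than a script.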

In the process of this he found a number of bugs in the scheduler.

Kernel documentation: past, present, and future

Speaker: Jonathan Corbet

Details and slides: https://kernel-recipes.org/en/2019/kernel-documentation-past-present-and-future/

Video: Youtube

The speaker is the maintainer of the Linux kernel's in-tree documentation. He spoke about how the documentation has been reorganised and reformatted in the past few years, and what work is still to be done.

GNU poke, an extensible editor for structured binary data

Speaker: Jose E Marchesi

Details and slides: https://kernel-recipes.org/en/2019/talks/gnu-poke-an-extensible-editor-for-structured-binary-data/

Video: Youtube

The speaker introduced and demonstrated his project, the "poke" binary editor, which he thinks is approaching a first release. It has a fairly powerful and expressive language which is used for both interactive commands and scripts. Type definitions are somewhat C-like, but poke adds constraints, offset/size types with units, and types of arbitrary bit width.

The expected usage seems to be that you write a script ("pickle") that defines the structure of a binary file format, use poke interactively or through another script to map the structures onto a specific file, and then read or edit specific fields in the file.

04 October, 2019 01:30PM

Norbert Preining

TensorFlow 2.0 with GPU on Debian/sid

Some time ago I wrote about how to get TensorFlow (1.x) running on the then-current Debian/sid. It turns out those instructions are no longer correct and need an update, so here it is: getting the most up-to-date TensorFlow 2.0 with NVIDIA GPU support running on Debian/sid.

Step 1: Install CUDA 10.0

Follow more or less the instructions here and do

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-0

Warning! Don’t install the 10-1 version since the TensorFlow binaries need 10.0.

This will install lots of libs into /usr/local/cuda-10.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-0.conf.

Step 2: Install CUDA 10.0 CuDNN

One hard-to-satisfy dependency is the cuDNN libraries. In our case we need the version 7 library for CUDA 10.0. To download these files one needs an NVIDIA developer account, which is quick and painless to create. After that, go to the cuDNN page and select Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.0, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

At the moment (as of today) this will download a file libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb.

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2019-10-04 17:29:26.020013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3390 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
tf.Tensor(444.98087, shape=(), dtype=float32)
$

I haven’t tried to get R working with the newest TensorFlow/Keras combination, though. Hope the above helps.

04 October, 2019 08:37AM by Norbert Preining

Matthew Garrett

Investigating the security of Lime scooters

(Note: to be clear, this vulnerability does not exist in the current version of the software on these scooters. Also, this is not the topic of my Kawaiicon talk.)

I've been looking at the security of the Lime escooters. These caught my attention because:
(1) There's a whole bunch of them outside my building, and
(2) I can see them via Bluetooth from my sofa
which, given that I'm extremely lazy, made them more attractive targets than something that would actually require me to leave my home. I did some digging. Limes run Linux and have a single running app that's responsible for scooter management. They have an internal debug port that exposes USB and which, until this happened, ran adb (as root!) over USB. As a result, there's a fair amount of information available in various places, which made it easier to start figuring out how they work.

The obvious attack surface is Bluetooth (Limes have wifi, but only appear to use it to upload lists of nearby wifi networks, presumably for geolocation if they can't get a GPS fix). Each Lime broadcasts its name as Lime-12345678 where 12345678 is 8 digits of hex. They implement Bluetooth Low Energy and expose a custom service with various attributes. One of these attributes (0x35 on at least some of them) sends Bluetooth traffic to the application processor, which then parses it. This is where things get a little more interesting. The app has a core event loop that can take commands from multiple sources and then makes a decision about which component to dispatch them to. Each command is of the following form:

AT+type,password,time,sequence,data$

where type is one of either ATH, QRY, CMD or DBG. The password is a TOTP derived from the IMEI of the scooter, the time is simply the current date and time of day, the sequence is a monotonically increasing counter and the data is a blob of JSON. The command is terminated with a $ sign. The code is fairly agnostic about where the command came from, which means that you can send the same commands over Bluetooth as you can over the cellular network that the Limes are connected to. Since locking and unlocking is triggered by one of these commands being sent over the network, it ought to be possible to do the same by pushing a command over Bluetooth.
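For illustration only, here is a sketch of that framing in Python. The helper names and field values are mine, and the real password is a TOTP derived from the scooter's IMEI in a way not detailed here; only the AT+type,password,time,sequence,data$ shape comes from the description above.

```python
import json

# The four command types named in the post.
TYPES = {"ATH", "QRY", "CMD", "DBG"}

def build_command(ctype, password, time_str, sequence, data):
    """Frame a command in the AT+type,password,time,sequence,data$ shape.
    'password' would really be an IMEI-derived TOTP; here it is just an
    opaque placeholder string."""
    if ctype not in TYPES:
        raise ValueError("unknown command type: %r" % ctype)
    return "AT+%s,%s,%s,%d,%s$" % (ctype, password, time_str,
                                   sequence, json.dumps(data))

def parse_command(raw):
    """Split a framed command back into its five fields."""
    if not raw.startswith("AT+") or not raw.endswith("$"):
        raise ValueError("bad framing")
    # Limit the split so commas inside the JSON blob stay intact.
    ctype, password, time_str, sequence, data = raw[3:-1].split(",", 4)
    return ctype, password, time_str, int(sequence), json.loads(data)

cmd = build_command("CMD", "123456", "2019-10-04 17:29:26", 42, {"op": "unlock"})
print(cmd)
print(parse_command(cmd))
```

The interesting property is the one noted above: the dispatcher doesn't care whether such a frame arrived over Bluetooth or over the cellular link.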

Unfortunately for nefarious individuals, all commands sent over Bluetooth are ignored until an authentication step is performed. The code I looked at had two ways of performing authentication - you could send an authentication token that was derived from the scooter's IMEI and the current time and some other stuff, or you could send a token that was just an HMAC of the IMEI and a static secret. Doing the latter was more appealing, both because it's simpler and because doing so flipped the scooter into manufacturing mode at which point all other command validation was also disabled (bye bye having to generate a TOTP). But how do we get the IMEI? There's actually two approaches:

1) Read it off the sticker that's on the side of the scooter (obvious, uninteresting)
2) Take advantage of how the scooter's Bluetooth name is generated

Remember the 8 digits of hex I mentioned earlier? They're generated by taking the IMEI, encrypting it using DES and a static key (0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88), discarding the first 4 bytes of the output and turning the last 4 bytes into 8 digits of hex. Since we're discarding information, there's no way to immediately reverse the process - but IMEIs for a given manufacturer are all allocated from the same range, so we can just take the entire possible IMEI space for the modem chipset Lime use, encrypt all of them and end up with a mapping of name to IMEI (it turns out this doesn't guarantee that the mapping is unique - for around 0.01%, the same name maps to two different IMEIs). So we now have enough information to generate an authentication token that we can send over Bluetooth, which disables all further authentication and enables us to send further commands to disconnect the scooter from the network (so we can't be tracked) and then unlock and enable the scooter.
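The precomputation idea can be sketched like this. Note the stand-in cipher: Python's standard library has no DES, so a truncated MD5 digest stands in for "encrypt the IMEI with the fixed DES key"; the IMEI prefix and range below are likewise invented. Only the shape of the attack (encrypt every candidate IMEI, keep 4 bytes, build a name-to-IMEI table) follows the post.

```python
import hashlib

def name_suffix(imei):
    """Placeholder for Lime's derivation: encrypt the IMEI, drop the
    first 4 bytes of the output, hex-encode the last 4.  A hash stands
    in for DES here, purely for illustration."""
    digest = hashlib.md5(imei.encode()).digest()  # stand-in for DES
    return digest[4:8].hex().upper()

def build_mapping(serials):
    """Precompute suffix -> IMEIs over an allocation range.  Because
    4 bytes are discarded, collisions are possible (the post saw the
    same name map to two IMEIs for around 0.01% of names)."""
    mapping = {}
    for n in serials:
        imei = "35%013d" % n  # hypothetical TAC prefix + serial
        mapping.setdefault(name_suffix(imei), []).append(imei)
    return mapping

# Map a toy slice of the space, then look up one scooter name.
table = build_mapping(range(1000))
suffix = name_suffix("35%013d" % 123)
print("Lime-%s ->" % suffix, table[suffix])
```

Against the real 8-hex-digit space the table would be precomputed once for the whole IMEI range of the modem chipset, exactly as described above.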

(Note: these are actual crimes)

This all seemed very exciting, but then a shock twist occurred - earlier this year, Lime updated their authentication method and now there's actual asymmetric cryptography involved and you'd need to engage in rather more actual crimes to obtain the key material necessary to authenticate over Bluetooth, and all of this research becomes much less interesting other than as an example of how other companies probably shouldn't do it.

In any case, congratulations to Lime on actually implementing security!

04 October, 2019 06:04AM

October 03, 2019

Joey Hess

Project 62 Valencia Floor Lamp review

From Target, this brass finish floor lamp evokes 60's modernism, updated for the mid-Anthropocene with a touch plate switch.

The integrated microcontroller consumes a mere 2.2 watts while the lamp is turned off, in order to allow you to turn the lamp on with a stylish flick. With a 5 watt LED bulb (sold separately), the lamp will total a mere 7.2 watts while on, making it extremely energy efficient. While off, the lamp consumes a mere 19 kilowatt-hours per year.
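For the record, the standby figure checks out; a quick sanity check, assuming the lamp spends essentially the whole year switched off:

```python
# 2.2 W of phantom load, running 24 hours a day all year.
standby_watts = 2.2
hours_per_year = 24 * 365
kwh_per_year = standby_watts * hours_per_year / 1000
print(round(kwh_per_year, 1))  # ~19.3 kWh/year just to keep the touch plate waiting
```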

[Photos: clamp multimeter reading 0.02 amps AC, connected to a small circuit board with a yellow capacitor, a coil, and a heat-sinked IC; the lamp from the rear, with a small round rocker switch added to the top of its half-globe shade]

Though the lamp shade at first appears perhaps flimsy, while you are drilling a hole in it to add a physical switch, you will discover metal, though not brass all the way through. Indeed, this lamp should last for generations, should the planet continue to support human life for that long.

As an additional bonus, the small plastic project box that comes free in this lamp will delight any electrical enthusiast. As will the approximately 1-hour conversion process to delete the touch-switch phantom load. The 2 cubic feet of styrofoam packaging are less delightful.

Two allen screws attach the pole to the base; one was missing in my lamp. Also, while the base is very heavily weighted, the lamp still rocks a bit when using the aftermarket switch. So I am forced to give it a mere 4 out of 5 stars.

front view of lit lamp beside a bookcase

03 October, 2019 08:30PM

Molly de Blanc

Free software activities (September 2019)

September marked the end of summer and the end of my summer travel.  Paid and non-paid activities focused on catching up with things I fell behind on while traveling. Towards the middle of September, the world of FOSS blew up, and then blew up again, and then blew up again.

A photo of a river with the New York skyline in the background.

Free software activities: Personal

  • I caught up on some Debian Community Team emails I’ve been behind on. The CT is in search of new team members. If you think you might be interested in joining, please contact us.
  • After much deliberation, the OSI decided to appoint two directors to the board. We will decide who they will be in October, and are welcoming nominations.
  • On that note, the OSI had a board meeting.
  • Wrote a blog post on rights and freedoms to create a shared vocabulary for future writing concerning user rights. I also wrote a bit about leadership in free software.
  • I gave out a few pep talks. If you need a pep talk, hmu.

Free software activities: Professional

  • Wrote and published the September Friends of GNOME Update.
  • Interviewed Sammy Fung for the GNOME Engagement Blog.
  • Did a lot of behind the scenes work for GNOME, that you will hopefully see more of soon!
  • I spent a lot of time fighting with CiviCRM.
  • I attended GitLab Commit on behalf of GNOME, to discuss how we implement GitLab.

 

03 October, 2019 04:49PM by mollydb

Thorsten Alteholz

My Debian Activities in September 2019

FTP master

This month I accepted 246 packages and rejected 28. The overall number of packages that got accepted was 303.

Debian LTS

This was my sixty-third month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 23.75h. During that time I did LTS uploads of:

    [DLA 1911-1] exim4 security update for 1 CVE
    [DLA 1936-1] cups security update for 1 CVE
    [DLA 1935-1] e2fsprogs security update for 1 CVE
    [DLA 1934-1] cimg security update for 8 CVEs
    [DLA 1939-1] poppler security update for 3 CVEs

I also started to work on opendmarc and spip but did not finish testing yet.
Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the sixteenth ELTS month.

During my allocated time I uploaded:

  • ELA-160-1 of exim4
  • ELA-166-1 of libpng
  • ELA-167-1 of cups
  • ELA-169-1 of openldap
  • ELA-170-1 of e2fsprogs

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-rivo-uniseg, golang-github-bruth-assert, golang-github-xlab-handysort, golang-github-paypal-gatt.

I also sponsored the following packages: golang-gopkg-libgit2-git2go.v28.

03 October, 2019 03:08PM by alteholz

Birger Schacht

Installing and running Signal on Tails

Because the topic comes up every now and then, I thought I’d write down how to install and run Signal on Tails. These instructions are based on the 2nd Beta of Tails 4.0 - the 4.0 release is scheduled for October 22nd. I’m not sure if these steps also work on Tails 3.x, I seem to remember having some problems with installing flatpaks on Debian Stretch.

The first thing to do is to enable the Additional Software feature of Tails persistence (the Personal Data feature is also required, but that one is enabled by default when configuring persistence). Don’t forget to reboot afterwards. When logging in after the reboot, please set an Administration Password.

My approach to running Signal on Tails uses flatpak, so install flatpak either via Synaptic or on the command line:

sudo apt install flatpak

Tails then asks if you want to add flatpak to your additional software, and I recommend doing so. The list of additional software can be checked via Applications → System Tools → Additional Software. The next thing you need to do is set up the directories: flatpak installs software packages either system-wide in $prefix/var/lib/flatpak/[1] or per user in $HOME/.local/share/flatpak/ (the latter lets you manage your flatpaks without having to use elevated permissions). User-specific data of the apps goes into $HOME/.var/app. This means we have to create directories in our Persistent folder for those two locations and then link them to their targets in /home/amnesia.

I recommend putting these commands into a script (i.e. /home/amnesia/Persistent/flatpak-setup.sh) and making it executable (chmod +x /home/amnesia/Persistent/flatpak-setup.sh):

#!/bin/sh

mkdir -p /home/amnesia/Persistent/flatpak
mkdir -p /home/amnesia/.local/share
ln -s /home/amnesia/Persistent/flatpak /home/amnesia/.local/share/flatpak
mkdir -p /home/amnesia/Persistent/app
mkdir -p /home/amnesia/.var
ln -s /home/amnesia/Persistent/app /home/amnesia/.var/app

Now you need to add a flatpak remote and install signal:

amnesia@amnesia:~$ torify flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
amnesia@amnesia:~$ torify flatpak install flathub org.signal.Signal

This will take a couple of minutes.

To show Signal the way to the next whiskey bar through Tor, the HTTP_PROXY and HTTPS_PROXY environment variables have to be set. Again, I recommend putting this into a script (i.e. /home/amnesia/Persistent/signal.sh):

#!/bin/sh

export HTTP_PROXY=socks://127.0.0.1:9050
export HTTPS_PROXY=socks://127.0.0.1:9050
flatpak run org.signal.Signal

[Screenshot: Signal on Tails 4] Yay, it works!

To update Signal you have to run

amnesia@amnesia:~$ torify flatpak update

To make the whole thing a bit more comfortable, the folder softlinks can be created automatically on login using a GNOME autostart script. For that to work you have to have the Dotfiles feature of Tails enabled. Then you can create a /live/persistence/TailsData_unlocked/dotfiles/.config/autostart/FlatpakSetup.desktop file:

[Desktop Entry]
Name=FlatpakSetup
GenericName=Setup Flatpak on Tails
Comment=This script runs the flatpak-setup.sh script on start of the user session
Exec=/live/persistence/TailsData_unlocked/Persistent/flatpak-setup.sh
Terminal=false
Type=Application

By adding a /live/persistence/TailsData_unlocked/dotfiles/.local/share/applications/Signal.desktop file to the dotfiles folder, Signal also shows up among the GNOME applications with a nice Signal icon:

[Desktop Entry]
Name=Signal
GenericName=Signal Desktop Messenger
Exec=/home/amnesia/Persistent/signal.sh
Terminal=false
Type=Application
Icon=/home/amnesia/.local/share/flatpak/app/org.signal.Signal/current/active/files/share/icons/hicolor/128x128/apps/org.signal.Signal.png

Screenshot of Signal Application Icon Tails 4


  1. It is also possible to configure additional system wide installation locations, details are documented in flatpak-installation(5) [return]

03 October, 2019 08:52AM

Mike Gabriel

Debian Edu FAI

Over the past month I worked on re-scripting the installation process of a Debian Edu system (minimal installation profile and workstation installation profile for now) by utilizing FAI [1].

My goal on this is to get the Debian Edu FAI config space into Debian bullseye (as package: debian-edu-fai) and provide an easy setup method for the FAI installation server on an existing Debian Edu site.

Note: I do not intend to bootstrap a complete Debian Edu site via FAI. The use case is: get your Debian Edu main server up and running, add host faiserver.intern and install all your site's client systems via this FAI installation server.

Debian Edu Installation Methods (until today)

Currently, we only have a D-I based installation method (over PXE or ISO image) at hand with several disadvantages:

  • requires interaction
  • not really customizable
  • comparatively slow (now that I have seen FAI do these things)

All of the above problems can be solved by installing Debian Edu via a FAI configuration.

Debian Edu Installation via FAI (This rocks so much!!!)

As you may guess (and I need to repeat the above, because I am so excited about it), here are the advantages of installing Debian Edu via FAI:

  • Debian Edu installation via FAI is incredibly fast
  • Customization: drop in some more files into the FAI config space and you have a customized setup. [2]
  • FAI supports zero-click installs, so no interaction is required beyond booting via PXE
  • FAI supports stuffing the FAI installation bootstrap system into a bootable ISO image

Get it!

The whole setup process of a FAI server on a Debian Edu network still requires some documentation and testing, but I have already provided the FAI config space on Debian's GitLab server:

     https://salsa.debian.org/debian-edu/debian-edu-fai/

Have fun with this and provide feedback, if you try this out. Thanks!

light+love
Mike

References and Footnotes

  • [1] https://fai-project.org
  • [2] For our local "IT-Zukunft Schule" project I added several FAI config extensions without having to touch the Debian Edu FAI configuration files.

03 October, 2019 07:24AM by sunweaver

October 02, 2019

Gunnar Wolf

Presenting a webinar: Privacy and anonymity: Requisites for individuals' security online

I was invited by the Mexican Chapter of the Internet Society (ISOC MX) to present a webinar session addressing the topics that motivated the project I have been involved for the past two years — And presenting some results, what we are doing, where we are heading.

ISOC's webinars are usually held via the Zoom platform. However, I felt that would be directly adversarial to what we are doing; we don't need to register with a videoconference provider if we can use Jitsi! So the webinar will be held at https://meet.jit.si/WebinarISOC. Of course, I am aware that if we reach a certain threshold, Jitsi will stop giving quality service, so I will also mirror it to a "YouTube live" thingy. I am not sure if this will be the right URL, but I think it will be here.

Of course, I will later download the video and publish it in a site that tracks users less than YouTube :-]

So, if you are interested — See you there on 2019.10.16, 19:00 (GMT-5).

02 October, 2019 04:55PM by gwolf

Mike Gabriel

My Work on Debian LTS/ELTS (September 2019)

In September 2019, I worked on the Debian LTS project for 11 hours (of 12 hours planned) and on the Debian ELTS project for another 2 hours (of 12 hours planned) as a paid contributor. I have given back the 10 ELTS hours, but will keep the 1 LTS hour and move it over to October. As I will be gone on family vacation for two weeks of October, I have reduced my workload for the coming months accordingly (10 hours LTS, 5 hours ELTS).

LTS Work

  • Patch review on qemu (regarding DLA-1927-1)
  • Perform regression tests on previous LTS uploads of 389-ds-base (see [1,2] for results/statements)
  • Upload netty 3.2.6.Final-2+deb8u1 to jessie-security (DLA-1941-1 [3]), fixing 1 CVE
  • Triage nghttp2, probably not affected by CVE-2019-9511 and CVE-2019-9513. The code base is really different around the passages where the fixing patches have been applied by upstream. I left a comment in dla-needed.txt plus asked for a second opinion. [4]
  • Go over all 2019 LTS announcements in the webwml.git repository and ping LTS team members (including myself) on missing webwml DLAs.
  • Upload phpbb3 3.0.12-5+deb8u4 to jessie-security (DLA-1942-1 [5]), fixing 1 (or 2) CVE(s). Regarding the phpbb3 upload, Sylvain Beucler and I are currently discussing [6] whether CVE-2019-13376 got actually fixed with this upload or not. There will be some sort of follow-up announcement on this matter soon.

ELTS Work

  • Upload netty 3.2.6.Final-2+deb7u1 to wheezy-lts (ELA-168-1 [7]), fixing 1 CVE

References

02 October, 2019 02:23PM by sunweaver

October 01, 2019

Ben Hutchings

Debian LTS work, September 2019

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I prepared and, after review, released Linux 3.16.74, including various security and other fixes. I then rebased the Debian package onto that. I uploaded that with a small number of other fixes and issued DLA-1930-1.

I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1940-1 for that.

01 October, 2019 02:00PM

Mike Gabriel

Install ActivInspire Smart Board Software on Debian 10

From one of my customers, I received the request to figure out an installation pathway for ActivInspire, the Promethean smart board software suite. ActivInspire is offered as DEB builds for Ubuntu 18.04. On a Debian 10 (aka buster) system the installation requires some hack-around (utilizing packages from Debian jessie LTS).

Here is the quick-n-dirty recipe:

APT Key for "Promethean Ltd <support@prometheanworld.com>"

The APT key you need for downloading packages from Promethean's package archive can be obtained like this:

$ gpg --search-keys 0x300035F2484C6FED
$ gpg --export -a 0x300035F2484C6FED | sudo apt-key add -

Afterwards, you should find the key added to APT's GnuPG keyring. Verify that:

$ sudo apt-key adv --fingerprint D3CDA26CC37F568DD4A8DE68300035F2484C6FED
Executing: /tmp/user/0/apt-key-gpghome.HMo8gCMGUG/gpg.1.sh --fingerprint D3CDA26CC37F568DD4A8DE68300035F2484C6FED
pub   rsa4096 2017-03-02 [SC]
      D3CD A26C C37F 568D D4A8  DE68 3000 35F2 484C 6FED
uid        [ unknown ] PrometheanLtd <support@prometheanworld.com>
sub   rsa4096 2017-03-02 [E]

Tweak APT's Installation Sources

Next, add the lines below to a new file called /etc/apt/sources.list.d/promethean.list. The software will need to grab some packages (e.g. libssl1.0.0) from Debian jessie:

deb http://deb.debian.org/debian/ jessie main non-free contrib
deb http://security.debian.org/ jessie/updates main contrib non-free
deb http://activsoftware.co.uk/linux/repos/driver/ubuntu/ bionic non-oss oss

Note that security support for Debian jessie LTS will end on 23rd June 2020. Until then, you should be safe with package dependencies from Debian jessie LTS; after that you are on your own. (One might try to grab libssl1.0.0 from Ubuntu 18.04, which should receive security support until April 2023.)

Install ActivDriver and ActivTools

Now you can install the ActivInspire smart board software:

$ sudo apt install activdriver activtools

Optional: Disable jessie Package Source again

If you are scared of more packages pouring in from Debian jessie LTS, you can safely comment out the lines in /etc/apt/sources.list.d/promethean.list again, now that the smart board software has been installed. (You will then no longer get security updates for the packages that activdriver and activtools pulled in from Debian jessie LTS, though.)

Disclaimer

That things worked out here does not mean that they will work for you. Neither is this an official Promethean post / documentation. Don't ping me for support on this, unless you are ready to book me for commercial support.

Have fun!
Mike Gabriel (aka sunweaver at debian.org)

01 October, 2019 01:17PM by sunweaver

Junichi Uekawa

From today, value added tax rate increased in Japan.

From today, the value added tax rate has increased in Japan. It is the first time we have a variable tax rate, depending on how you consume: inside the restaurant or outside.

01 October, 2019 11:01AM by Junichi Uekawa

Abhijith PA

Debian packaging session

Hello web,

Group photo

Last week I conducted a workshop on Debian packaging at MES College of Engineering, Kuttipuram, as part of Frisbee 19, the yearly conference by the IEEE cell of this college. Thanks to Anupa from ICFOSS, who contacted me and arranged for me to take this session. I was accompanied by Subin and Abhijith from FOSSers. The time span was from 9:30 AM to 04:30 PM. Since it was a big time slot, we covered everything from free software evangelism to GNU/Linux to Debian to how contributing to community projects can help your career.

Subin introduced Debian history, philosophy and release processes to the students. I started with packaging a hello-world program and later moved on to Ruby gem packaging with gem2deb. Abhijith helped students who got stuck while packaging. At the end of the session we did a small quiz and gifted them Debian stickers and conference merchandise.

Thanks to the volunteers for setting up the prerequisites.

01 October, 2019 04:37AM

Norbert Preining

10 years in Japan

Exactly 10 years ago, on October 1, 2009, I started my work at the Japan Advanced Institute of Science and Technology (JAIST), having arrived the previous day in a place not completely unknown, but with a completely different outlook: I had a position as Associate Professor, and was looking forward to an interesting and challenging time. Much has changed since then, and I thought a bit of reflection was in order.

Four years ago I wrote a similar blog post, 6 years in Japan. Rereading it today, there is considerable overlap:

6 years later I am still here at the JAIST, but things have changed considerably, and my future is even less clear than 6 years ago.

How true that was back then. Little did I know that within a few months of posting it, JAIST, in a move to promote internationalization, would purge all but one Western foreigner from the faculty (outside the English department), and I would find myself unemployed, with a new-born child, not knowing what to do or where to go. It relates directly to the paragraph below on The biggest disappointment. How I can laugh now at what I considered my biggest disappointment back then, and at how I felt half a year later.

The biggest disappointment

Asked today about the biggest disappointment, it would clearly be the Japanese academic environment. I have never seen such selfish and reckless (or perhaps better: careless) scientists, with no interest in the fate of colleagues with whom they have worked for years. Having found myself unemployed in Japan with a new-born child, guess how many of my colleagues dared to even once ask how I was doing!? The answer is an impressive zero, naught.

Comparing this with the academic environment in which I grew up in Vienna, I was left dumbfounded: to this day I try to find positions for people who have been employed in my projects, and the group in Vienna always tried to help each other even in hard times, bridging gaps by shifting people between projects. I can't imagine any of my colleagues from my home university not even asking after a colleague in trouble.

Well, that is Japanese academia; I have lost all trust and faith in it.

The happiest thing

Back then I wrote that despite many hardships, the happiest thing was that I had found a lovely, beautiful, and caring wife. To top that, we got a lovely (and lively, but also challenging, at times nasty, etc. etc.) daughter who has changed our life considerably. In the three-plus years since she has been with us, many things have become considerably more difficult, and bringing up a child brings out cultural differences and disagreements much more than living as a couple. But the love and fun we receive from our time together is for sure the happiest thing (for now, until I write another blog post in 10 years?).

Present and future

After losing my job at JAIST, and six months of unemployment, a lucky coincidence gifted me a great job at an IT company in Tokyo that allows me to work remotely from home. I am incredibly thankful to everyone there who helped make this happen. It is a completely new world for me. After 25 years in academia, being thrown into a Japanese company (all Japanese; I am the only foreigner), with business meetings, client support, etc., was something unexpected for me. Maybe I can count it as one of my big achievements that I manage to function properly in this kind of environment.

I still try to keep up my research work, publishing articles every year, and as far as possible attending conferences. My OSS activities haven’t changed a lot, and I try to keep up with the projects for which I am responsible.

What the future brings is even less clear: now that we have to think about our daughter's education, moving is becoming more and more a point of discussion. I really detest the Japanese education system, in particular junior high school, which I consider a childhood and personality killer. OTOH, we have settled into a very nice place here in Ishikawa, and at my age moving is becoming more and more burdensome, not to speak of another job change. So I feel torn between returning to Europe and remaining here in Japan. Let us see what the future brings.

01 October, 2019 04:06AM by Norbert Preining

Russ Allbery

Review: This Is How You Lose the Time War

Review: This Is How You Lose the Time War, by Amal El-Mohtar & Max Gladstone

Publisher: Saga
Copyright: 2019
ISBN: 1-5344-3101-2
Format: Kindle
Pages: 200

Red is the most effective operative of the Agency. She darts through time's threads, finds threats to the future, eliminates them, and delights in the work. She rarely encounters the operatives of her enemy directly; they prefer painstaking work in the shadows. But there is one opponent who has a different style. Audacious. Risky.

In the midst of a dead battlefield, Red finds a letter.

Blue is Garden's operative, moving from mission to mission, exerting exactly the right pressure or force at a critical moment to shift the strands of the future. She decided to leave a letter taunting her adversary, but also expressing gratitude for the challenge, the requirement that she give the war her full attention, the relief from boredom. She wasn't sure whether to expect a reply, but she received one.

This Is How You Lose the Time War is an epistolary novel, told in short action sequences by Red or Blue followed by the inevitable discovered letter. At first, they taunt each other and delight in their victories while expressing admiration of their opponent. Blue has the smoother and more comfortable writing style. Red has to research the form of letters and writes like a conversation, sharp and informal. Both threaten and tease the other with the consequences if their superiors discover this exchange.

In word play, cultural references, sincerely-shared preferences, open curiosity, and audacious puns, the letters turn into something more than a taunting game.

The time war is a long-standing SF trope. This one reminds me the most of Fritz Leiber's The Big Time: a two-sided war between far-future civilizations, neither of which is clearly superior in either capabilities or morality. Unlike Leiber's Spiders and Snakes, though, El-Mohtar and Gladstone's Agency and Garden have some solid world-building behind them. Red's Agency is technological, cybernetic, and run by what feels like machine intelligence. Blue's Garden is the biological flip-side, a timeline of crafted life culminating in stars with eyes and a living universe, focused on growth and poison, absorbing and reshaping. To the reader, they alternate between incomprehensible and awful, although Red and Blue are comfortable with their sides at the start. Don't expect detailed or believable descriptions of the technology of either side; this is well into "indistinguishable from magic" territory throughout.

Despite its nature as a time travel story, the plot structure of this story is straightforward and somewhat predictable. You're unlikely to be surprised by the outcome; the enjoyment is in how the story gets there. The relationship didn't quite ring true to me, mostly because it develops so quickly, although some of that has to be forgiven for the format. (I have some experience with epistolary relationships; they're much more rambling and involve far, far more words than this one does.) But the letters themselves are playful, delightful, and occasionally moving, and the resolution, although expected, delivers on the emotional hooks the story was setting up.

I wasn't blown away by this, partly I think because it's too tight, focused, and stylized. Red and Blue are the only true characters in the story and the only people who feel real, which undermines the world-building and means the story can't sprawl into its surroundings or let the reader imagine other ways of living in this world. At 200 pages, it's more of a novella than a novel, and it's structured with the single-minded thrust of short fiction. The dynamic between the two characters is well-done, but there is a limit to how much characterization one can do with only a single other character to interact with. Since Red and Blue can define themselves only in relation to each other, they felt two-dimensional and I was unable to fully embrace either of them as a character.

That said, I read the whole story in an afternoon and did not regret it. I have a weakness for epistolary stories that this satisfied nicely. It hit, at least for me, the sweet spot of recognizing most of the cultural references while being surprised I recognized them, which was oddly satisfying. And the whole book is worth it for the growing tendency they both have for seeing and writing about each other's colors in everything.

I think this is more of an afternoon's entertainment than something you'll remember for a long time, but if you like time travel stories or characters writing letters to each other, recommended.

Rating: 7 out of 10

01 October, 2019 03:06AM

September 30, 2019

John Goerzen

Connecting A Physical DEC vt420 to Linux

John and Oliver trip to Vintage Computer Festival Midwest 2019. Oliver playing Zork on the Micro PDP-11

Inspired by a weekend visit to Vintage Computer Festival Midwest at which my son got to play Zork on an amber console hooked up to a MicroPDP-11 running 2BSD, I decided it was time to act on my long-held plan to get a real old serial console hooked up to Linux.

Not being satisfied with just doing it for the kicks, I wanted to make it actually usable. 30-year-old DEC hardware meets Raspberry Pi. I thought this would be pretty easy, but it turns out it was a lot more complicated than I realized, involving everything from nonstandard serial connectors to long-standing kernel bugs!

Selecting a Terminal — And Finding Parts

I wanted something in amber for that old-school feel. Sadly I didn’t have the forethought to save any back in the 90s when they were all being thrown out, because now they’re rare and can be expensive. Search eBay and pretty soon you find a scattering of DEC terminals, the odd Bull or Honeywell, some Sperrys, and assorted oddballs that don’t speak any kind of standard protocol. I figured, might as well get a vt, since we’re still all emulating them now, 40+ years later. Plus, my old boss from my university days always had stories about DEC. I wish he were still around to see this.

I selected the vt420 because I was able to find them, and it has several options for font size, letting more than 24 lines fit on a screen.

Now comes the challenge: most vt420s never had a DB25 RS-232 port. The VT420-J, an international model, did, but it is exceptionally rare. The rest use a DEC-specific connector called the MMJ. Thankfully, it is electrically compatible with RS-232, and I managed to find the DEC H8571-J adapter as well as the BC16E MMJ cable that I needed.

I also found a vt510 (with “paperwhite” instead of amber) in unknown condition. I purchased it, and thankfully it is also working. The vt510 is an interesting device; for that model, they switched to using a PS/2 keyboard connector, and it can accept either a DEC VT keyboard or a PC keyboard. It also supports full key remapping, so Control can be left of A as nature intended. However, there’s something about amber that is just so amazing to use again.

Preparing the Linux System

I thought I would use a Raspberry Pi as a gateway for this. With built-in wifi, that would let me ssh to other machines in my house without needing to plug in a serial cable – I could put the terminal wherever. Alternatively, I can plug in a USB-to-serial adapter to my laptop and just plug the terminal into it when I want. I wound up with a Raspberry Pi 4 kit that included some heatsinks.

I had two USB-to-serial adapters laying around: a Keyspan USA-19HS and a Digi I/O Edgeport/1. I started with the Keyspan on a Raspberry Pi 4 on the grounds that I didn’t have the needed Edgeport/1 firmware file laying about already. The Raspberry Pi does have serial capability integrated, but it doesn’t use RS-232 voltages and there have been reports of it dropping characters sometimes, so I figured the easy path would be a USB adapter. That turned out to be only partially right.

Serial Terminals with systemd

I have never set up a serial getty with systemd before; in fact, it has been quite a long while since I've done anything involving serial ports other than the occasional serial console (which serves a somewhat different purpose).

It would have taken a LONG time to figure this out, but thanks to an article about the topic, it was actually pretty easy in the end. I didn’t set it up as a serial console, but spawning a serial getty did the trick. I wound up modifying the command like this:

ExecStart=-/sbin/agetty -8 -o '-p -- \\u' %I 19200 vt420

The vt420 supports speeds up to 38400 bps and the vt510 up to 115200 bps. However, neither can process plain text faster than 19200, so there is no point in higher speeds. And, as you are about to see, they can't necessarily even muster 19200 all the time.
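For context, a drop-in override along these lines wires that ExecStart into systemd (a sketch based on the standard serial-getty@.service pattern; the ttyUSB0 instance name matches the adapter used later in this post, and the file path is the conventional drop-in location):

```ini
# /etc/systemd/system/serial-getty@ttyUSB0.service.d/override.conf
[Service]
# Clear the stock ExecStart, then substitute the 8-bit, 19200 baud,
# vt420-terminfo variant shown above.
ExecStart=
ExecStart=-/sbin/agetty -8 -o '-p -- \\u' %I 19200 vt420
```

`systemctl edit serial-getty@ttyUSB0.service` creates exactly such a file, and `systemctl enable --now serial-getty@ttyUSB0.service` starts the getty on the port.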

Flow Control: Oh My

The unfortunate reality with these old terminals is that the processor in them isn’t actually able to keep up with line speeds. Any speed above 4800bps can exceed processor capabilities when “expensive” escape sequences are sent. That means that proper flow control is a must. Unfortunately, the vt420 doesn’t support any form of hardware flow control. XON/XOFF is all it’ll do. Yeah, that stinks.

So I hooked the thing up to my desktop PC with a null-modem cable, and started to tinker. I should be able to send a Ctrl-S down the line and the output from the pi should immediately stop. It didn’t. Huh. I verified it was indeed seeing the Ctrl-S (open emacs, send Ctrl-S, and it goes into search mode). So something, somehow, was interfering.

After a considerable amount of head scratching, I finally busted out the kernel source. I discovered that the XON/XOFF support is part of the serial driver in Linux, and that — ugh — the keyspan serial driver never actually got around to implementing it. Oops. That’s a wee bit of a bug. I plugged in the Edgeport/1 instead of the Keyspan and magically XON/XOFF started working.

Well, for a bit.

You see, flow control is a property of the terminal that can be altered by programs on a running system. It turns out that a lot of programs have opinions about it, and those opinions generally run along the lines of “nobody could possibly be using XON/XOFF, so I’m going to turn it off.” Emacs is an offender here, but it can be configured. Unfortunately, the most nasty offender here is ssh, which contains this code that is ALWAYS run when using a pty to connect to a remote system (which is for every interactive session):

tio.c_iflag &= ~(ISTRIP | INLCR | IGNCR | ICRNL | IXON | IXANY | IXOFF);

Yes, so when you use ssh, your local terminal no longer does flow control. If you are particularly lucky, the remote end may recognize your XON/XOFF characters and process them. Unfortunately, the added latency and buffering in going through ssh and the network is likely to cause bursts of text to exceed the vt420’s measly 100-ish-byte buffer. You just can’t let the remote end handle flow control with ssh. I managed to solve this via GNU Screen; more on that later.
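The termios mechanics here can be seen in miniature with Python's standard pty and termios modules (a toy sketch, not from the original article): clear the same input flags that ssh clears and XON/XOFF is gone; set them again, as a getty for the vt420 would, and it is back.

```python
import os
import pty
import termios

# Create a pseudo-terminal pair to stand in for the real serial line.
master_fd, slave_fd = pty.openpty()

attrs = termios.tcgetattr(slave_fd)
IFLAG = 0  # index of the input-flags field in the tcgetattr() list

# Mimic the ssh snippet above: strip software flow control.
attrs[IFLAG] &= ~(termios.IXON | termios.IXANY | termios.IXOFF)
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)
no_flow = termios.tcgetattr(slave_fd)[IFLAG]

# Re-enable XON/XOFF, as a getty driving a vt420 would want it.
attrs[IFLAG] |= termios.IXON | termios.IXOFF
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)
with_flow = termios.tcgetattr(slave_fd)[IFLAG]

print((no_flow & (termios.IXON | termios.IXOFF)) == 0)  # flow control off
print((with_flow & termios.IXON) != 0)                  # XON honoured again

os.close(master_fd)
os.close(slave_fd)
```

Any process with the tty open can flip these bits at any time, which is exactly why a well-behaved local setting doesn't survive an ssh session.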

The vt510 supports hardware flow control! Unfortunately, it doesn’t use CTS/RTS pins, but rather DTR/DSR. This was a reasonably common method in the day, but appears to be totally unsupported in Linux. Bother. I see some mentions that FreeBSD supports DTR/DSR flow (dtrflow and dsrflow in stty outputs). It definitely looks like the Linux kernel has never plumbed out the reaches of RS-232 very well. It should be possible to build a cable to swap DTR/DSR over to CTS/RTS, but since the vt420 doesn’t support any of this anyhow, I haven’t bothered.

Character Sets

Back when the vt420 was made, it was pretty hot stuff that it was one of the first systems to support the new ISO-8859-1 standard. DEC was rather proud of this. It goes without saying that the terminal knows nothing of UTF-8.

Nowadays, of course, we live in a Unicode world. A lot of software crashes on ISO-8859-1 input (I’m looking at you, Python 3). Although I have old files from old systems that have ISO-8859-1 encoding, they are few and far between, and UTF-8 rules the roost now.

I can, of course, just set LANG=en_US and that will do — well, something. man, for instance, renders using ISO-8859-1 characters. But that setting doesn’t imply that any layer of the tty system actually converts output from UTF-8 to ISO-8859-1. For instance, if I have a file with a German character in it and use ls, nothing is going to convert it from UTF-8 to ISO-8859-1.

GNU Screen also, as it happens, mostly solves this.

GNU Screen to the rescue, somewhat

It turns out that GNU Screen has features that can address both of these issues. Here’s how I used it.

First, in my .bashrc, I set this:


if [ "$(tty)" = "/dev/ttyUSB0" ]; then
    stty -iutf8
    export LANG=en_US
    export MANOPT="-E ascii"
fi

Then, in my .screenrc, I put this:


defflow on
defencoding UTF-8

This tells screen that the default flow control mode is on, and that the default encoding for the pty that screen creates is UTF-8. Screen determines the encoding of the physical terminal from the environment, and correctly figures it to be ISO-8859-1. It then maps between the two! Yes!

My little ssh connecting script then does just this:

exec screen ssh "$@"

Which nicely takes care of the flow control issue and (most of) the encoding issue. I say "most" because now things like man will try to render with fancy em-dashes and the like, which have no representation in ISO-8859-1, so they come out as question marks. (Setting MANOPT="-E ascii" fixes this.) But no matter, it works to ssh to my workstation and read my email! (mu4e in emacs)

What screen doesn’t help with are things that have no ISO-8859-1 versions; em-dashes are the most frequent problems, and are replaced with unsightly question marks.

termcaps, terminfos, and weird things

So pretty soon you start diving down the terminal rabbit hole, and you realize there’s a lot of weird stuff out there. For instance, one solution to the problem of slow processors in terminals was padding: ncurses would know how long it would take the terminal to execute some commands, and would send it NULLs for that amount of time. That calculation, of course, requires knowledge of line speed, which one wouldn’t have in this era of ssh. Thankfully the vt420 doesn’t fall into that category.

But it does have a ton of modes. The Emacs On Terminal page discusses some of the interesting bits: 7-bit or 8-bit control characters, no ESC key, Alt key not working, etc, etc. I believe some of these are addressed by the vt510 (at least in PC mode). I wonder whether Emacs or vim keybindings would be best here…

Helpful Resources

30 September, 2019 09:59PM by John Goerzen

hackergotchi for Jonathan McDowell

Jonathan McDowell

Life with a Yubikey


At the past two DebConfs Thomas Goirand of infomaniak has run a workshop on using a Yubikey, and been generous enough to provide a number of devices for Debian folk. Last year I was fortunate enough to get hold of one of the devices on offer.

My primary use for the device is to hold my PGP key. Generally my OpenPGP hardware token of choice is the Gnuk, which features a completely Free software stack and an open hardware design, but the commonly available devices suffer from being a bit more fragile than I’d like to regularly carry around with me. The Yubikey has a much more robust design, being a slim plastic encapsulated device. I finally set it up properly with my PGP key last November, and while I haven’t attached it to my keyring I’ve been carrying it with me regularly.

Firstly, it’s been perfectly fine from a physical robustness point of view. I don’t worry about it being in my pocket with keys or change, it gets thrown into my bag at the end of the day when I go home, it kicks around my desk and occasionally gets stuff dropped on it. I haven’t tried to break it deliberately and I’m not careless with it, but it’s not treated with kid gloves. And it’s still around nearly a year later. So that’s good.

Secondly, I find my initial expected use case (holding my PGP subkeys and using the auth subkey for SSH access) is the major use I have for the key. I occasionally use the signing subkey for doing Debian uploads, I rarely use the encryption subkey, but I use the auth subkey most days. I’ve also setup U2F for any site I use that supports it, but generally once I’m logged in there on trusted machines I don’t need to regularly re-use it. It’s nice to have though, and something the Gnuk doesn’t offer.

On the down side, I still want a device that requires a physical key press for any signing operation. My preferred use case is leaving the key plugged into the machine to handle SSH logins, but the U2F use case seems to be to insert the key only when needed, and then press the key. OpenPGP operation with the Yubikey doesn’t require a physical touch. I get round some of this by enabling the confirm option with gpg-agent, but I’d still be happier with something on the token itself. The Yubikey also doesn’t do ECC keys, but it does do 4096-bit RSA so it’s not terrible, just results in larger keys than ideal.

Overall I’m happy with the device, and grateful to Thomas and infomaniak for providing me with it, though I’m hopeful about a new version of the Gnuk with a more robust form factor/casing. (If you’re looking for discussion on how to setup the token with GPG subkeys then I recommend Thomas’ presentation from 2018, which covers all the steps required.)

Update: It’s been pointed out to me by several people that the Yubikey can be configured to require a touch for OpenPGP usage; either using ykman or yubitouch.

30 September, 2019 07:33PM

hackergotchi for Ben Hutchings

Ben Hutchings

Kernel Recipes 2019, part 1

This conference only has a single track, so I attended almost all the talks. This time I didn't take notes but I've summarised all the talks I attended.

Updated: Noted slides are available for all talks. Added links to the video streams.

ftrace: Where modifying a running kernel all started

Speaker: Steven Rostedt

Details and slides: https://kernel-recipes.org/en/2019/talks/ftrace-where-modifying-a-running-kernel-all-started/

Video: Youtube

This talk explains how the kernel's function tracing mechanism (ftrace) works, and describes some of its development history.

It was quite interesting, but you probably don't need to know this stuff unless you're touching the ftrace implementation.

Analyzing changes to the binary interface exposed by the Kernel to its modules

Speakers: Dodji Seketeli, Jessica Yu, Matthias Männich

Details and slides: https://kernel-recipes.org/en/2019/talks/analyzing-changes-to-the-binary-interface-exposed-by-the-kernel-to-its-modules/

Video: Youtube

The upstream kernel does not have a stable ABI (or API) for use by modules, but OS distributors often want to support the use of out-of-tree modules by ensuring that at least some subset of the kernel ABI remains stable within a given OS release.

Currently the kernel build process generates a "version" or "CRC" for each exported symbol by parsing the relevant type definitions. There is a load-time ABI check based on comparing these, and distributors can compare them at build time to detect ABI breaks. However this doesn't work that well and it's hard to work out what caused a change.

The speaker develops the "libabigail" library and tools. These can extract ABI definitions from standard debug information (DWARF), and then analyse and compare the ABIs of different versions of a shared library, or of the Linux kernel and its modules. They are likely to replace the kernel's current symbol versioning approach at some point. He talked about the capabilities of libabigail, plans for improving it, and some limitations of C ABI checkers.

BPF at Facebook

Speaker: Alexei Starovoitov

Details and slides: https://kernel-recipes.org/en/2019/talks/bpf-at-facebook/

Video: Youtube

The Berkeley Packet Filter (BPF) is a simple virtual machine implemented by several kernels. It allows user-space to add code that runs in kernel context, without compromising the integrity of the kernel.

In recent years Linux has extended this virtual machine architecture to create eBPF, which is expressive enough to be targeted by general-purpose compilers such as Clang and (in the near future) gcc. eBPF can be used for filtering network packets (the original purpose of BPF), tracing events, and many other purposes.

The speaker talked about practical experiences using eBPF with tracing at Facebook. These mainly involved investigating performance problems. He also talked about the difficulties of doing this on production servers without developer tools installed, and how this is being addressed.

Kernel hacking behind closed doors

Speaker: Thomas Gleixner

Details and slides: https://kernel-recipes.org/en/2019/talks/kernel-hacking-behind-closed-doors/

Video: Youtube

The speaker talked about how kernel developers and hardware vendors have been handling speculative execution vulnerabilities, and the friction between the vendors' preferred process and the usual kernel development processes.

He described the mailing list manager he wrote to support discussion of security issues with a long embargo period, which sends and receives encrypted messages in both S/MIME and PGP/MIME formats (depending on the subscriber).

Finally he talked about the process that has been settled on for handling future issues of this type with minimal legal paperwork.

This was somewhat marred by a lawyer joke and a generally combative attitude to hardware vendors.

What To Do When Your Device Depends on Another One

Speaker: Rafael Wysocki

Details and slides: https://kernel-recipes.org/en/2019/talks/what-to-do-when-your-device-depends-on-another-one/

Video: Youtube

The Linux device model represents all devices as a simple hierarchy. Driver binding and unbinding (probe/remove), and power management operations, are sequenced based on the assumption that a device only depends on its parent in the device model.

On PCs, additional dependencies are often hidden behind abstractions such as ACPI, so that Linux does not need to be aware of them. On most embedded systems, however, such abstractions are usually missing and Linux does need to be aware of additional dependencies.

(A few years ago, the device driver core gained support for an error code from probe (-EPROBE_DEFER) that indicates that some dependency is not yet bound, and causes the device to be re-probed later. But this is an incomplete, stop-gap solution.)

The speaker described the new "device links" API which provides a way to record additional dependencies in the device model. The device driver core will use this information to sequence operations on multiple devices correctly.

Metrics are money

Speaker: Aurélien Rougemont

Details and slides: https://kernel-recipes.org/en/2019/metrics-are-money/

Video: Youtube

The speaker talked about several instances from his experience where system metrics were used to justify buying or rejecting new hardware. In some cases, these metrics were not accurate or consistent, which could lead to bad decisions. He made a plea for better documentation of metrics reported by the Linux kernel.

No NMI? No Problem! – Implementing Arm64 Pseudo-NMI

Speaker: Julien Thierry

Details and slides: https://kernel-recipes.org/en/2019/talks/no-nmi-no-problem-implementing-arm64-pseudo-nmi/

Video: Youtube

Linux typically uses Non-Maskable Interrupts (NMIs) for Performance Monitoring Unit (PMU) interrupts. NMIs are (almost) never disabled, so this allows interrupt handlers and other code that runs with interrupts disabled to be profiled accurately. On architectures that do not have NMIs, typically Linux can use the highest interrupt priority for this instead, and only mask the lower priorities.

On the Arm architecture, there is no NMI but there are two architectural interrupt priority levels (IRQ and FIQ). However on 64-bit Arm systems FIQ is typically reserved to system firmware so Linux only uses IRQ. This results in inaccurate profiling.

The speaker described the implementation of a pseudo-NMI for 64-bit Arm. This is done by leaving IRQs enabled on the CPU and masking them selectively on the Arm generic interrupt controller (GIC), which supports many more priority levels. However this effectively requires GIC v3 or v4 because these operations are prohibitively slow on earlier versions.

Marvels of Memory Auto-configuration (SPD)

Speaker: Jean Delvare

Details and slides: https://kernel-recipes.org/en/2019/marvels-of-memory-auto-configuration-also-known-as-spd/

Video: Youtube

The speaker talked about the history of standardised DRAM modules (SIMMs and DIMMs) and how system firmware can detect them and find out their size and timing requirements.

DIMMs expose this information through Serial Presence Detect (SPD) which until recently used standard 256-byte I²C EEPROMs.

For the latest generation of DIMMs (DDR4), the configuration information can be larger than 256 bytes, and a new interface was required. Jean described and criticised this interface.

He also talked about the Linux drivers and utilities that can be used to read the SPD EEPROMs.

30 September, 2019 05:28PM

Sylvain Beucler

RenPyWeb - one year

One year ago I posted a little entry in Ren'Py Jam 2018, which was the first-ever Ren'Py game directly playable in the browser :)

The Question Tutorial

Big thanks to Ren'Py's author who immediately showed full support for the project, and to all the other patrons who joined the effort!

One year later, RenPyWeb is officially integrated into Ren'Py with a one-click build, performance has improved, countless little fixes to the Emscripten technology stack have brought stability, and more than 60 games of all sizes have been published for the web.

RenPyWeb

What's next? I have plans to download resources on-demand (rather than downloading the whole game on start-up), to improve support for mobile browsers, and of course to continue the myriad of little changes that make RenPyWeb more and more robust. I'm also wondering about making our web stack more widely accessible to Pygame, so as to bring more devs into the wonderful world of python-in-the-browser and improve the tech ecosystem - let me know if you're interested.

Hoping to see great new Visual Novels on the web this coming year :)

30 September, 2019 05:01PM

Scarlett Gately Moore

Akademy! 2019 Edition

KDE Akademy 2019

 

I am happy to report yet another successful KDE Akademy! This makes my 5th Akademy 🙂 This year Akademy was held in beautiful Milan, Italy. As usual we had so many great talks; you can read all about them here:

https://dot.kde.org/2019/09/10/akademy-2019-talks-heres-what-you-missed

My trip was shortened again due to flight availability, but I still got in some great BoF sessions. We were able to achieve some tasks and goals with the Fundraising Working Group. I hung out with the Neon team for a few, and it was decided I will continue the Debian merge and keep the delta between Debian and Neon as minimal as possible. This helps all deb-based distributions in the end. I was also happy to see snaps are coming along nicely! There was a great BoF on user support, where we discussed trying to get users connected with the people who can answer their questions. I believe we landed on Discourse; we are now at the technical stage of making that happen.

The core of what makes Akademy so important is the networking of course. I was able to see many old friends and meet many new ones. I was so happy to see so many new faces this year! With each year our bunch has become more and more diverse, which is always a good thing. Face to face collaboration is very important in an environment where we mostly see text all day.

Until next year! Happy hacking and see you all around in the interwebs.

Scarlett

P.S. Stay tuned and I will have another post with everything I have been up to in the last year.

 

30 September, 2019 04:12PM by Scarlett Moore

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in September 2019

Here is my monthly update covering what I have been doing in the free software world during September 2019 (previous month):

  • Attended the launch event of OpenUK, a new organisation with the purpose of supporting the growth of free software, hardware and data. It was hosted at the House of Commons of the United Kingdom and turned out to be quite the night to be attending Parliament.

  • As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics, policy etc.

  • Made a number of changes to my tickle-me-email library to implement Getting Things Done-like behaviours in IMAP inboxes including:

    • Add support for a sendmail-like command. [...]
    • Don't require specifying the target of sent items in the send-later command [...] and decode messages correctly for the same command [...].
  • Opened pull requests to make the build reproducible in:

  • Opened a pull request for the memcached distributed memory object caching system to... correct the spelling of "ensure". [...]

  • More work on the Lintian static analysis tool for Debian packages, releasing versions 2.20.0, 2.21.0, 2.22.0, 2.23.0 & 2.24.0 as well as:


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
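The core idea can be demonstrated in a few lines: a build step is reproducible when the same input always yields bit-identical output. A minimal Python sketch (the input bytes are of course illustrative) using gzip, whose output normally varies because the header embeds a timestamp unless `mtime` is pinned:

```python
import gzip
import hashlib

def build(data: bytes) -> bytes:
    # Pinning mtime=0 drops the varying timestamp from the gzip header,
    # so the output depends only on the input bytes.
    return gzip.compress(data, mtime=0)

first = build(b"hello\n")
second = build(b"hello\n")

# Bit-for-bit identical outputs: two independent "builds" agree,
# so any third party can verify them by comparing checksums.
assert first == second
print(hashlib.sha256(first).hexdigest() == hashlib.sha256(second).hexdigest())
```

Without `mtime=0` the two calls could embed different timestamps and the checksums would disagree — exactly the kind of build non-determinism the project hunts down in real packages.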

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month I:


I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:

    • Add /srv/diffoscope/bin to the Docker image path. (#70)
    • When skipping tests due to the lack of an installed tool, print the package that might provide it. [...]
    • Update the "no progressbar" logging message to match the parallel "missing tlsh module" warnings. [...]
    • Update "requires foo" messages to clarify that they are referring to Python modules. [...]
  • Testsuite updates

    • The test_libmix_differences ELF binary test requires the xxd tool. (#940645)
    • Build the OCaml test input files on-demand rather than shipping them with the package in order to prevent test failures with OCaml 4.08. (#67)
    • Also conditionally skip the identification and "no differences" tests as we require the OCaml compiler to be present when building the test files themselves. (#940471)
    • Rebuild our test squashfs images to exclude the character device as they require root or fakeroot to extract. (#65) [...]
  • Code cleanups, including dropping some unnecessary control flow [...], dropping unnecessary pass statements [...] and dropping explicit inheritance from the object class as it is unnecessary in Python 3 [...].
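The last cleanup reflects a standard Python 3 idiom: every class is already a new-style class, so explicitly inheriting from object is redundant. A tiny illustration (class names are hypothetical):

```python
class Explicit(object):  # the Python 2 spelling; redundant in Python 3
    pass

class Implicit:  # idiomatic Python 3; identical behaviour
    pass

# Both are new-style classes with object at the root of their MRO.
assert Explicit.__mro__ == (Explicit, object)
assert Implicit.__mro__ == (Implicit, object)
```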



Debian


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the projects via the following video:


Uploads

  • redis (5.0.6-1) — New upstream release

  • python-django:

  • aptfs:

    • 1.0.0:
      • Port to Python 3.x. (#936131)
      • Move to a native package and import the external Debian packaging into this repository.
      • Add a pyproject.toml and apply the black source code formatter to the source tree.
      • Drop TODO file; we use our code hosting platform's issue tracker now.
    • 1.0.1 — Fix opening/reading of files after Python 3.x migration.
  • gunicorn:

    • 19.9.0-2 — Drop support for Python 2.x; the gunicorn package now provides the Python 3.x version. (#936679)
    • 19.9.0-3 — Port autopkgtests to Python 3.x.
    • 19.9.0-4 — Add a /usr/bin/gunicorn3 → /usr/bin/gunicorn compatibility symlink. (#939409)
  • installation-birthday (13):

    • Don't use the deprecated platform library. (#940803)
    • Add a gitlab-ci.yml.
    • Misc coding updates, including using the logging module's own string interpolation, not inheriting from object, etc.
  • libfiu:

    • 1.00-1:

    • 1.00-2 — Also drop Python 2 support in the autopkgtests.

    • 1.00-3 — Patch the upstream Makefile to not build the Python 2.x bindings to ensure the tests pass.

  • memcached:

    • 1.5.17-1:
      • Adopt package. (#939425)
      • New upstream release. (#924584, #939337, #879797, #835456, #789835)
      • Source /etc/default/memcached in /etc/init.d/memcached. (#934542)
      • Add a Pre-Depends on ${misc:Pre-Depends} to ensure a correct dependency on init-system-helpers for the --skip-systemd-native flag.
      • Install README.damemtop to /usr/share/doc/memcached instead of under /usr/share/memcached.
    • 1.5.17-2:
      • In the systemd .service file, specify a PIDFile under /run.
      • Add missing ${perl:Depends} to binary dependencies.
    • 1.5.18-1 — New upstream release
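The Pre-Depends change above amounts to a stanza in debian/control along these lines (a sketch; the surrounding binary-package fields are elided):

```
Package: memcached
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}
```

At build time debhelper substitutes ${misc:Pre-Depends} with the appropriate versioned dependency on init-system-helpers, so the --skip-systemd-native flag is guaranteed to be available when the maintainer scripts run.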

New upstream releases of bfs (1.5.1-1), django-auto-one-to-one (3.2.0-1), python-daiquiri (1.6.0-1), python-hiredis (1.0.0-1) and python-redis (3.3.7-1).

Finally, I sponsored uploads of adminer (4.7.3-1) and python-pyocr (0.7.2-1).


FTP Team

As a Debian FTP assistant I ACCEPTed 33 packages: crypto-policies, firmware-tomu, gdmd, golang-github-bruth-assert, golang-github-paypal-gatt, golang-github-rivo-uniseg, golang-github-xlab-handysort, golang-gopkg-libgit2-git2go.v28, icingaweb2-module-audit, icingaweb2-module-boxydash, icingaweb2-module-businessprocess, icingaweb2-module-cube, icingaweb2-module-director, icingaweb2-module-eventdb, icingaweb2-module-graphite, icingaweb2-module-map, icingaweb2-module-nagvis, icingaweb2-module-pnp, icingaweb2-module-statusmap, icingaweb2-module-x509, lazygit, ldh-gui-suite, meep, minder, node-solid-jose, ocaml-charinfo-width, ocaml-stdcompat, ppxfind, ppxlib, printrun, python-securesystemslib, sshesame & tpm2-initramfs-tool.

I additionally filed 6 RC bugs for potentially-incomplete debian/copyright files against crypto-policies, golang-github-paypal-gatt, icingaweb2-module-graphite, icingaweb2-module-statusmap, minder & printrun.

30 September, 2019 03:53PM


Jonathan Carter

Free Software Activities (2019-09)

It’s been a busy month on a personal level, so a bunch of my Debian projects have been stagnant this month; I hope to fix that over October/November.

Upload sponsoring: This month, when sponsoring package uploads for Debian, I prioritised Python team uploads above mentors.debian.net uploads (where I usually spend my reviewing attention). The Python 2 deprecation is turning out to be a lot of work so I think the Python team can do with a lot more support from everyone at this point.

DebConf: I resigned from the DebConf Committee; I might consider joining again if a position opens up in the future. I’m not going to DC20, so it seems like a good time to cut back a bit to help me focus more on my technical projects. I’ll still be involved in the DebConf team. Over the next DebConf cycle I’ll still be involved in bursaries and want to cover a whole bunch of documentation and policy improvements that are sorely needed. I also want to finish up the ToeTally integration with Voctomix for the video team and hopefully try it out at a minidebconf within the next year.

Debian Live: calamares-settings-debian has been updated for bullseye, although as of this time we don’t have new images available with that yet. I started looking into the vmdebootstrap deprecation; it’s going to be more work than I originally thought, so there’s a good possibility we might be switching to FAI for generating live images. I have a script called debmower that works ok and creates good images, but it’s a somewhat hacky shell script, and if I ever had the time to rewrite it in Python I might propose that too. Unfortunately finding the time to maintain more things is hard, so I think FAI is the way to go. Isabelle Simpkins created testing artwork so that Debian testing images are easier to differentiate from the last stable release. These will be replaced in Debian as soon as the next release artwork is available.

Activity log:

2019-09-09: Upload package gdisk (1.0.4-2) to debian unstable (Adopting package, closes #939421).

2019-09-09: Upload package calamares (3.2.13-1) to debian unstable.

2019-09-09: Upload package gnome-shell-extension-dash-to-panel (23-1) to debian unstable.

2019-09-09: Upload package toot (0.23.1) to debian unstable.

2019-09-09: File upstream bug for toot crash when launching in tui mode (Toot #124).

2019-09-10: Upload package bluefish (2.2.10-2) to debian unstable (Adopting package, Closes: #922891, #936220).

2019-09-10: Seek feedback on bugs #844449, #852733.

2019-09-11: File removal of pythonqt from debian unstable (BTS: #940025).

2019-09-11: Orphan package golang-gopkg-flosch-pongo2.v3 (BTS: #940030).

2019-09-16: Upload package python3-aniso8601 (8.0.0-1) to debian unstable.

2019-09-16: Upload package gnome-shell-extension-remove-dropdown-arrows (12-1) to debian unstable.

2019-09-16: Upload package bluefish (2.10-3) to debian unstable.

2019-09-16: Upload package gnome-shell-extension-move-clock (1.01-2) to debian unstable.

2019-09-16: Upload package tanglet (1.5.4-2) to debian unstable.

2019-09-16: Upload package gdisk (1.0.4-3) to debian unstable.

2019-09-16: Upload package tetzle (2.1.4+dfsg1-3) to debian unstable.

2019-09-16: Upload package bcachefs-tools (0.1+git20190829.aa2a42b-1~exp1) to debian unstable.

2019-09-16: Review package python-flask-jwt-extended (3.21.0-1) (needs some work) (mentors.debian.net request).

2019-09-16: Sponsor package flask-jwt-simple (0.0.3-1) for debian unstable (mentors.debian.net request, RFS: #940102).

2019-09-16: Sponsor package python3-fastentrypoints (0.12-1) for debian experimental (mentors.debian.net request, RFS: #934054).

2019-09-16: Sponsor package python3-netsnmpagent (0.6.0-1) for debian experimental (mentors.debian.net request, RFS: #934056).

2019-09-16: Review package pydevd (1.6.1+git20190712.1267523+dfsg) (mentors.debian.net request), recommend that another reviewer give it a second pass.

2019-09-16: Sponsor package python3-aiosqlite (0.10.0-1) for debian unstable (mentors.debian.net request, RFS: #927702).

2019-09-16: Upload package python3-flask-silk (0.2-14) to debian unstable.

2019-09-16: Sponsor package membernator (1.0.1-1) for debian unstable (Python team request).

2019-09-16: Sponsor package cosmiq (1.6.0-1) for debian unstable (mentors.debian.net request).

2019-09-16: Sponsor package micropython (1.11-1) for debian unstable (mentors.debian.net request, RFS: #939189).

2019-09-16: Sponsor package oomd (0.1.0-1) for debian unstable (mentors.debian.net request, RFS: #939096).

2019-09-16: Sponsor package python3-enc (0.4.0-5) for debian unstable (Python team request).

2019-09-16: Review package pcapy () (needs some more work) (Python team request).

2019-09-16: Review package impacket () (needs some more work) (Python team request).

2019-09-16: Sponsor package python-guizero (1.0.0+dfgs1-1) (Python team request).

2019-09-17: Sponsor package sentry-python (0.9.5-2) for debian unstable (Python team request).

2019-09-17: Sponsor package supysonic (0.4.1-1) for debian unstable (Python team request).

2019-09-17: Sponsor package python3-aiohttp-wsgi (0.8.2-2) for debian unstable (Python team request).

2019-09-17: Sponsor package python3-onedrivesdk (1.1.8-1) for debian experimental (Python team request).

2019-09-17: Review package python3-ptvsd (4.3.0+dfsg-1) (needs some more work) (Python team request).

2019-09-17: Review package python3-flask-jwt-extended (3.21.0-1) (needs some more work) (Python team request).

2019-09-17: Review package python3-pydevd (1.7.1+dfsg-1) (needs some more work) (Python team request).

2019-09-17: Sponsor package python3-bidict (0.18.2-1) for debian unstable (Python team request).

2019-09-18: Upload package python3-enc (0.4.0-4) to debian unstable.

2019-09-18: Sponsor package python3-pydevd (1.7.1+dfsg1) for debian unstable (Python team request).

2019-09-18: Sponsor package python-aiohttp (3.6.0-1) for debian unstable (Python team request).

2019-09-18: Review package py-postgresql (1.2.1+git20180803.ef7b9a9-1) (needs some more work) (Python team request).

2019-09-18: Review package irker (2.18+dfsg-4) (needs some more work) (Python team request).

2019-09-18: Sponsor package py-postgresql (1.2.1+git20180803.ef7b9a9-1) for debian unstable (Python team request).

2019-09-18: Upload package irker (2.18+dfsg-4) to debian unstable (team upload / Python team sponsor request).

2019-09-18: Sponsor package sphinx-autodoc-typehints (1.8.0-1) for debian unstable (Python team request).

2019-09-18: Sponsor package python3-sentry-sdk (0.12.0-1) for debian unstable (Python team request).

2019-09-19: Review package vonsh (1.0) (needs some more work) (mentors.debian.net request).

2019-09-19: Upload package live-tasks (11.0.1) to debian unstable (Closes: #932780, #936953, #934522).

2019-09-19: Upload package python3-flask-autoindex (0.6.2-2) to debian unstable (Closes: #936523).

2019-09-19: Upload package python3-flask-autoindex (0.6.2-3) to debian unstable (Re-opens: #936523).

2019-09-20: Upload package gamemode (1.5~git20190812-107d469-1~exp1) to debian experimental.

2019-09-20: Upload package gnome-shell-extension-remove-dropdown-arrows (13-1) to debian unstable.

2019-09-20: Sponsor package django-sortedm2m (2.0.0dfsg.1-1) for debian experimental (Python team request).

2019-09-20: Sponsor package python3-anosql (1.0.1-1) for debian unstable (Python team request).

2019-09-23: Upload package gnome-shell-extension-disconnect-wifi (21-1~exp1) to debian experimental.

2019-09-23: Upload package toot (0.24.0-1) to debian unstable.

2019-09-23: Upload package gamemode (1.5~git20190812-107d469-1~exp2) to debian experimental.

2019-09-23: Review package python3-pympler () (needs some more work) (Python team request).

2019-09-23: Close previously fixed bug #914044 in tuxpaint.

2019-09-23: Upload package kpmcore (4.0.0-1~exp1) to debian experimental.

2019-09-23: Upload package kpmcore (4.0.0-1~exp2) to debian experimental.

2019-09-25: Sponsor package assaultcube-data (1.2.0.2.1-3) for debian unstable (mentors.debian.net request).

2019-09-25: Sponsor package assaultcube (1.2.0.2.1-2) for debian unstable (mentors.debian.net request).

2019-09-25: Review package cpupower-gui (0.7.0-1) (needs some more work) (mentors.debian.net request).

2019-09-25: Sponsor package pympler (0.7+dfsg1-1~exp1) for debian experimental (Python team request).

2019-09-25: Sponsor package sentry-python (0.12.2-1) for debian unstable (Python team request).

2019-09-25: Sponsor package python-aiohttp (3.6.1-1) for debian unstable (Python team request).

2019-09-25: Upload package calamares-settings-debian (11.0.1-1) to debian unstable.

2019-09-25: Merge MR#2 for live-wrapper (Debian BTS: #866183).

2019-09-25: File bug #941131 against qa.debian.org (“Make outstanding MRs more visible in DDPO pages”).

2019-09-25: Sponsor package color-theme-modern (0.0.2+4.g42a7926-1) for debian unstable (RFS: #905246) (mentors.debian.net request).

2019-09-26: Sponsor package python3-flask-jwt-extended for debian unstable (RFS:#940075) (mentors.debian.net request).

2019-09-26: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp1) to debian experimental.

2019-09-26: Review package python3-in-toto (0.4.0-1) (needs some more work) (mentors.debian.net request).

2019-09-30: Forward Calamares bug #941301 “write two random seeds to locations for urandom init script and systemd-random-seed service” to upstream bug #1252.

2019-09-30: Sponsor package color-theme-modern (0.0.2+4.g42a7926-1) for debian unstable (RFS: #905246) (mentors.debian.net request).

30 September, 2019 01:27PM by jonathan