July 16, 2018

Russ Allbery

Review: Effective Python

Review: Effective Python, by Brett Slatkin

Publisher: Addison-Wesley
Copyright: 2015
ISBN: 0-13-403428-7
Format: Trade paperback
Pages: 216

I'm still looking for a programming language book that's as good as Joshua Bloch's Effective Java, which goes beyond its surface mission to provide valuable and deep advice about how to think about software construction and interface design. Effective Python is, sadly, not that book. It settles for being a more pedestrian guide to useful or tricky corners of Python, with a bit of style guide attached (although not as much as I wanted).

Usually I read books like this as part of learning a language, but in this case I'd done some early experimenting with Python and have been using it extensively for my job for about the past four years. I was therefore already familiar with the basics and with some coding style rules, which made this book less useful. This is more of an intermediate than a beginner's book, but if you're familiar with list and hash comprehensions, closures, standard method decorators, context managers, and the global interpreter lock (about my level of experience when I started reading), at least half of this book will be obvious and familiar material.

The most useful part of the book for me was a deep look at Python's object system, including some fully-worked examples of mix-ins, metaclasses, and descriptors. This material was new to me and a bit different than the approach to similar problems in other programming languages I know. I think this is one of the most complex and hard-to-understand parts of Python and will probably use this as a reference the next time I have to deal with complex class machinery. (That said, this is also the part of Python that I think is the hardest to read and understand, so most programs are better off avoiding it.) The description of generators and coroutines was also excellent, and although the basic concepts will be familiar to most people who have done parallelism in other languages, Slatkin's treatment of parallelism and its (severe) limitations in Python was valuable.

But there were also a lot of things that I was surprised weren't covered. Some of these are due to the author deciding to limit the scope to the standard library, so testing only covers unittest and not the (IMO far more useful) pytest third-party module. Some are gaps in the language that the author can't fix (Python's documentation situation for user-written modules is sad). But there was essentially nothing here about distutils or how to publish modules properly, almost nothing about good namespace design and when to put code into __init__.py (a topic crying out for some opinionated recommendations), and an odd lack of mention of any static analysis or linting tools. Most books of this type I've read are noticeably more comprehensive and have a larger focus on how to share your code with others.

Slatkin doesn't even offer much of a style guide, which is usually standard in a book of this sort. He does steer the reader away from a few features (such as else with for loops) and preaches the merits of decomposition and small functions, among other useful tidbits. But it falls well short of Damian Conway's excellent guide for Perl, Perl Best Practices.

Anyone who already knows Python will be wondering how Slatkin handles the conflict between Python 2 and Python 3. The answer is that it mostly doesn't matter, since Slatkin spends little time on the parts of the language that differ. In the few places it matters, Effective Python discusses Python 3 first and then mentions the differences or gaps in Python 2. But there's no general discussion about differences between Python 2 and 3, nor is there any guide to updating your own programs or attempting to be compatible with both versions. That's one of the more common real-world problems in Python at the moment, and was even more so when this book was originally written, so it's an odd omission.

Addison-Wesley did a good job on the printing, including a nice, subtle use of color that made the physical book enjoyable to read. But the downside is that this book has a surprisingly high retail price ($40 USD) for a fairly thin trade paperback. At the time of this writing, Amazon has it on sale at 64% off, which takes the cost down to about the right territory for what you get.

I'm not sorry I read this, and I learned a few things from it despite having used Python fairly steadily for the last few years. But it's nowhere near good enough to recommend to every Python programmer, and with a bit of willingness to search out on-line articles and study high-quality code bases, you can skip this book entirely and never miss it. I found it oddly unopinionated and unsatisfying in the places where I wish Python had more structure or stronger conventions. This is particularly strange given that it was written by a Google staff engineer and Google has a quite comprehensive and far more opinionated coding style guide for Python.

If you want to dig into some of Python's class and object features or see a detailed example of how to effectively use coroutines, Effective Python is a useful guide. Otherwise, you'll probably learn some things from this book, but it's not going to significantly change how you approach the language.

Rating: 6 out of 10

16 July, 2018 01:25AM

July 15, 2018

Dirk Eddelbuettel

RcppClassic 0.9.11

A new maintenance release, now at version 0.9.11, of the RcppClassic package arrived earlier today on CRAN. This package provides a maintained version of the otherwise deprecated initial Rcpp API, which no new projects should use as the normal Rcpp API is so much better.

Per another request from CRAN, we updated the source code in four places to no longer use dynamic exception specifications. This is something C++11 deprecated, and g++-7 and above now complain about each use. No other changes were made.

CRANberries also reports the changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 July, 2018 11:44PM

Mike Hommey

Announcing git-cinnabar 0.5.0 beta 4

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on GitHub.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 3?

  • Fixed incompatibility with Mercurial 3.4.
  • Performance and memory consumption improvements.
  • Work around networking issues while downloading clone bundles from Mozilla CDN with range requests to continue past failure.
  • Miscellaneous metadata format changes.
  • The prebuilt helper for Linux now works across more distributions (as long as libcurl.so.4 is present, it should work)
  • Updated git to 2.18.0 for the helper.
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Changed the default make rule to only build the helper.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

15 July, 2018 11:04PM by glandium

Elena Gjevukaj

Google Summer of Code with a Debian Project

GETTING ACCEPTED TO GOOGLE SUMMER OF CODE

Yes! My project proposal was selected.

First of all I want to mention that I began my open source adventure with Debian.

I started to participate in open source events like hackathons, BSPs and conferences, and to make small contributions to different projects, and this is how everything started.

WHAT IS GOOGLE SUMMER OF CODE?

Google Summer of Code is a global program focused on bringing more student developers into open source software development. Undergrads, students in graduate programs and PhD candidates can take part in GSoC. Due to the prestige of the program and the weight of Google’s name behind it, acceptance into the program is no easy task. GSoC is based on the model of different organizations providing projects for students all around the world. Each organization assigns mentors for every project, and students are paired up with mentors once they get selected.


MY PROJECT: Wizard/GUI helping students/interns apply and get started

Throughout the application process and first few weeks of programs like Google Summer of Code and Outreachy, applicants typically need to work through many things for the first time, such as creating their own domain name and blog, a mail account with proper filters, creating SSH and PGP keys, linking these keys with a GitHub account, joining mailing lists, IRC and XMPP channels, searching for free software groups in their local area and installing useful development tools on their computer. Daniel Pocock’s blog “Want to be selected for GSoC?” lists some of these steps in more detail. This project involves creating a Python script with a GUI that helps applicants and interns complete as many of these steps as possible, in less than an hour. Imagine that a student has only just installed Debian: they install this script from a package using Synaptic, and an hour later they have all their accounts, communication tools, development tools and a blog (using Jekyll/Git) up and running.
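
As a purely illustrative sketch of the kind of step the wizard automates, the Python snippet below creates an SSH key for an applicant; the function name, default key path and flags are assumptions rather than code from the actual project:

import os
import subprocess

def create_ssh_key(email, key_path=os.path.expanduser('~/.ssh/id_rsa')):
    """Generate a 4096-bit RSA key pair for the applicant, if none exists yet."""
    if os.path.exists(key_path):
        return key_path  # keep an existing key instead of overwriting it
    os.makedirs(os.path.dirname(key_path), mode=0o700, exist_ok=True)
    subprocess.check_call(['ssh-keygen', '-t', 'rsa', '-b', '4096',
                           '-C', email, '-f', key_path, '-N', ''])
    return key_path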


COMMUNITY BONDING

In practice, the community bonding period is all about, well, community bonding. Rather than jumping straight into coding, you’ve got some time to learn about your organization’s processes - release and otherwise - developer interactions, codes of conduct, etc.


WEEK 1

In the very first week of GSoC we were all preparing for OSCAL’18, one of the biggest technology conferences held in the Balkans. I was fortunate enough to meet my mentor Daniel Pocock during his visit for this conference. He helped me a lot to get started with the project.

It was the week when the mentors divided the tasks between us interns. After getting my tasks, I started researching the Python APIs, templates and other tools that I needed for my work on the project.

I read about the python-git API and about Python templating, and together with my mentor we decided which template engine was the best to use.

Event: I met the other GSoC interns here in Kosovo, Diellza Shabani and Enekelena Haxhiu, and we discussed our projects.


WEEK 2

I spent the second week of GSoC mostly learning new things and updating packages that I needed for the project. I also started writing the script to create a blog, and learning about Python and Jekyll GitHub Pages.

Why Jekyll you ask?

  • Jekyll sites are basically static sites with an extra templating language called Liquid so it’s a small step to learn if you already know HTML, CSS and JavaScript. We can actually start with a static site and then introduce Jekyll functionality as it’s needed.

For templating we decided to use the Mako template library.

<%inherit file="base.html"/>
<%
    rows = [[v for v in range(0,10)] for row in range(0,10)]
%>
<table>
    % for row in rows:
        ${makerow(row)}
    % endfor
</table>

<%def name="makerow(row)">
    <tr>
    % for name in row:
        <td>${name}</td>\
    % endfor
    </tr>
</%def>
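
For completeness, here is a minimal sketch of how a template like this could be rendered from Python with Mako; the file names templates/table.html and base.html are assumptions, and base.html would need to call ${self.body()} to pull in the inheriting page:

from mako.lookup import TemplateLookup

# Look up templates in a templates/ directory so that the
# <%inherit file="base.html"/> line can find the parent layout.
lookup = TemplateLookup(directories=['templates'])
template = lookup.get_template('table.html')

# render() returns the generated HTML as a string.
print(template.render())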

Towards the beginning of the month, Daniel recommended that we use Salsa for the development phase.


WEEK 3 & 4

At the beginning of the third week of the project I was busy with two important things.

As asked by my mentors, I did some research on what should be included in the wizard, in order to make our project more helpful. Having met first-hand with people who are new to the open source world, I knew exactly what we should include.

Things such as:

  • Email account

  • Blog created

  • Github account

  • Debian wiki personal home page

  • IRC nick

  • IRC client configuration file generated with IRC nick in it

  • SSH key

  • PGP key

  • Fingerprint slips printed, PDF generated

  • Thunderbird and Lightning configured

  • IMAP

  • Labels

  • Enigmail

  • Lightning polling events and tasks for the chosen communities

  • CardBook plugin for contacts

  • Package install

  • Domain name

The other thing was my part in creating the Jekyll blog.

I want to say that as a developer this project allowed me to explore and learn a lot of new things.


WEEK 5 & 6

I finished the first phase of the program successfully. These weeks were the most challenging so far.

I got started with the Django framework and Bootstrap, working on the dynamic blog and the compiling function.

You can take a look at my code in these repositories:

In this process I learned a lot about the Liquid templating language, Pipenv, Kivy and the Django Framework.


WEEK 7 & 8

I’m getting used to dictionaries in Python, and I wrote the function to generate the blogs from the templates; for the moment I’m in the testing phase.

def main():
    # Collect the applicant's details as template parameters.
    blog_params = createDictionary()

    blog_params['first_name'] = 'Test'
    blog_params['last_name'] = 'Student'
    blog_params['email_address'] = 'test@school.edu'

    # Generate the blog from the templates using these parameters.
    create_blog(blog_params)
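
A rough, hypothetical sketch of what the create_blog() helper could look like with Mako is below; the template names and output directory are assumptions rather than the project’s actual code:

import os
from mako.lookup import TemplateLookup

def create_blog(blog_params, site_dir='blog', template_dir='templates'):
    """Render each Mako template with the student's details and write it out."""
    lookup = TemplateLookup(directories=[template_dir])
    os.makedirs(site_dir, exist_ok=True)
    for name in ('index.html', 'about.html', '_config.yml'):
        rendered = lookup.get_template(name).render(**blog_params)
        with open(os.path.join(site_dir, name), 'w') as out:
            out.write(rendered)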

15 July, 2018 09:04AM by Elena Gjevukaj (gjevukaje@gmail.com)

Steve Kemp

Automating builds via CI

I've been overhauling the way that I build Debian packages and websites for myself. In the past I used to use pbuilder but it is pretty heavyweight and in my previous role I was much happier using Gitlab runners for building and deploying applications.

Gitlab is something I've no interest in self-hosting, and Jenkins is an abomination, so I figured I'd write down what kind of things I did and explore options again. My use-cases are generally very simple:

  • I trigger some before-steps
    • Which mostly mean pulling a remote git repository to the local system.
  • I then start a build.
    • This means running debuild, hugo, or similar.
    • This happens in an isolated Docker container.
  • Once the build is complete I upload the results somewhere.
    • Either to a Debian package-pool, or to a remote host via rsync.

Running all these steps inside a container is well-understood, but I cheat. It is significantly easier to run the before/after steps on your host - because that way you don't need to set up SSH keys in your containers, and that is required when you clone (private) remote repositories, or wish to upload to hosts via rsync over SSH.
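
To illustrate the shape of that flow, here is a rough Python sketch of such a job-runner; it is purely illustrative rather than the actual implementation, and the job dictionary layout is an assumption:

import subprocess

def run_job(job):
    """Run before-steps on the host, the build inside Docker, then after-steps."""
    for cmd in job.get('before', []):
        # e.g. pulling the remote git repository on the host
        subprocess.check_call(cmd, shell=True)

    # The build itself runs in an isolated container; the working directory
    # is mounted so the results are visible on the host afterwards.
    subprocess.check_call([
        'docker', 'run', '--rm',
        '-v', '{}:/work'.format(job['workdir']), '-w', '/work',
        job.get('image', 'debian:stable'),
        'sh', '-c', job['build'],
    ])

    for cmd in job.get('after', []):
        # e.g. uploading to a package pool, or rsync to a remote host
        subprocess.check_call(cmd, shell=True)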

Anyway I wrote more thoughts here:

Then I made it work. So that was nice.

To complete the process I also implemented a simple HTTP-server which will list all available jobs, and allow you to launch one via a click. The output of the build is streamed to your client in real-time. I didn't bother persisting output and success/failure to a database, but that would be trivial enough.

It'll tide me over until I try ick again, anyway. :)

15 July, 2018 09:01AM

July 14, 2018

Steinar H. Gunderson

Solskogen 2018

I've been so busy I've forgotten to blog: Solskogen 2018 is underway! And so is the stream. http://live.solskogen.no has it all, and Saturday evening (CEST) is the big night; the home page has the schedule (scroll down), and everything you missed is going up on the YouTube channel as soon as time and bandwidth permit. :-)

14 July, 2018 02:19AM

July 13, 2018

Sean Whitton

Debian Policy call for participation -- July 2018

Thanks to efforts from several contributors I was able to push a substantive release of Policy a little over a week ago. It contains a lot of changes that I was very pleased to release:

Changes:
 debian-policy (4.1.5.0) unstable; urgency=medium
 .
   * Policy: Update section 4.1, "Standards conformance"
     Wording: Ian Jackson
     Seconded: Sean Whitton
     Seconded: Holger Levsen
     Closes: #901160
   * Policy: Require d-devel consultation for each epoch bump
     Wording: Ian Jackson
     Seconded: Sean Whitton
     Seconded: Paul Gevers
     Closes: #891216
   * Policy: Document Rules-Requires-Root
     Wording: Niels Thykier
     Wording: Guillem Jover
     Wording: Sean Whitton
     Seconded: Paul Gevers
     Closes: #880920
   * Policy: Update version of POSIX standard for shell scripts
     Wording: Sean Whitton
     Seconded: Simon McVittie
     Seconded: Julien Cristau
     Seconded: Gunnar Wolf
     Closes: #864615
   * Policy: Update version of FHS from 2.3 to 3.0
     Wording: Simon McVittie
     Seconded: Sean Whitton
     Seconded: Julien Cristau
     Closes: #787816
   * Add reference link to section 4.1 to section 5.6.11.

We’re now looking forward to the rolling Debian Policy sprint at DebCamp18. You do not need to be physically present at DebCamp18 to participate. My goal is firstly to enable others to work on bugs by providing advice about the process and how to move things along, and secondarily to get some patches written for uncontroversial bugs.

See the linked wiki page for more information on the sprint.

13 July, 2018 04:41PM

Andrej Shadura

Upcoming git-crecord release

More than 1½ years after the first release of git-crecord, I’m preparing a big update. Not knowing exactly how many people are using it, I neglected the maintenance for some time, but last month I decided I needed to take action and fix some issues I’ve known about since the first release.

First of all, I’ve fixed a few minor issues with the setup.py-based installer that some users reported.

Second, I’ve ported a batch of updates from another crecord derivative merged into Mercurial. That also brought some updates to the bits of Mercurial code git-crecord is using.

Third, the long-awaited Python 3 support is here. I’m afraid at the moment I cannot guarantee support of patches in encodings other than the locale’s one, but if that turns out to be a needed feature, I can think about implementing it.

Fourth, missing staging and unstaging functionality is being implemented, subject to the availability of free time during the holiday :)

The project is currently hosted at GitHub: https://github.com/andrewshadura/git-crecord.

P.S. In case you’d like to support me hacking on git-crecord, or any other of my free software activities, you can tip my Patreon account.

13 July, 2018 11:10AM by Andrej Shadura

Steve Kemp

Odroid-go initial impressions

Recently I came across a hacker news post about the Odroid-go, which is a tiny hand-held console, somewhat resembling a gameboy.

In terms of hardware this is powered by an ESP32 chip, which is something I'm familiar with due to my work on Arduino, ESP8266, and other simple hardware projects.

Anyway the Odroid device costs about $40, can be programmed via the Arduino-studio, and comes by default with a series of emulators for "retro" hardware:

  • Game Boy (GB)
  • Game Boy Color (GBC)
  • Game Gear (GG)
  • Nintendo Entertainment System (NES)
  • Sega Master System (SMS)

I figured it was cheap, and because of its programmable nature it might be fun to experiment with. Though in all honesty my intentions were mostly to play NES games upon it. The fact that it is programmable means I can pretend I'm doing more interesting things than I probably would be!

Most of the documentation is available on this wiki:

The arduino setup describes how to install the libraries required to support the hardware, but then makes no mention of the board-support required to connect to an ESP32 device.

So to get a working setup you need to do two things:

  • Install the libraries & sample-code.
  • Install the board-support.

The first is documented, but here it is again:

 git clone https://github.com/hardkernel/ODROID-GO.git \
     ~/Arduino/libraries/ODROID-GO

The second is not documented. In short you need to download the esp32 hardware support via this incantation:

    mkdir -p ~/Arduino/hardware/espressif && \
     cd ~/Arduino/hardware/espressif && \
     git clone https://github.com/espressif/arduino-esp32.git esp32 && \
     cd esp32 && \
     git submodule update --init --recursive && \
     cd tools && \
     python2 get.py

(This assumes that you're using ~/Arduino for your code, but since almost everybody does ..)

Anyway the upshot of this should be that you can:

  • Choose "Tools | Boards | ODROID ESP32" to select the hardware to build for.
  • Click "File | Examples |ODROID-Go | ...." to load a sample project.
    • This can now be compiled and uploaded, but read on for the flashing-caveat.

Another gap in the instructions is that uploading projects fails. Even when you choose the correct port (Tools | Ports | ttyUSB0). To correct this you need to put the device into flash-mode:

  • Turn it off.
  • Hold down "Volume"
  • Turn it on.
  • Click Upload in your Arduino IDE.

The end result is you'll have your own code running on the device, as per this video:

Enough said. Once you do this, when you turn the device on you'll see the text scrolling around. So we've overwritten the flash with our program. Oops. No more emulation for us!

The process of getting the emulators running, or updated, is in two-parts:

  • First of all the system firmware must be updated.
  • Secondly you need to update the actual emulators.

Confusingly both of these updates are called "firmware" in various (mixed) documentation and references. The main bootloader which updates the system at boot-time is downloaded from here:

To be honest I expect you only need to do this part once, unless you're uploading your own code to it. The firmware pretty much just boots, and if it finds a magic file on the SD-card it'll copy it into flash. Then it'll either execute this new code, or execute the old/pre-existing code if no update was found.

Anyway get the tarball, extract it, and edit the two files if your serial device is not /dev/ttyUSB0:

  • eraseflash.sh
  • flashall.sh

Once you've done that, run them both, in order:

$ ./eraseflash.sh
$ ./flashall.sh

NOTE: Here again I had to turn the device off, hold down the volume button, turn it on, and only then execute ./eraseflash.sh. This puts the device into flashing-mode.

NOTE: Here again I had to turn the device off, hold down the volume button, turn it on, and only then execute ./flashall.sh. This puts the device into flashing-mode.

Anyway if that worked you'll see your blue LED flashing on the front of the device. It'll flash quickly to say "Hey there is no file on the SD-card". So we'll carry out the second step - Download and place firmware.bin into the root of your SD-card. This comes from here:

(firmware.bin contains the actual emulators.)

Insert the card. The contents of firmware.bin will be sucked into flash, and you're ready to go. Turn off your toy, remove the SD-card, load it up with games, and reinsert it.

Power on your toy and enjoy your games. Again a simple demo:

Games are laid out in a simple structure:

  ├── firmware.bin
  ├── odroid
  │   ├── data
  │   └── firmware
  └── roms
      ├── gb
      │   └── Tetris (World).gb
      ├── gbc
      ├── gg
      ├── nes
      │   ├── Lifeforce (U).nes
      │   ├── SMB-3.nes
      │   └── Super Bomberman 2 (E) [!].nes
      └── sms

You can find a template here:

13 July, 2018 09:01AM

bisco

Fifth GSoC Report

No shiny screenshots this time ;-)

In the last two weeks I’ve finished evaluating the SSO solutions. I’ve added the evaluation of ‘ipsilon’, a Python-based single sign-on solution. I’ve also updated the existing evaluations with a bit more information about their ability to work with multiple backends. And I’ve added GitLab configuration snippets for some of the solutions. Formorer asked me to create a tabular overview of the outcome of the evaluations, so I did that in the README of the corresponding Salsa repository. I’ve also pushed the code for the test client application I used to test the SSO solutions.

I used this week to look into the existing Debian SSO solution that works with certificates. The idea is not to change it, but to integrate it with the chosen OAuth2/SAML solution. To test this, I’ve pulled the code and set up my own instance of it. Fortunately it is a Django application, so I now have some experience with that. It’s not working yet, but I’m getting there.

I’ve also re-evaluated a design decision I made with nacho and came to the same conclusion: storing the temporary accounts in LDAP too is the way to go. There are also still some small feature requests I want to implement.

13 July, 2018 05:28AM

Louis-Philippe Véronneau

Taiwan Travel Blog - Day 5

The view from my hostel this morning

This is the fourth entry of my Taiwan Travel blog series! You can find posts on my first few days in Taiwan by following these links:

Old Zhuilu Road (锥麓古道)

Between all the trails I did in the Taroko National Park, I think this one was the best.

Old Zhuilu Road has quite a story behind it. It was built in the 1910s under the Japanese occupation of the Taiwan island. Most of the trail already existed, but the Japanese forced the natives to make it at least a meter wide everywhere (it used to be only 10cm wide) using dynamite.

It's quite sad that such a beautiful trail is soaked in the blood of the Taroko natives.

The path at the top of Old Zhuilu Road

The Japanese wanted to widen the trail to be able to bring cannons and weapons up the mountain to subdue the native villages resisting the occupation. They eventually won and killed a bunch of them.

Even though this trail is less taxing than the Dali-Datong trail I did two days ago, it easily scores a hard 3/5 as half of the path is made of very narrow paths on the side of a 700m cliff. Better not trip and fall!

Sadly out of the original 10km, only a third of the trail is currently open. A large typhoon destroyed part of the path a few years ago. Once you arrive at the cliffside outpost you have to turn back and start climbing down.

Night market in Xincheng (新成乡)

Tonight was the night market in Xincheng township, the rural township where my hostel is located. I was pretty surprised by how large it was (more than 50 stalls) considering that only 20'000 people reside in the whole township. The ambiance was great and of course I ended up eating too much.

Looking down into the abyss

Fried and grilled chicken hearts, fried bits of fat, fried fish balls, grilled squid, hollow egg-shaped cakes, corndogs, sausages, soft-serve ice cream, sweet black tea, I guess I should have paced myself.

Although the food stalls were numerous, there were also a large number of clothes vendors, random toys stalls for the kids and a bunch of electronic pachinko machines where old people were placing bets on one another. The kids were pretty funny: since I seemed to be the only westerner in the whole market, they kept coming in groups of 3 or 4 to laugh at me for not speaking mandarin, and then left running and squealing when I started talking to them.

Blurp! More chicken hearts please!

13 July, 2018 04:00AM by Louis-Philippe Véronneau

July 12, 2018

Petter Reinholdtsen

Simple streaming the Linux desktop to Kodi using GStreamer and RTP

Last night, I wrote a recipe to stream a Linux desktop using VLC to an instance of Kodi. During the day I received valuable feedback, and thanks to the suggestions I have been able to rewrite the recipe into a much simpler approach requiring no setup at all. It is a single script that takes care of it all.

This new script uses GStreamer instead of VLC to capture the desktop and stream it to Kodi. This fixed the video quality issue I saw initially. It further removes the need to add an m3u file on the Kodi machine, as it instead connects to the JSON-RPC API in Kodi and simply asks Kodi to play from the stream created using GStreamer. Streaming the desktop to Kodi now becomes trivial. Copy the script below, run it with the DNS name or IP address of the Kodi server to stream to as the only argument, and watch your screen show up on the Kodi screen. Note, it depends on multicast on the local network, so if you need to stream outside the local network, the script must be modified. Also note, I have no idea if audio works, as I only care about the picture part.

#!/bin/sh
#
# Stream the Linux desktop view to Kodi.  See
# http://people.skolelinux.org/pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
# for background information.

# Make sure the stream is stopped in Kodi and the gstreamer process is
# killed if something goes wrong (for example if curl is unable to find the
# kodi server).  Do the same when interrupting this script.
kodicmd() {
    host="$1"
    cmd="$2"
    params="$3"
    curl --silent --header 'Content-Type: application/json' \
	 --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
	 "http://$host/jsonrpc"
}
cleanup() {
    if [ -n "$kodihost" ] ; then
	# Stop the playing when we end
	playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
			    jq .result[].playerid)
	kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
    fi
    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
	kill "$gstpid"
    fi
}
trap cleanup EXIT INT

if [ -n "$1" ]; then
    kodihost=$1
    shift
else
    kodihost=kodi.local
fi

mcast=239.255.0.1
mcastport=1234
mcastttl=1

pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
  cut -d" " -f2|head -1)
gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
  > /dev/null 2>&1 &
gstpid=$!

# Give stream a second to get going
sleep 1

# Ask kodi to start streaming using its JSON-RPC API
kodicmd "$kodihost" Player.Open \
	"{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null

# wait for gst to end
wait "$gstpid"

I hope you find the approach useful. I know I do.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 July, 2018 03:55PM

Jonathan Dowland

The Cure's 40th Anniversary

Last Saturday I joined roughly 65,000 other people to see the Cure play a 40th Anniversary celebration gig in Hyde Park, London. It was a short gig (by Cure standards) of about 2½ hours due to the venue's strict curfew, and as predicted, the set was (for the most part) a straightforward run through the greatest hits. However, the atmosphere was fantastic. It may have been helped along by the great weather we were enjoying (over 30°C), England winning a World Cup match a few hours earlier, and the infectious joy of London Pride that took place a short trip up the road. A great time was had by all.

Last year, a friend of mine who had never listened to the Cure had asked me to recommend (only) 5 songs which would give a reasonable overview. (5 from over 200 studio recorded songs). As with Coil, this is quite a challenging task, and here's what I came up with. In most cases, the videos are from the Hyde Park show (but it's worth seeking out the studio versions too)

1. "Pictures of You"

Walking a delicate line between their dark and light songs, "Pictures of You" is one of those rare songs where the extended remix is possibly better than the original (which is not short either)

2. "If Only Tonight We Could Sleep"

I love this song. I'm a complete sucker for the Phrygian scale. I was extremely happy to finally catch it live for the first time at Hyde Park, which was my fourth Cure gig (and hopefully not my last)

The nu-metal band "Deftones" have occasionally covered this song live, and they do a fantastic job of it. They played it this year for their Meltdown appearance, and a version appears on their "B-Side and Rarities". My favourite take was from a 2004 appearance on MTV's "MTV Icon" programme honouring the Cure:

3. "Killing An Arab"

The provocatively-titled first single by the group takes its name from the pivotal scene in the Albert Camus novel "The Stranger" and is not actually endorsing the murder of people. Despite this it's an unfortunate title, and in recent years they have often performed it as "Killing Another". The song loses nothing in renaming, in my opinion.

The original recording is a sparse, tight, angular post-punk piece, but it's in the live setting that this song really shines, and it's a live version I recommend you try.

4. "Just Like Heaven"

It might be obvious that my tastes align more to the Cure's dark side than the light, but the light side can't be ignored. Most of their greatest hits and best known work are light, accessible pop classics. Choosing just one was amongst the hardest decisions to make. For the selection I offered my friend, I opted for "Friday I'm In Love", which is unabashed joy, but it didn't meet a warm reception, so I now substitute it for "Just Like Heaven".

Bonus video: someone proposed in the middle of this song!

5. "The Drowning Man"

From their "Very Dark" period, another literature-influenced track, this time Mervyn Peake's "Gormenghast": "The Drowning Man"

If you let the video run on, you'll get a bonus 6th track, similarly rarely performed live: Faith. I haven't seen either live yet. Maybe one day!

12 July, 2018 01:59PM

Louis-Philippe Véronneau

Taiwan Travel Blog - Day 4

This is the third entry of my Taiwan Travel blog series! You can find posts on my first few days in Taiwan by following these links:

The Baiyang Waterfall

I had to take care of a few things this morning so I left the hostel a little bit later than I would have liked. I've already done quite a few trails and I'm slowly starting to exhaust the places I wanted to visit in the Taroko National Park, or at least the ones I can reach via public bus.

I've got another day left here and I plan to use the mountain permit I got to visit Old Zhuilu Road (锥麓古道) and then I'll cycle back to Hualien.

Baiyang Waterfall Trail (白杨步道)

I took the bus to go to Tianxiang (天祥), a mountain station about 20km inland from the Taroko National Park entrance. I considered riding my bike there but the mountain road has a lot of blind turns and buses frequently pass one another.

I wanted to visit the Baiyang Waterfall and the Huoran temple, but only found the trail for the waterfall. I blame the heavy construction work in the area.

Formosan Rock Macaque, Yang Hsiao Chi, CC-BY-SA-4.0

Baiyang Waterfall was a very nice and leisurely trail. Paved all the way, it follows the Tacijili River (塔次基里) for a while. The trail starts by a tunnel 250m long carved into the mountain. There are no lights in there so it's pitch dark for a while!

I was very lucky to see a colony of bats sleeping with their pups at the exit of the entrance tunnel. So cute!

I also encountered a family of Formosan Rock Macaques in the forest next to the trail. Apparently, they are pretty common in Taiwan, but it was the first time I saw them. Andrew Lee tells me that if I want to meet more monkeys, I can go to the nearby hot spring and bathe with them. If it wasn't so darn hot I just might go.

12 July, 2018 04:00AM by Louis-Philippe Véronneau

Petter Reinholdtsen

Streaming the Linux desktop to Kodi using VLC and RTSP

PS: See the followup post for an even better approach.

A while back, I was asked by a friend how to stream the desktop to my projector connected to Kodi. I sadly had to admit that I had no idea, as it was a task I never had tried. Since then, I have been looking for a way to do so, preferably without much extra software to install on either side. Today I found a way that seems to kind of work. Not great, but it is a start.

I had a look at several approaches, for example using uPnP DLNA as described in 2011, but it required a uPnP server, fuse and enough local storage to store the stream locally. This is not going to work well for me, lacking enough free space, and it would be impossible for my friend to get working.

Next, it occurred to me that perhaps I could use VLC to create a video stream that Kodi could play. Preferably using broadcast/multicast, to avoid having to change any setup on the Kodi side when starting such a stream. Unfortunately, the only recipe I could find using multicast used the rtp protocol, and this protocol seems to not be supported by Kodi.

On the other hand, the rtsp protocol is working! Unfortunately I have to specify the IP address of the streaming machine in both the sending command and the file on the Kodi server. But it is showing my desktop, and thus allow us to have a shared look on the big screen at the programs I work on.

I did not spend much time investigating codecs. I combined the rtp and rtsp recipes from the VLC Streaming HowTo/Command Line Examples, and was able to get this working on the desktop/streaming end.

vlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{dst=projector.local,port=1234,sdp=rtsp://192.168.11.4:8080/test.sdp}'

I ssh-ed into my Kodi box and created a file like this with the same IP address:

echo rtsp://192.168.11.4:8080/test.sdp \
  > /storage/videos/screenstream.m3u

Note the 192.168.11.4 IP address is my desktop's IP address. As far as I can tell the IP must be hardcoded for this to work. In other words, if someone else's machine is going to do the streaming, you have to update screenstream.m3u on the Kodi machine and adjust the vlc recipe. To get started, locate the file in Kodi and select the m3u file while the VLC stream is running. The desktop then shows up on my big screen. :)

When using the same technique to stream a video file with audio, the audio quality is really bad. No idea if the problem is packet loss or bad parameters for the transcode. I do not know VLC nor Kodi enough to tell.

Update 2018-07-12: Johannes Schauer sent me a few suggestions and reminded me about an important step. The "screen:" input source is only available once the vlc-plugin-access-extra package is installed on Debian. Without it, you will see this error message: "VLC is unable to open the MRL 'screen://'. Check the log for details." He further found that it is possible to drop some parts of the VLC command line to reduce the amount of hardcoded information. It is also useful to consider using cvlc to avoid having the VLC window in the desktop view. In sum, this gives us this command line on the source end

cvlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{sdp=rtsp://:8080/}'

and this on the Kodi end

echo rtsp://192.168.11.4:8080/ \
  > /storage/videos/screenstream.m3u

Still bad image quality, though. But I did discover that streaming a DVD using dvdsimple:///dev/dvd as the source had excellent video and audio quality, so I guess the issue is in the input or transcoding parts, not the rtsp part. I've tried to change the vb and ab parameters to use more bandwidth, but it did not make a difference.

I further received a suggestion from Einar Haraldseid to try using gstreamer instead of VLC, and this proved to work great! He also provided me with the trick to get Kodi to use a multicast stream as its source. By using this monstrous oneliner, I can stream my desktop with good video quality in reasonable framerate to the 239.255.0.1 multicast address on port 1234:

gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=239.255.0.1 port=1234 ttl-mc=1 auto-multicast=1 sync=0 \
  pulsesrc device=$(pactl list | grep -A2 'Source #' | \
    grep 'Name: .*\.monitor$' |  cut -d" " -f2|head -1) ! \
  audioconvert ! queue2 ! avenc_aac ! queue2 ! mux.

and this on the Kodi end

echo udp://@239.255.0.1:1234 \
  > /storage/videos/screenstream.m3u

Note the trick to pick a valid pulseaudio source. It might not pick the one you need. This approach will of course lead to trouble if more than one source uses the same multicast port and address. Note the ttl-mc=1 setting, which limit the multicast packages to the local network. If the value is increased, your screen will be broadcasted further, one network "hop" for each increase (read up on multicast to learn more. :)!

Having cracked how to get Kodi to receive multicast streams, I could use this VLC command to stream to the same multicast address. The image quality is way better than the rtsp approach, but gstreamer seems to be doing a better job.

cvlc screen:// --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{mux=ts,dst=239.255.0.1,port=1234,sdp=sap}'

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 July, 2018 12:00AM

July 11, 2018

Ben Hutchings

Debian LTS work, June 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 12 hours, so I have carried 3 hours over to July. Since Debian 7 "wheezy" LTS ended at the end of May, I prepared for Debian 8 "jessie" to enter LTS status.

I prepared a stable update of Linux 3.16, sent it out for review, and then released it. I rebased jessie's linux package on this, but didn't yet upload it.

Since the "jessie-backports" suite is no longer accepting updates, and there are LTS users depending on the updated kernel (Linux 4.9) there, I prepared to add it to the jessie-security suite. The source package I have prepared is similar to what was in jessie-backports, but I have renamed it to "linux-4.9" and disabled building some binary packages to avoid conflicting with the standard linux source package. I also disabled building the "udeb" packages used in the installer, since I don't expect anyone to need them and building them would require updating the "kernel-wedge" package too. I didn't upload this either, since there wasn't a new linux version in "stretch" to backport yet.

11 July, 2018 01:46AM

July 10, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #167

Here’s what happened in the Reproducible Builds effort between Sunday July 1 and Saturday July 7 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

10 July, 2018 06:37AM

Minkush Jain

KubeCon + CloudNativeCon, Copenhagen

I attended KubeCon + CloudNativeCon 2018, Europe, which took place from the 2nd to the 4th of May. It was held in Copenhagen, Denmark. I know it’s been quite a while since I attended, but I still wanted to share my motivating experiences at the conference, so here it is!

I got a scholarship from the Linux Foundation, which gave me a wonderful opportunity to attend this conference. This was my first developer conference abroad and I was super-excited to attend it. I got the chance to learn more about containers, straight from the best people out there.

I attended the opening keynote sessions on 2nd May in Bella Centre, Copenhagen. The opening keynote was given by Dan Kohn, Executive Director, CNCF in an enormous hall filled with more than 4100 people. It was like everybody from the container community was present there!

The conference was very well organised considering the large scale of the event. People from all around the world were present, sharing their experience with Kubernetes.

Apart from the keynotes, I mostly attended beginner and intermediate level talks, due to the fact that some of the sessions required high technical knowledge that I didn’t possess yet.

One of the speeches that I enjoyed was given by Oliver Beattie, Head of Engineering at Monzo Bank, where he talked about the Kubernetes outage that they experienced a few months ago and how they handled its consequences.

Other talks that interested me were one on how Adidas is using cloud native technologies, and the closing remarks given by the inspiring Kelsey Hightower. It was wonderful to see the growth of cloud and container technologies and their communities.

A large number of sponsor booths were present, including Red Hat, AWS, IBM, Google Cloud and Azure. They shared the workflows and technologies they were using. I visited several booths, interacted with amazing people and got lots of stickers and goodies!

I was fortunate enough to win two raffles conducted by the sponsors! A big thank you to node.js for the Raspberry Pi kit and Mesosphere for the drone.

Raffle Winner!

Our sponsors also organised Diversity Lunch event for the scholarship recipients. The committee had a great discussion on inclusion and diversity along with excellent meals.

I had face-to-face interactions with some inspiring developers and employees of tech giants. Being among the youngest to attend the conference, I had a lot to learn from everyone around me and grow my network.

On the day before the last, an all-attendee party and dinner was held in Tivoli Gardens, in the heart of the city. The evening was filled with amusement rides, beautiful gardens, and more. What more would you expect?

Tivoli Gardens Event in Tivoli Gardens, Copenhagen

I would like to express my gratitude to CNCF, The Linux Foundation and Wendy West for this opportunity, and for helping the community become more inclusive and diverse. I look forward to attending more such events in the future!

Photographs by Cloud Native Foundation

10 July, 2018 04:00AM by Minkush Jain

Louis-Philippe Véronneau

Taiwan Travel Blog - Day 2 & 3

My Taiwan Travel blog continues! I was expecting the weather to go bad on July 10th, but the typhoon arrived late and the rain only started around 20:00. I'm pretty happy because that means I got to enjoy another beautiful day of hiking in Taroko National Park.

I couldn't find time on the 10th to sit down and blog about my trip, so this blog will also include what I did on the 11th.

Xiaozhuilu Trail (小锥麓步道)

Suspension bridge in Xiaozhuilu

The first path I did on the 10th was Xiaozhuilu, to warm up my muscles a little bit. It links the Shakadang Trail to the Taroko Visitor Center and it's both easy and enjoyable. The path is mainly composed of stairs and man-made walkways, but it's in the middle of the forest and goes by the LiWu river.

To me, the highlight of the trail was the short rope suspension bridge. How cute!

Dekalun Trail (得卡伦步道)

Once I finished the Xiaozhuilu trail, I decided I was ready for something a little more challenging. Since the park was slowly closing down because of the incoming Typhoon Maria, the only paths I could do were the ones where I didn't need to ride a bus.

I thus started climbing the Dekalun Trail, situated right behind the Taroko Visitor Center.

Although the path is very steep and goes through the wild forest/jungle, this path is also mainly man-made walkways and stairs. Here is a forest interpretation poster I really liked:

The leaves of a tree are its name cards. The name cards of the Macaranga tree are very special. They are large and round and the petiole is not on the leaf margin, it is inside the leaf blade. They are called perforated leaves and look like shields. [...] The Macaranga tree is like a spearhead. When the village here relocated and the fields were abandoned, it quickly moved in. The numerous leaves form a large umbrella that catches a large amount of sunlight and allows it to grow quickly. It can be predicted that in the future, the Macaranga will gradually be replaced by trees that are more shade tolerant. In the meantime however, its leaves, flowers and fruits are a source of food loved by the insects and birds.

A very fengshui tree yo.

Here is a bonus video of one of the giant spiders I was describing yesterday being eaten by ants. For size comparison, the half step you can see is about 10cm large...

Dali - Datong Trail (大礼-大同步道)

The Dekalun Trail ends quite abruptly and diverges into two other paths: one that goes back down and the other one that climbs to the Dali village and then continues to the Datong village.

The view from Dali Village

It was still early in the afternoon when I arrived at the crossroad so I decided that I was at least going to make it to Dali before turning back. Turns out that was a good idea, since the Dali path was a really beautiful mountainside path with a very challenging height difference. If the Dekalun Trail is a light 3/5, I'd say the Dali trail is a heavy 3/5. Although I'm in shape, I had to stop multiple times to sit down and try to cool myself. By itself the trail would be fine, but it's the 35+°C with a high level of humidity that made it challenging for me.

Once I arrived at Dali, I needed a permit to continue to Datong but the path was very easy, the weather beautiful and the view incredible, so I couldn't stop myself. I think I walked about half of the 6km trail from Dali to Datong before running out of water. Turns out 4L wasn't enough. The mixed guilt of not having a mountain permit and the concern I wouldn't have anything left to drink for a while made me turn back and start climbing down.

Still, no regrets! This trail was clearly the best one I did so far.

A Wild Andrew Appears!

So there I was in my bed after a day of hiking in the mountains, ready to go to sleep when Andrew Lee reached out to me.

He decided to come by my hostel to talk about the DebConf18 daytrip options. Turns out I'll be the one to lead the River Tracing daytrip on the MeiHua river (梅花溪). River tracing is a mix of bouldering and hiking, but in a river bed.

I'm a little apprehensive of taking the lead of the daytrip since I don't know if my mandarin will be good enough to fully understand the bus driver and the activity guide, but I'll try my best!

Anyway, once we finished talking about the daytrip, Andrew proposed we go to the Hualien night market. After telling him I wasn't able to rent a bike because of the incoming typhoon (nobody would rent me one), we swung by Carrefour (a large supermarket chain) and ended up buying a bicycle! The clerk was super nice and even gave me a lock to go with it.

I'm now the proud owner of a brand new Giant bicycle for the rest of my trip in Taiwan. In retrospect, I think this was a pretty good idea. It'll end up cheaper than renting one for a long period of time and will be pretty useful to get around during DebConf.1 It's a little small for me, but I will try to buy a longer seat post in Hualien.

Music and Smoked Flying Fish

After buying the bike, I guess we said fuck the night market and met up with one of Andrew's friends, who is a musician and owns a small recording studio. We played music with him for a while and sang songs, and then went back to Andrew's place to eat some flying fish that Andrew had smoked. We drank a little and I decided to sleep there because it was getting pretty late.

Andrew was a wonderful host and brought me back to my hostel the next day in the afternoon after showing me the Hualien beach and drinking some tea in a local teashop with me. I had a very good time.

What an eventful two days that was! Turns out the big typhoon that was supposed to hit on the 11th turned out to be a fluke and passed to the north of Taiwan: in Hualien we only had a little bit of rain. So much for the rainpocalypse I was expecting!

Language Rant bis

Short but heartfelt language rant: Jesus Christ on a paddle-board, communication in a language you don't really master is exhausting. I recently understood one of the sentences I was trying to decipher was a pun and I laughed. Then cried a little.


  1. If you plan to stay in Taiwan after DebConf and need a bicycle, I would be happy to sell it for 1500 NTD$ (40€), half of what I paid. It's a little bit cheap, but it's brand new and comes with a 1 year warranty! Better than walking if you ask me. 

10 July, 2018 04:00AM by Louis-Philippe Véronneau

July 09, 2018

Charles Plessy

Still not going to Debconf....

I was looking forward to this year's Debconf in Taiwan, the first in Asia, and the prospect of attending it with no jet lag, but I happen to be moving to Okinawa and changing jobs on August 1st, right in the middle of it...

Moving is a mixed feeling of happiness and excitement about what I am about to find, and melancholy about what and whom I am about to leave. But flights to Tôkyô and Yokohama are very affordable.

Special thanks to the Tôkyô Debian study group, where I got my GPG key signed by Debian developers a long time ago :)

09 July, 2018 08:35PM

Bálint Réczey

Run Ubuntu on Windows, even multiple releases in parallel!

Running Linux terminals on Windows needs just a few clicks since we can install Ubuntu, Debian and other distributions right from the Store as apps, without the old days’ hassle of dual-booting or starting virtual machines. It just works and it works even in enterprise environments where installation policies are tightly controlled.

If you check the Linux distribution apps based on the Windows Subsystem for Linux technology you may notice that there is not only one Ubuntu app, but there are already three: Ubuntu, Ubuntu 16.04 and Ubuntu 18.04. This is no accident. It matches the traditional Ubuntu release offering where the LTS releases are supported for long periods and there is always a recommended LTS release for production:

  • Ubuntu 16.04 (code name: Xenial) was the first release really rocking on WSL and it will be updated in the Store until 16.04’s EOL, April, 2021.
  • Ubuntu 18.04 (code name: Bionic) is the current LTS release (also rocking :-)) and the first one supporting even ARM64 systems on Windows. It will be updated in the Store until 18.04’s EOL, April, 2023.
  • Ubuntu (without the release version) always follows the recommended release, switching over to the next one when it gets the first point release. Right now it installs Ubuntu 16.04 and will switch to 18.04.1, on 26th July, 2018.

The apps in the Store are like installation kits. Each app creates a separate root file system in which Ubuntu terminals are opened but app updates don’t change the root file system afterwards. Installing a different app in parallel creates a different root file system allowing you to have both Ubuntu LTS releases installed and running in case you need it for keeping compatibility with other external systems. You can also upgrade your Ubuntu 16.04 to 18.04 by running ‘do-release-upgrade’ and have three different systems running in parallel, separating production and sandboxes for experiments.

What amazes me in the WSL technology is not only that Linux programs running directly on Windows perform surprisingly well (benchmarks), but the coverage of programs you can run unmodified without any issues and without the large memory overhead of virtual machines.

I hope you will enjoy the power of the Linux terminals on Windows at least as much as we enjoyed building the apps at Canonical, working closely with Microsoft to make it awesome!

09 July, 2018 07:50PM by Réczey Bálint

Markus Koschany

My Free Software Activities in June 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I advocated Phil Morrell to become Debian Maintainer with whom I have previously worked together on corsix-th. This month I sponsored his updates for scorched3d and the new play.it package, an installer for drm-free commercial games. Play.it is basically a collection of shell scripts that create a wrapper around games from gog.com or Steam and put them into a Debian package which is then seamlessly integrated into the user’s system.  Similar software are game-data-packager, playonlinux or lutris (not yet in Debian).
  • I packaged new upstream releases of blockattack, renpy, atomix and minetest, and also backported Minetest version 0.4.17.1 to Stretch later on.
  • I uploaded RC bug fixes from Peter de Wachter for torus-trooper, tumiki-fighters and val-and-rick and moved the packages to Git.
  • I tackled an RC bug (#897548) in yabause, a Saturn emulator.
  • I sponsored connectagram, cutemaze and tanglet updates for Innocent de Marchi.
  • Last but not least I refreshed the packaging of trophy and sauerbraten which had not seen any updates for the last couple of years.

Debian Java

  • I packaged a new upstream release of activemq and could later address #901366 thanks to a bug report by Chris Donoghue.
  • I also packaged upstream releases of bouncycastle, libpdfbox-java, libpdfbox2-java because of reported security vulnerabilities.
  • I investigated and fixed RC bugs in openjpa (#901045), osgi-foundation-ee (#893382) and ditaa (#897494, Java 10 related).
  • A snakeyaml update introduced a regression in apktool (#902666) which was only visible at runtime. Once known I could fix it.
  • I worked on Netbeans again. It can be built from source now but there is still a runtime error (#891957) that prevents users from starting the application. The current plan is to package the latest release candidate of Netbeans 9 and move forward.

Debian LTS

This was my twenty-eighth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 18.06.2018 until 24.06.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in jasperreports, 389-ds-base, asterisk, lava-server, libidn, php-horde-image, tomcat8, thunderbird, glusterfs, ansible, mercurial, php5, jquery, redis, redmine, libspring-java, php-horde-crypt, mupdf, binutils, jetty9 and libpdfbox-java.
  • DSA-4221-1. Issued a security update for libvncserver fixing 1 CVE.
  • DLA-1398-1. Issued a security update for php-horde-crypt fixing 2 CVE.
  • DLA-1399-1. Issued a security update for ruby-passenger fixing 2 CVE.
  • DLA-1411-1. Issued a security update for tiff fixing 5 CVE.
  • DLA-1410-1. Issued a security update for python-pysaml fixing 2 CVE.
  • DLA-1418-1. Issued a security update for bouncycastle fixing 7 CVE.

ELTS

Extended Long Term Support (ELTS) is a new project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my first month and I have been paid to work 7 hours on ELTS.

  • ELA-1-1. Issued a security update for Git fixing 1 CVE.
  • ELA-8-1. Issued a security update for ruby-passenger fixing 1 CVE.
  • ELA-14-1. Backported the Linux 3.16 kernel from Jessie to Wheezy. This update also included backports of initramfs-tools and the linux-latest source package. The new kernel is available for amd64 and i386 architectures.

Misc

  • I prepared security updates for libvncserver (Stretch, DSA-4221-1, and Sid) and bouncycastle (Stretch, DSA-4233-1).

Thanks for reading and see you next time.

09 July, 2018 06:06PM by Apo

Petter Reinholdtsen

What is the most supported MIME type in Debian in 2018?

Five years ago, I measured what the most supported MIME type in Debian was, by analysing the desktop files in all packages in the archive. Since then, the DEP-11 AppStream system has been put into production, making the task a lot easier. This made me want to repeat the measurement, to see how much things changed. Here are the new numbers, for unstable only this time:

Debian Unstable:

  count MIME type
  ----- -----------------------
     56 image/jpeg
     55 image/png
     49 image/tiff
     48 image/gif
     39 image/bmp
     38 text/plain
     37 audio/mpeg
     34 application/ogg
     33 audio/x-flac
     32 audio/x-mp3
     30 audio/x-wav
     30 audio/x-vorbis+ogg
     29 image/x-portable-pixmap
     27 inode/directory
     27 image/x-portable-bitmap
     27 audio/x-mpeg
     26 application/x-ogg
     25 audio/x-mpegurl
     25 audio/ogg
     24 text/html

The list was created like this using a sid chroot: "cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz| zcat | awk '/^ - \S+\/\S+$/ {print $2 }' | sort | uniq -c | sort -nr | head -20"

It is interesting to see how image formats have passed text/plain as the most announced supported MIME type. These days, thanks to the AppStream system, if you run into a file format you do not know, and want to figure out which packages support the format, you can find the MIME type of the file using "file --mime <filename>", and then look up all packages announcing support for this format in their AppStream metadata (XML or .desktop file) using "appstreamcli what-provides mimetype <mime-type>". For example if you, like me, want to know which packages support inode/directory, you can get a list like this:

% appstreamcli what-provides mimetype inode/directory | grep Package: | sort
Package: anjuta
Package: audacious
Package: baobab
Package: cervisia
Package: chirp
Package: dolphin
Package: doublecmd-common
Package: easytag
Package: enlightenment
Package: ephoto
Package: filelight
Package: gwenview
Package: k4dirstat
Package: kaffeine
Package: kdesvn
Package: kid3
Package: kid3-qt
Package: nautilus
Package: nemo
Package: pcmanfm
Package: pcmanfm-qt
Package: qweborf
Package: ranger
Package: sirikali
Package: spacefm
Package: spacefm
Package: vifm
%

Using the same method, I can quickly discover that the Sketchup file format is not yet supported by any package in Debian:

% appstreamcli what-provides mimetype  application/vnd.sketchup.skp
Could not find component providing 'mimetype::application/vnd.sketchup.skp'.
%

Yesterday I used it to figure out which packages support the STL 3D format:

% appstreamcli what-provides mimetype  application/sla|grep Package
Package: cura
Package: meshlab
Package: printrun
%

PS: A new version of Cura was uploaded to Debian yesterday.
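
Putting the two steps together for an arbitrary file could look like this (a sketch; the file path is hypothetical):

% appstreamcli what-provides mimetype "$(file --brief --mime-type ~/Downloads/model.stl)" | grep Package: | sort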

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

09 July, 2018 06:05AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Taiwan Travel Blog - Day 1

I'm going to DebConf18 later this month, and since I had some free time and I speak somewhat understandable Mandarin, I decided to take a full month of vacation in Taiwan.

I'm not sure if I'll keep blogging about this trip, but so far it's been very interesting and I felt the urge to share the beauty I've seen with the world.

This was the first proper day I spent in Taiwan. I arrived on the 8th during the afternoon, but the time I had left was all spent traveling to Hualien County (花蓮縣) where I intend to spend the rest of my time before DebConf.

Language Rant

I'm mildly annoyed at Taiwan for using traditional Chinese characters instead of simplified ones like they do in Mainland China. So yeah, even though I've been studying Mandarin for a while now, I can't read much, if anything at all. For those of you not familiar with Mandarin, here is an example of a very common character written with simplified (后) and traditional characters (後). You don't see the resemblance between the two? Me neither.

I must say technology is making my trip much easier though. I remember a time when I had to use my pocket dictionary to lookup words and characters and it used to take me up to 5 minutes to find a single character1. That's how you end up ordering cold duck blood soup from a menu without pictures after having given up on translating it.

Now, I can simply use my smartphone and draw the character I'm looking for in my dictionary app. It's fast, it's accurate and it's much more complete than a small pocket dictionary.

Taroko National Park (太鲁阁国家公园)

Since I've already seen a bunch of large cities in China and I dislike both pollution and large numbers of people squished into too few square meters, I quickly decided I wasn't going to visit Taipei and would try to move out and explore one of the many national parks in Taiwan.

After looking it up, Taroko National Park in Hualien County seemed like the best option for an extended stay. It's large enough that there is a substantial tourism economy built around visiting the multiple trails of the park, there are both beginner and advanced trails to choose from, and the scenery is incredible.

Also, Andrew Lee lives nearby and had a bunch of very nice advice for me, making my trip to Taroko much easier.

Swallow Gorge (燕子口)

Picture of the LiWu river in Yanzikou

The first trail I visited in the morning was Swallow Gorge. Apparently it's frequently closed because of falling rocks. Since the weather was very nice and the trail was open, I decided to start with this one.

Fun fact: at first I thought the swallow in Swallow Gorge meant swallowing, but it is swallow as in the cute bird commonly associated with springtime. The gorge is named that way because the small holes in the cliffs are used by swallows to nest. I kinda understood that when I saw a bunch of them diving and playing in the wind in front of me.

The Gorge was very pretty, but it was full of tourists and the "trail" was actually a painted line next to the road where cars drive. It was also pretty short. I guess that's OK for a lot of people, but I was looking for something a little more challenging and less noisy.

Shakadang Trail (砂卡礑步道)

The second trail I visited was the Shakadang trail. The trail dates back to 1940, when the Japanese tried to use the Shakadang river for hydroelectricity.

Shakadang's river water is bright blue and extremely clear

This trail was very different from Yanzikou, being in the wild and away from cars. It was a pretty easy trail (2/5) and although part of it was paved with concrete, the further you went the wilder it got. In fact, most of the tourists gave up after the first kilometer and I had the rest of the path to myself afterwards.

Some cute purple plant growing along the river

The path is home to a variety of wild animals, plants and insects. I didn't see any wild boar, but gosh damn did I see some freakishly huge spiders. As I learnt later, Taiwan is home to some of the largest spiders in the world. The ones I saw (golden silk orb-weaver, Nephila pilipes) had bodies easily 3 to 5cm long and 2cm thick, with an overall span of 20cm with their legs.

I also heard some bugs (I guess they were bugs) making a huge racket that somewhat reminded me of an old car's loose alternator belt on a cold winter morning.


  1. Using a Chinese dictionary is a hard thing to do since there is no alphabet. Instead, the characters are classified by the number of strokes in their radicals and then by the number of strokes in the rest of the character. 

09 July, 2018 04:00AM by Louis-Philippe Véronneau

Ian Wienand

uwsgi; oh my!

The world of Python-based web applications, WSGI, its interaction with uwsgi and various deployment methods can quickly turn into an incredible array of confusingly named acronym soup. If you jump straight into the uwsgi documentation it is almost certain you will get lost before you start!

Below I try to lay out a primer on the foundations of application deployment within devstack, a tool for creating a self-contained OpenStack environment for testing and interactive development. It is hopefully of more general interest for those new to some of these concepts too.

WSGI

Let's start with WSGI. Fully described in PEP 333 -- Python Web Server Gateway Interface, the core concept is a standardised way for a Python program to be called in response to a web request. In essence, it bundles the parameters from the incoming request into known objects, and gives you an object to put data into that will get back to the requesting client. The "simplest application", taken directly from the PEP and reproduced below, highlights this perfectly:

def simple_app(environ, start_response):
     """Simplest possible application object"""
     status = '200 OK'
     response_headers = [('Content-type', 'text/plain')]
     start_response(status, response_headers)
     return ['Hello world!\n']

You can start building frameworks on top of this, yet still maintain broad interoperability as you build your application. There is plenty more to it, but that's all you need to follow for now.

Using WSGI

Your WSGI based application needs to get a request from somewhere. We'll refer to the diagram below for discussions of how WSGI based applications can be deployed.

Overview of some WSGI deployment methods

In general, this is illustrating how an API end-point http://service.com/api/ might be connected together to an underlying WSGI implementation written in Python (web_app.py). Of course, there are going to be layers and frameworks and libraries and heavens knows what else in any real deployment. We're just concentrating on Apache integration -- the client request hits Apache first and then gets handled as described below.

CGI

Starting with 1 in the diagram above, we see CGI, or "Common Gateway Interface". This is the oldest and most generic method of a web server calling an external application in response to an incoming request. The details of the request are put into environment variables and whatever process is configured to respond to that URL is fork()-ed. In essence, whatever comes back from stdout is sent back to the client and then the process is killed. The next request comes in and it starts all over again.

This can certainly be done with WSGI; above we illustrate that you'd have a framework layer that would translate the environment variables into the python environ object and connect up the processes output to gather the response.

The advantage of CGI is that it is the lowest common denominator of "call this when a request comes in". It works with anything you can exec, from shell scripts to compiled binaries. However, forking processes is expensive, and parsing the environment variables involves a lot of fiddly string processing. These become issues as you scale.
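
To make this concrete, a CGI handler can be as small as a shell script (a sketch, not tied to any particular server configuration): the server exports request details such as QUERY_STRING into the environment, and whatever the script writes to stdout is returned to the client.

#!/bin/sh
# Minimal CGI responder: headers first, then a blank line, then the body.
echo "Content-Type: text/plain"
echo ""
echo "Hello world! Query string was: ${QUERY_STRING}"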

Modules

Illustrated by 2 above, it is possible to embed a Python interpreter directly into the web server and call the application from there. This is broadly how mod_python, mod_wsgi and mod_uwsgi all work.

The overheads of marshaling arguments into strings via environment variables, then unmarshaling them back to Python objects can be removed in this model. The web server handles the tricky parts of communicating with the remote client, and the module "just" needs to translate the internal structures of the request and response into the Python WSGI representation. The web server can manage the response handlers directly leading to further opportunities for performance optimisations (more persistent state, etc.).

The problem with this model is that your web server becomes part of your application. This may sound a bit silly -- of course if the web server doesn't take client requests nothing works. However, there are several situations where (as usual in computer science) a layer of abstraction can be of benefit. Being part of the web server means you have to write to its APIs and, in general, its view of the world. For example, mod_uwsgi documentation says

"This is the original module. It is solid, but incredibly ugly and does not follow a lot of apache coding convention style".

mod_python is deprecated with mod_wsgi as the replacement. These are obviously tied very closely to internal Apache concepts.

In production environments, you need things like load-balancing, high-availability and caching that all need to integrate into this model. Thus you will have to additionally ensure these various layers all integrate directly with your web server.

Since your application is the web server, any time you make small changes you essentially need to manage the whole web server; often with a complete restart. Devstack is a great example of this; where you have 5-6 different WSGI-based services running to simulate your OpenStack environment (compute service, network service, image service, block storage, etc) but you are only working on one component which you wish to iterate quickly on. Stopping everything to update one component can be tricky in both production and development.

uwsgi

Which brings us to uwsgi (I call this "micro-wsgi" but I don't know if it is actually intended to be a μ). uwsgi is a real Swiss Army knife, and can be used in contexts that have nothing to do with Python or WSGI -- which I believe is why you can get quite confused if you just start looking at it in isolation.

uwsgi lets us combine some of the advantages of being part of the web server with the advantages of abstraction. uwsgi is a complete pluggable network daemon framework, but we'll just discuss it in one context illustrated by 3.

In this model, the WSGI application runs separately from the web server, within the embedded Python interpreter provided by the uwsgi daemon. uwsgi is, in part, a web server -- as illustrated it can talk HTTP directly if you want it to, which can be exposed directly or via a traditional proxy.

By using the proxy extension mod_proxy_uwsgi we can have the advantage of being "inside" Apache and forwarding the requests via a lightweight binary channel to the application back end. In this model, uwsgi provides a uwsgi:// service using its internal protocol on a private port. The proxy module marshals the request into small packets and forwards it to the given port. uwsgi takes the incoming request, quickly unmarshals it and feeds it into the WSGI application running inside. Data is sent back via similarly fast channels as the response (note you can equally use file-based Unix sockets for local-only communication).
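
As a rough sketch of that model (the port, module name and process count here are illustrative, reusing the names from the diagram rather than anything from devstack), the back end could be started with something like:

# Serve web_app.py over the native uwsgi protocol on a private port
uwsgi --socket 127.0.0.1:40000 --wsgi-file web_app.py --callable simple_app --processes 2

On the Apache side, mod_proxy_uwsgi then forwards a URL prefix to that port, for example with ProxyPass "/api/" "uwsgi://127.0.0.1:40000/".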

Now your application has a level of abstraction to your front end. At one extreme, you could swap out Apache for some other web server completely and feed in requests just the same. Or you can have Apache start to load-balance out requests to different backend handlers transparently.

The model works very well for multiple applications living in the same namespace. For example, in the Devstack context, it's easy with mod_proxy to have Apache doing URL matching and separate out each incoming request to its appropriate back end service; e.g.

  • http://service/identity gets routed to Keystone running at localhost:40000
  • http://service/compute gets sent to Nova at localhost:40001
  • http://service/image gets sent to glance at localhost:40002

and so on (you can see how this is exactly configured in lib/apache:write_uwsgi_config).

When a developer makes a change they simply need to restart one particular uwsgi instance with their change and the unified front-end remains untouched. In Devstack (as illustrated) the uwsgi processes are further wrapped into systemd services which facilitates easy life-cycle and log management. Of course you can imagine you start getting containers involved, then container orchestrators, then clouds-on-clouds ...

Conclusion

There's no right or wrong way to deploy complex web applications. But using an Apache front end, proxying requests via fast channels to isolated uwsgi processes running individual WSGI-based applications can provide both good performance and implementation flexibility.

09 July, 2018 03:45AM by Ian Wienand

July 08, 2018

hackergotchi for Jonathan McDowell

Jonathan McDowell

Fixing a broken ESP8266

One of the IoT platforms I’ve been playing with is the ESP8266, which is a pretty incredible little chip with dev boards available for under £4. Arduino and Micropython are both great development platforms for them, but the first board I bought (back in 2016) only had a 4Mbit flash chip. As a result I spent some time writing against the Espressif C SDK and trying to fit everything into less than 256KB so that the flash could hold 2 images and allow over-the-air updates. Annoyingly, just as I was getting to the point of success with Richard Burton’s rBoot, my device started misbehaving, even when I went back to the default boot loader:

 ets Jan  8 2013,rst cause:1, boot mode:(3,6)

load 0x40100000, len 816, room 16
tail 0
chksum 0x8d
load 0x3ffe8000, len 788, room 8
tail 12
chksum 0xcf
ho 0 tail 12 room 4
load 0x3ffe8314, len 288, room 12
tail 4
chksum 0xcf
csum 0xcf

2nd boot version : 1.2
  SPI Speed      : 40MHz
  SPI Mode       : DIO
  SPI Flash Size : 4Mbit
jump to run user1

Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):

(repeats indefinitely)

Various things suggested this was a bad flash. I tried a clean Micropython install, a restore of the original AT firmware backup I’d taken, and lots of different combinations of my own code/the blinkenlights demo and rBoot/Espressif’s bootloader. I made sure my 3.3v supply had enough oomph (I’d previously been cheating and using the built-in FT232RL regulator, which doesn’t have quite enough when the device is fully operational, rather than in UART boot mode, such as when doing an OTA flash). No joy. I gave up and moved on to one of the other ESP8266 modules I had, with a greater amount of flash. However I was curious about whether this was simply a case of the flash chip wearing out (various sites claim the cheap ones on some dev boards will die after a fairly small number of programming cycles). So I ordered some 16Mb devices - cheap enough to make it worth trying out, but also giving a useful bump in space.

They arrived this week and I set about removing the old chip and soldering on the new one (Andreas Spiess has a useful video of this, or there’s Pete Scargill’s write up). Powered it all up, ran esptool.py flash_id to see that it was correctly detected as a 16Mb/2MB device and set about flashing my app onto it. Only to get:

 ets Jan  8 2013,rst cause:2, boot mode:(3,3)

load 0x40100000, len 612, room 16
tail 4
chksum 0xfd
load 0x88380000, len 565951362, room 4
flash read err, ets_unpack_flash_code
ets_main.c

Ooops. I had better luck with a complete flash erase (esptool.py erase_flash) and then a full program of Micropython using esptool.py --baud 460800 write_flash --flash_size=detect -fm dio 0 esp8266-20180511-v1.9.4.bin, which at least convinced me I’d managed to solder the new chip on correctly. Further experimentation revealed I needed to pass all of the flash parameters to esptool.py to get rBoot entirely happy, and include esp_init_data_default.bin (FWIW I updated everything to v2.2.1 as part of the process):

esptool.py write_flash --flash_size=16m -fm dio 0x0 rboot.bin 0x2000 rom0.bin \
    0x120000 rom1.bin 0x1fc000 esp_init_data_default_v08.bin

Which gives (at the bootloader’s default 76200 baud):

 ets Jan  8 2013,rst cause:1, boot mode:(3,7)

load 0x40100000, len 1328, room 16
tail 0
chksum 0x12
load 0x3ffe8000, len 604, room 8
tail 4
chksum 0x34
csum 0x34

rBoot v1.4.2 - richardaburton@gmail.com
Flash Size:   16 Mbit
Flash Mode:   DIO
Flash Speed:  40 MHz

Booting rom 0.
rf cal sector: 507
freq trace enable 0
rf[112]

Given the cost of the modules it wasn’t really worth my time and energy to actually fix the broken one rather than buying a new one, but it was rewarding to be sure of the root cause. Hopefully this post at least serves to help anyone seeing the same exception messages determine that there’s a good chance their flash has died, and that a replacement may sort the problem.

08 July, 2018 02:21PM

Petter Reinholdtsen

Debian APT upgrade without enough free space on the disk...

Quite regularly, I let my Debian Sid/Unstable chroot stay untouched for a while, and when I need to update it there is not enough free space on the disk for apt to do a normal 'apt upgrade'. I normally would resolve the issue by doing 'apt install <somepackages>' to upgrade only some of the packages in one batch, until the amount of packages to download falls below the amount of free space available. Today, I had about 500 packages to upgrade, and after a while I got tired of trying to install chunks of packages manually. I concluded that I did not have the spare hours required to complete the task, and decided to see if I could automate it. I came up with this small script which I call 'apt-in-chunks':

#!/bin/sh
#
# Upgrade packages when the disk is too full to upgrade every
# upgradable package in one lump.  Fetching packages to upgrade using
# apt, and then installing using dpkg, to avoid changing the package
# flag for manual/automatic.

set -e

ignore() {
    if [ "$1" ]; then
	grep -v "$1"
    else
	cat
    fi
}

for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do
    echo "Upgrading $p"
    apt clean
    apt install --download-only -y $p
    for f in /var/cache/apt/archives/*.deb; do
	if [ -e "$f" ]; then
	    dpkg -i /var/cache/apt/archives/*.deb
	    break
	fi
    done
done

The script will extract the list of packages to upgrade, download the packages needed to upgrade one package at a time, and install the downloaded packages using dpkg. The idea is to upgrade packages without changing the APT mark for the package (i.e. the one recording whether the package was manually requested or pulled in as a dependency). To use it, simply run it as root from the command line. If it fails, try 'apt install -f' to clean up the mess and run the script again. This might happen if a new package conflicts with one of the old packages that dpkg is unable to remove, while apt can do this.

It takes one option, a pattern for packages to ignore in the list of packages to upgrade. The option to ignore packages is there to be able to skip the packages that are simply too large to unpack. Today this was 'ghc', but I have run into other large packages causing similar problems earlier (like TeX).
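
For example, today's run skipping ghc would look like this (a usage sketch, run as root, assuming the script above is saved as apt-in-chunks and made executable):

# Upgrade everything apt lists as upgradable, except packages matching "ghc"
./apt-in-chunks ghc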

Update 2018-07-08: Thanks to Paul Wise, I am aware of two alternative ways to handle this. The "unattended-upgrades --minimal-upgrade-steps" option will try to calculate upgrade sets for each package to upgrade, and then upgrade them in order, smallest set first. It might be a better option than my above mentioned script. Also, "aptitude upgrade" can upgrade single packages, thus avoiding the need for using "dpkg -i" in the script above.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

08 July, 2018 10:10AM

Minkush Jain

Getting Started with Debian Packaging

One of my tasks in GSoC involved setting up Thunderbird extensions for the user. Some of the more popular add-ons like ‘Lightning’ (calendar organiser) already have a Debian package.

Another important add-on is ‘Cardbook’, which is used to manage contacts for the user based on the CardDAV and vCard standards. But it doesn’t have a package yet.

My mentor, Daniel motivated me to create a package for it and upload it to mentors.debian.net. It would ease the installation process as it could get installed through apt-get. This blog describes how I learned and created a Debian package for CardBook from scratch.

Since I was new to packaging, I did extensive research on the basics of building a package from source code and checked if the license was DFSG-compatible.

I learned from various Debian wiki guides like ‘Packaging Intro’, ‘Building a Package’ and blogs.

I also studied the amd64 files included in Lightning extension package.

The package I created could be found here.


Creating an empty package

I started by creating a debian directory using the dh_make command:


# Empty project folder
$ mkdir -p Debian/cardbook

# create files
$ dh_make \
> --native \
> --single \
> --packagename cardbook_1.0.0 \
> --email minkush@example.com

Some important files like control, rules, changelog, copyright are initialized with it.

The list of all the files created:


$ find debian/
debian/
debian/rules
debian/preinst.ex
debian/cardbook-docs.docs
debian/manpage.1.ex
debian/install
debian/source
debian/source/format
debian/cardbook.debhelper.lo
debian/manpage.xml.ex
debian/README.Debian
debian/postrm.ex
debian/prerm.ex
debian/copyright
debian/changelog
debian/manpage.sgml.ex
debian/cardbook.default.ex
debian/README
debian/cardbook.doc-base.EX
debian/README.source
debian/compat
debian/control
debian/debhelper-build-stamp
debian/menu.ex
debian/postinst.ex
debian/cardbook.substvars
debian/files

I gained an understanding of the dpkg package management program in Debian and its use to install, remove and manage packages.

I built an empty package with dpkg commands. This created an empty package with four files, namely .changes, .deb, .dsc and .tar.gz:

The .dsc file describes the source package and carries the signature

The .deb file is the binary package which can be installed

The .tar.gz (tarball) contains the source of the package
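
The build step itself is not spelled out above; one common way to do it with the dpkg toolchain is dpkg-buildpackage, roughly like this (a sketch, run from the package's top-level directory, with -us -uc skipping GPG signing):

$ dpkg-buildpackage -us -uc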

The process also created the README and changelog files under /usr/share/doc/cardbook. They contain the essential notes about the package like the description, author and version.

I installed the package and checked the installed package contents. My new package mentions the version, architecture and description!


$ dpkg -L cardbook
/usr
/usr/share
/usr/share/doc
/usr/share/doc/cardbook
/usr/share/doc/cardbook/README.Debian
/usr/share/doc/cardbook/changelog.gz
/usr/share/doc/cardbook/copyright


Including CardBook source files

After successfully creating an empty package, I added the actual CardBook add-on files inside the package. CardBook’s codebase is hosted on GitLab here. I included all the source files inside another directory and told the build command which files to include in the package.

I did this by creating a file debian/install using the vi editor and listing the directories that should be installed. In this process I spent some time learning to use Linux terminal-based text editors like vi. It helped me become familiar with editing, creating new files and shortcuts in vi.

Once this was done, I updated the package version in the changelog file to document the changes that I had made.


$ dpkg -l | grep cardbook
ii  cardbook       1.1.0          amd64        Thunderbird add-on for address book


Changelog file after updating the package

After rebuilding it, dependencies and a detailed description can be added if necessary. The Debian control file can be edited to add the additional package requirements and dependencies.

Local Debian Repository

Without creating a local repository, CardBook could be installed with:


$ sudo dpkg -i cardbook_1.1.0.deb

To actually test the installation of the package, I decided to build a local Debian repository. Without it, the apt-get command would not locate the package, as it is not uploaded to the official Debian archive.

For configuring a local Debian repository, I copied my package (.deb) into a /tmp location and generated a Packages.gz index file there.

Local Debian repository (Packages.gz)

To make it work, I learned about the apt configuration and where it looks for files.

I researched a way to add my file location to the APT configuration. Finally I accomplished the task by adding a *.list file with the package’s path to APT’s sources and updating the apt cache afterwards.
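
A minimal sketch of the whole local-repository setup (the directory and file names here are illustrative, not the exact paths I used):

$ mkdir -p /tmp/debs && cp cardbook_1.1.0.deb /tmp/debs/
$ cd /tmp/debs && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
$ echo 'deb [trusted=yes] file:/tmp/debs ./' | sudo tee /etc/apt/sources.list.d/cardbook-local.list
$ sudo apt-get update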

Hence, the latest CardBook version could be successfully installed by apt-get install cardbook

CardBook installation through apt-get

Fixing Packaging errors and bugs

My mentor, Daniel, helped me a lot during this process and guided me on how to proceed further with the package. He told me to use Lintian for fixing common packaging errors and then to use dput to finally upload the CardBook package.

Lintian is a Debian package checker which finds policy violations and bugs. It is one of the most widely used tools by Debian maintainers to automate checks against Debian policies before uploading a package.

I have uploaded the second updated version of the package in a separate branch of the repository on Salsa here inside Debian directory.

I installed Lintian from backports and learned to use it on a package to fix errors. I researched the abbreviations used in its errors and how to show detailed output from lintian commands:


$ lintian -i -I --show-overrides cardbook_1.2.0.changes

Initially on running the command on the .changes file, I was surprised to see that a large number of errors, warnings and notes were displayed!

Brief errors after running Lintian on the package

Detailed Lintian errors (1)

Detailed Lintian errors (2), and many more…

I spent some days fixing errors related to Debian policy violations. I had to dig into every policy and Debian rule carefully to eradicate even a simple error. For this I referred to various sections of the Debian Policy Manual and the Debian Developer’s Reference.

I am still working on making it flawless and hope to upload it on mentors.debian.net soon!

I would be grateful if people from the Debian community who use Thunderbird could help fix these errors.

08 July, 2018 04:00AM by Minkush Jain

July 07, 2018

Dominique Dumont

New Software::LicenseMoreUtils Perl module

Hello

The Debian project has rather strict requirements regarding package licenses. One of these requirements is to provide a copyright file mentioning the license of the files included in a Debian package.

Debian also recommends providing this copyright information in a machine-readable format that contains the whole text of the license(s) or a summary pointing to a pre-defined location on the file system (see this example).

cme and Config::Model::Dpkg::Copyright help with this task using the Software::License module. But this module lacks the following features to properly support the requirements of Debian packaging:

  • license summary
  • support for clauses like “GPL version 2 or (at your option) any later version”

Long story short, I’ve written Software::LicenseMoreUtils to provide these missing features. This module is a wrapper around Software::License and has the same API.

Adding license summaries for Debian requires only to update this YAML file.

This module was written for Debian while keeping other distros in mind. Debian derivatives like Ubuntu or Mint are supported. Adding license summaries for other Linux distributions is straightforward. Please submit a bug or a PR to add support for other distributions.

For more details, please see:

 

All the best

07 July, 2018 05:25PM by dod

Craig Small

wordpress 4.9.7

No sooner had I patched WordPress 4.9.5 to fix the arbitrary unlink bug than I realised there is a WordPress 4.9.7 out there. This release (just out for Debian, if my Internet behaves) fixes the unlink bug found by RIPS Technologies.  However, the WordPress developers used a different method to fix it.

There will be Debian backports for WordPress that use one of these methods. It will come down to whether those older versions use hooks and how different the code is in post.php.

You should update, and if you don’t like WordPress deleting or editing its own files, perhaps consider using AppArmor.

07 July, 2018 12:50PM by Craig

July 06, 2018

hackergotchi for Clint Adams

Clint Adams

Solve for q

f 0   =  1.5875
f 0.5 =  1.5875
f 1   =  3.175
f 2   =  6.35
f 3   =  9.525
f 4   = 12.7
f 5   = 15.875
f 6   = 19.05
f 7   = 22.225
f 8   = 25.4
f 9   = 28.575
f 10  = 31.75
Posted on 2018-07-06
Tags: barks

06 July, 2018 10:13PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Newcastle University Historic Computing

some of our micro computers

Since first writing about my archiving activities in 2012 I've been meaning to write an update on what I've been up to, but I haven't got around to it. This, however, is notable enough to be worth writing about!

In the last few months I became chair of the Historic Computing Committee at Newcastle University. We are responsible for a huge collection of historic computing artefacts from the University's past, going back to the 1950s, which has been almost single-handedly assembled and curated over the course of decades by the late Roger Broughton, who did much of the work in his retirement.

Segment of IBM/360 mainframe

Sadly, Roger died in 2016.

Recently there has been an upsurge of interest and support for our project, partly as a result of other volunteers stepping in and partly due to the School of Computing moving to a purpose-built building and celebrating its 60th birthday.

We've managed to secure some funding from various sources to purchase proper, museum-grade storage and display cabinets. Although portions of the collection have been exhibited for one-off events, including School open days, this will be the first time that a substantial portion of the collection will be on (semi-)permanent public display.

Amstrad PPC640 portable PC

Things have been moving very quickly recently. I am very happy to announce that the initial public displays will be unveiled as part of the Great Exhibition of the North! Most of the details are still TBC, but if you are interested you can keep an eye on this A History Of Computing events page.

For more about the Historic Computing Committee, the cs-history Special Interest Group and related stuff, you can follow the CS History SIG blog, which we will hopefully be updating more often going forward. For the Historic Computing Collection specifically, please see The Roger Broughton Museum of Computing Artefacts.

06 July, 2018 01:54PM

hackergotchi for Holger Levsen

Holger Levsen

20180706-rise-of-the-machines

Rise of the machines

Last week I was in a crowd of 256 people watching and cheering Compressorhead; some were stage-diving. Truly awesome.

06 July, 2018 12:12PM

July 05, 2018

Thorsten Alteholz

My Debian Activities in June 2018

FTP master

This month I accepted 166 packages and rejected only 7 uploads. The overall number of packages that got accepted this month was 216.

Debian LTS

This was my forty-eighth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75 hours. During that time I did LTS uploads of:

  • [DLA 1404-1] lava-server security update for one CVE
  • [DLA 1403-1] zendframework security update for one CVE
  • [DLA 1409-1] mosquitto security update for two CVE
  • [DLA 1408-1] simplesamlphp security update for two CVE

I also prepared a test package for slurm-llnl but got no feedback yet *hint* *hint*.

This month has been the end of Wheezy LTS and the beginning of Jessie LTS. After asking Ansgar, I did the reconfiguration of the upload queues on seger to remove the embargoed queue for Jessie and reduce the number of supported architectures.

Further I started to work on opencv.

Unfortunately the normal locking mechanism for work on packages, claiming the package in dla-needed.txt, did not really work during the transition. As a result I worked on libidn and mercurial in parallel with others. There seems to be room for improvement for the next transition.

Last but not least I did one week of frontdesk duties.

Debian ELTS

This month was the first ELTS month.

During my allocated time I made the first CVE triage in my week of frontdesk duties, extended the check-syntax part in the ELTS security tracker and uploaded:

  • ELA-3-1 for file
  • ELA-4-1 for openssl

Other stuff

During June I continued the libosmocore transition but could not finish it. I hope I can upload all missing packages in July.

Further I continued to sponsor some glewlwyd packages for Nicolas Mora.

The DOPOM package for this month was dvbstream.

I also uploaded a new upstream version of …

05 July, 2018 07:05PM by alteholz

July 04, 2018

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in June 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I merged a branch adding appstream related data (thanks to Matthias Klumpp). I merged multiple small contributions from a new contributor: Lev Lazinskiy submitted a fix to have the correct version string in the documentation and ensured that we could not use the login page if we are already identified (see MR 36).

Arthur Del Esposte continued his summer of code project and submitted multiple merge requests that I reviewed multiple times before they were ready to be merged. He implemented a team search feature and created a small framework to display an overview of all the packages of a team.

On a more administrative level, I had to deal with many subscriptions that became immediately invalid when alioth.debian.org shut down. So I tried to replace all email subscriptions using *@users.alioth.debian.org with alternate emails linked to the same account. When no fallback was possible, I simply deleted the subscription.

pkg-security work

I sponsored cewl 5.4.3-1 (new upstream release) and wfuzz_2.2.11-1.dsc (new upstream release), masscan 1.0.5+ds1-1 (taken over by the team, new upstream release) and wafw00f 0.9.5-1 (new upstream release). I sponsored wifite2, made the unittests run during the build and added some autopkgtest. I submitted a pull request to skip tests when some tools are unavailable.

I filed #901595 on reaver to get a fixed watch file.

Misc Debian work

I reviewed multiple merge requests on live-build (about its handling of archive keys and the associated documentation). I uploaded a new version of live-boot (20180603) with the pending changes.

I sponsored pylint-django 0.11-1 for Joseph Herlant, xlwt 1.3.0-2 (bug fix) and python-num2words_0.5.6-1~bpo9+1 (backport requested by a user).

I uploaded a new version of ftplib fixing a release critical bug (#901224: ftplib FTCBFS: uses the build architecture compiler).

I submitted two patches to git (fixing french l10n in git bisect and marking two strings for translation).

I reviewed multiple merge requests on debootstrap: make --unpack-tarball no longer downloads anything, --components not carried over with --foreign/--second-stage and enabling --merged-usr by default.

Thanks

See you next month for a new summary of my activities.


04 July, 2018 01:35PM by Raphaël Hertzog

July 03, 2018

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.1

A new minor release of the anytime package is now on CRAN. This is the twelfth release, and the first in a little over a year as the package has stabilized.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release adds a few minor tweaks. For numeric input, the function no longer modifies its argument: inputs are cloned before being cast to the return object type (a common use case), so the caller’s object is never changed. We also added two assertion helpers for dates and datetimes, a new formatting helper for the (arguably awful, but common) ‘yyyymmdd’ format, and expanded some unit tests.

Changes in anytime version 0.3.1 (2018-06-05)

  • Numeric input is now preserved rather than silently cast to the return object type (#69 fixing #68).

  • New assertion function assertDate() and assertTime().

  • Unit tests were expanded for the new functions, for conversion from integer as well as for yyyymmdd().

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

03 July, 2018 08:36PM

hackergotchi for Ana Beatriz Guerrero Lopez

Ana Beatriz Guerrero Lopez

Introducing debos, a versatile images generator

In Debian and derivative systems, there are many ways to build images. The simplest tool of choice is often debootstrap. It works by downloading the .deb files from a mirror and unpacking them into a directory which can eventually be chrooted into.
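
For reference, a bare debootstrap run followed by a chroot looks something like this (a sketch; the target directory is illustrative):

$ sudo debootstrap --arch=amd64 stretch /srv/chroot/stretch http://deb.debian.org/debian
$ sudo chroot /srv/chroot/stretch /bin/bash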

More often than not, we want to make some customizations to this image: install some extra packages, run a script, add some files, etc.

debos is a tool to make these kinds of trivial tasks easier. debos works using recipe files in YAML, listing the actions you want to perform on your image sequentially and, finally, choosing the output formats.

Unlike debootstrap and other tools, debos doesn't need to be run as root to perform actions that require root privileges in the images. debos uses fakemachine, a library that sets up qemu-system, allowing you to work in the image with root privileges and to create images for all the architectures supported by qemu-user. However, for this to work, make sure your user has permission to use /dev/kvm.
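
A quick sanity check before the first run could be (a sketch; group names vary between distributions):

$ ls -l /dev/kvm
$ id -nG | grep -qw kvm || echo "consider adding your user to the kvm group"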

Let's see how debos works with a simple example. If we wanted to create an arm64 image for Debian Stretch customized, we would follow these steps:

  • debootstrap the image
  • install the packages we need
  • create a user
  • set up our preferred hostname
  • run a script creating a user
  • copy a file adding the user to sudoers
  • create a tarball with the final image

This would translate into a debos recipe like this one:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror: http://deb.debian.org/debian
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/setup-user.sh

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

(The files used in this example are available from this git repository)

We run debos on the recipe file:

$ debos simple.yaml

The result will be a tarball named debian-stretch-arm64.tar.gz. If you check the top two lines of the recipe, you can see that the recipe defaults to architecture arm64 and Debian stretch. We can override these defaults when running debos:

$ debos -t suite:"buster" -t architecture:"amd64" simple.yaml

This time the result will be a tarball debian-buster-amd64.tar.gz.

The recipe allows some customization depending on the parameters. We could install packages depending on the target architecture, for example, installing python-libsoc in armhf and arm64:

- action: apt
  recommends: false
  packages:
    - adduser
    - sudo
{{- if eq $architecture "armhf" "arm64" }}
    - python-libsoc
{{- end }}

What happens if, in addition to a tarball, we would like to create a filesystem image? This could be done by adding two more actions to our example: a first action creating a partitioned image with the selected filesystem and a second one deploying the root filesystem onto that image:

- action: image-partition
  imagename: {{ $ext4 }}
  imagesize: 1GB
  partitiontype: msdos
  mountpoints:
    - mountpoint: /
      partition: root
  partitions:
    - name: root
      fs: ext4
      start: 0%
      end: 100%
      flags: [ boot ]

- action: filesystem-deploy
  description: Deploying filesystem onto image

{{ $ext4 }} should be defined at the top of the file as follows:

{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

We could even make this step optional, so that by default the recipe only creates the tarball and builds the filesystem image only when an extra option is passed to debos:

$ debos -t type:"full" full.yaml

The final debos recipe will look like this:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $type := or .type "min" }}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}
{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror: http://deb.debian.org/debian
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo
{{- if eq $architecture "armhf" "arm64" }}
      - python-libsoc
{{- end }}

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/setup-user.sh

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

{{ if eq $type "full" }}
  - action: image-partition
    imagename: {{ $ext4 }}
    imagesize: 1GB
    partitiontype: msdos
    mountpoints:
      - mountpoint: /
        partition: root
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
        flags: [ boot ]

  - action: filesystem-deploy
    description: Deploying filesystem onto image
{{end}}

debos also provides some other actions that haven't been covered in the example above:

  • download allows downloading a single file from the internet
  • raw can directly write a file to the output image at a given offset
  • unpack can be used to unpack files from an archive into the filesystem
  • ostree-commit creates an OSTree commit from a rootfs
  • ostree-deploy deploys an OSTree branch to the image

The example in this blog post is simple and short on purpose. Combining the actions presented above, you could also include a kernel and install a bootloader to make a bootable image. Upstream is planning to add more examples soon to the debos recipes repository.

debos is a project from Sjoerd Simons at Collabora; it's still missing some features, but it's actively being developed and there are big plans for the future!

03 July, 2018 05:00PM by Ana

hackergotchi for Daniel Silverstone

Daniel Silverstone

Docker Compose

I glanced back over my shoulder to see the Director approaching. Zhe stood next to me, watched me intently for a few moments, before turning and looking out at the scape. The water was preternaturally calm, above it only clear blue. A number of dark, almost formless, shapes were slowly moving back and forth beneath the surface.

"Is everything in readiness?" zhe queried, sounding both impatient and resigned at the same time. "And will it work?" zhe added. My predecessor, and zir predecessor before zem, had attempted to reach the same goal now set for myself.

"I believe so" I responded, sounding perhaps slightly more confident than I felt. "All the preparations have been made, everything is in accordance with what has been written". The director nodded, zir face pinched, with worry writ across it.

I closed my eyes, took a deep breath, opened them, raised my hand and focussed on the scape, until it seemed to me that my hand was almost floating on the water. With all of my strength of will I formed the incantation, repeating it over and over in my mind until I was sure that I was ready. I released it into the scape and dropped my arm.

The water began to churn, the blue above darkening rapidly, becoming streaked with grey. The shapes beneath the water picked up speed and started to grow, before resolving to what appeared to be stylised Earth whales. Huge arcs of electricity speared the water, a screaming, crashing, wall of sound rolled over us as we watched, a foundation rose up from the depths on the backs of the whale-like shapes wherever the lightning struck.

Chunks of goodness-knows-what rained down from the grey streaked morass, thumping into place seamlessly onto the foundations, slowly building what I had envisioned. I started to allow myself to feel hope, things were going well, each tower of the final solution was taking form, becoming the slick and clean visions of function which I had painstakingly selected from among the masses of clamoring options.

Now and then, the whale-like shapes would surface momentarily near one of the towers, stringing connections like bunting across the water, until the final design was achieved. My shoulders tightened and I raised my hand once more. As I did so, the waters settled, the grey bled out from the blue, and the scape became calm and the towers shone, each in its place, each looking exactly as it should.

Chanting the second incantation under my breath, over and over, until it seemed seared into my very bones, I released it into the scape and watched it flow over the towers, each one ringing out as the command reached it, until all the towers sang, producing a resonant and consonant chord which rose of its own accord, seeming to summon creatures from the very waters in which the towers stood.

The creatures approached the towers, reached up as one, touched the doors, and screamed in horror as their arms caught aflame. In moments each and every creature was reduced to ashes, somehow fundamentally unable to make use of the incredible forms I had wrought. The Director sighed heavily, turned, and made to leave. The towers I had sweated over the design of for months stood proud, beautiful, worthless.

I also turned, made my way out of the realisation suite, and with a sigh hit the scape-purge button on the outer wall. It was over. The grand design was flawed. Nothing I created in this manner would be likely to work in the scape and so the most important moment of my life was lost to ruin, just as my predecessor, and zir predecessor before zem.

Returning to my chambers, I snatched up the book from my workbench. The whale-like creature winked at me from the cover, grinning, as though it knew what I had tried to do and relished my failure. I cast it into the waste chute and went back to my drafting table to design new towers, towers which might be compatible with the creatures which were needed to inhabit them and breathe life into their very structure, towers which would involve no grinning whales.

03 July, 2018 03:13PM by Daniel Silverstone

hackergotchi for Julien Danjou

Julien Danjou

How I stopped merging broken code


It's been a while since I moved all my projects to GitHub. It's convenient to host Git projects, and the collaboration workflow is smooth.

I love pull requests to merge code. I review them, I send them, I merge them. The fact that you can plug them into a continuous integration system is great and makes sure that you don't merge code that will break your software. I usually have Travis-CI setup and running my unit tests and code style check.

The problem with the GitHub workflow is that it allows merging untested code.

What?

Yes, it does. If you think that your pull requests, all green decorated, are ready to be merged, you're wrong.

This might not be as good as you think

You see, pull requests on GitHub are marked as valid as soon as the continuous integration system passes and indicates that everything is valid. However, if the target branch (let's say, master) is updated while the pull request is open, nothing forces that pull request to be retested against the new master branch. You might think that the code in this pull request works while that may have become false.

Master moved; the pull request is not up to date, though it's still marked as passing integration.

So it might be that what went into your master branch now breaks this not-yet-merged pull request. You've no clue. You'll trust GitHub and press that green merge button, and you'll break your software. For whatever reason, it's possible that the test will break.

If the pull request has not been updated with the latest version of its target branch, it might break your integration.
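
Without automation, the only way to be sure is to refresh the pull request with its target branch yourself and let CI run again before merging; a manual sketch (branch names are illustrative):

git fetch origin
git checkout my-feature
git merge origin/master      # or rebase, depending on your workflow
git push                     # CI now runs against current master plus the PR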

The good news is that's something that's solvable with the strict workflow that Mergify provides. There's a nice explanation and example in Mergify's blog post You are merging untested code that I advise you to read. What Mergify provides here is a way to serialize the merge of pull requests while making sure that they are always updated with the latest version of their target branch. It makes sure that there's no way to merge broken code.

That's a workflow I've now adopted and automated on all my repositories, and we've been using such a workflow for Gnocchi for more than a year, with great success. Once you start using it, it becomes impossible to go back!

03 July, 2018 07:22AM by Julien Danjou

Athos Ribeiro

Towards Debian Unstable builds on Debian Stable OBS

This is the sixth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian Unstable builds on OBS

Lately, I have been working towards triggering Debian Unstable builds with Debian OBS packages. As reported before, we can already build packages for both Debian 8 and 9, based on the example project configurations shipped with the package in Debian Stable and on the project configuration files publicly available on the SUSE OBS instance.

While trying to build packages against Debian Unstable, I have been hitting the following issue:

The OBS scheduler reads the project configuration and starts downloading dependencies. The dependencies get downloaded, but the build is never dispatched (the package stays in a "blocked" state). The downloaded dependencies then get cleaned up and the scheduler starts the downloads again. OBS is stuck in an infinite loop.

This only happens for builds on sid (unstable) and buster (testing).

We realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support Debian source packages built with dpkg >= 1.19. At first I started applying this patch to the OBS Debian package, but after reporting the issue to the Debian OBS maintainers, they pointed me to the obs-build package in the Debian stable backports repository, which includes the mentioned patch.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we still get the same issue with Unstable and Testing builds: the scheduler downloads the dependencies and hangs for a while, but the build is never dispatched (the package stays in a "blocked" state). After a while, the dependencies get cleaned up and the scheduler starts the downloads again.

The bug has been quite hard to debug, since the OBS logs do not provide feedback on the problem we have been facing. To debug it, we tried to trigger local builds with osc. First, I (successfully) triggered a few local builds against Debian 8 and 9 to make sure the command would work. Then we proceeded to trigger builds against Debian Unstable.

The first issue we faced was that the osc package in Debian stable cannot handle builds against source packages built with newer dpkg versions. We fixed that by patching osc/util/debquery.py (we simply substituted the file with the latest version from osc's development branch). After applying the patch, we got the same results we'd get when trying to build the package remotely, but with the debug flags on we could get a better understanding of the problem:

BuildService API error: CPIO archive is incomplete (see .errors file)

The .errors file would just contain a list of dependencies which were missing in the CPIO archive.

If we kept retrying, OBS would keep caching more and more dependencies, until the build succeeded at some point.

We now know that the issue lies with the Download on Demand feature.

We then tried a local build in a fresh OBS instance (no cached packages) using the --disable-cpio-bulk-download osc build option, which makes OBS download each dependency individually instead of in bulk. To our surprise, the build succeeded on the first attempt.
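
For illustration, such a local build might be invoked along the following lines; the repository name, architecture and .dsc file below are placeholders, not the actual values used in the project:

import subprocess

# Hypothetical local build: fetch each dependency individually rather than
# as a single bulk CPIO archive. Repository, architecture and the .dsc path
# are placeholders.
subprocess.run(
    [
        "osc", "build",
        "--disable-cpio-bulk-download",
        "Debian_Unstable",     # repository (placeholder)
        "x86_64",              # build architecture (placeholder)
        "./hello_1.0-1.dsc",   # source package to build (placeholder)
    ],
    check=True,
)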

Finally, we traced the issue all the way down to the OBS API call which is triggered when OBS needs to download missing dependencies. For some reason, the number of parameters (the number of dependencies to be downloaded) affects the final response of the API call. When trying to download too many packages, the CPIO archive is not built correctly and OBS builds fail.

At the moment, we are still investigating why such calls fail with too many parameters, and why they only fail for the Debian Testing and Unstable repositories.

Next steps (A TODO list to keep on the radar)

  • Fix OBS builds on Debian Testing and Unstable
  • Write patch for Debian osc’s debquery.py so it can build Debian packages with control.tar.xz
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

03 July, 2018 12:32AM

July 02, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #166

Here’s what happened in the Reproducible Builds effort between Sunday June 24 and Saturday June 30 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope versions 97 and 98 were uploaded to Debian unstable by Chris Lamb. They included contributions already covered in previous weeks as well as new ones from:

Chris Lamb also updated the SSL certificate for try.diffoscope.org.

Authors

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

02 July, 2018 09:16PM

Sune Vuorela

80bit x87 FPU

Once again, I got surprised by the 80 bit x87 FPU stuff.

First time was around a decade ago. Back then, it was something along the lines of a sort function like:

struct ValueSorter
{
    bool operator() (const Value& first, const Value& second) const
    {
        // Both products can be computed in 80-bit x87 registers; whichever
        // value gets spilled to memory is truncated to a 64-bit double, so
        // the comparison is not guaranteed to be consistent between calls.
        double valueFirst = first.amount() * first.value();
        double valueSecond = second.amount() * second.value();
        return valueFirst < valueSecond;
    }
};

With some values, first would be smaller than second, and second smaller than first, all depending on which one got truncated to 64 bits and which one came directly from the 80-bit FPU.

This time, the 80-bit version, when cast to an integer, was 1 smaller than the 64-bit version.

Oh. The joys of x86 CPUs.

02 July, 2018 08:24PM by Sune Vuorela

Steve Kemp

Another golang port, this time a toy virtual machine.

I don't understand why my toy virtual machine has as much interest as it does. It is a simple project that compiles "assembly language" into a series of bytecodes, and then allows them to be executed.

Since I recently started messing around with interpreters more generally I figured I should revisit it. Initially I rewrote the part that interprets the bytecodes in golang, which was simple, but now I've rewritten the compiler too.

Much like the previous work with interpreters this uses a lexer and an evaluator to handle things properly - in the original implementation the compiler used a series of regular expressions to parse the input files. Oops.

Anyway the end result is that I can compile a source file to bytecodes, execute bytecodes, or do both at once:

I made a couple of minor tweaks in the port, because I wanted extra facilities. Rather than implement an opcode "STRING_LENGTH", I copied the idea of traps - so a program can call back into the interpreter to run some operations:

int 0x00  -> Set register 0 with the length of the string in register 0.

int 0x01  -> Set register 0 with the contents of reading a string from the user

etc.
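
As an illustration of the idea only (the real implementation is in golang, and its opcode names and register layout may differ), a trap table inside an interpreter loop could look roughly like this Python sketch:

class ToyVM:
    def __init__(self):
        self.registers = [None] * 16
        # Trap table: complex operations live in the interpreter, not as
        # dedicated opcodes.
        self.traps = {
            0x00: self.trap_strlen,    # register 0 = length of string in register 0
            0x01: self.trap_readline,  # register 0 = string read from the user
        }

    def trap_strlen(self):
        self.registers[0] = len(self.registers[0])

    def trap_readline(self):
        self.registers[0] = input("> ")

    def run(self, program):
        for op, arg in program:
            if op == "store":          # load a literal into register 0
                self.registers[0] = arg
            elif op == "int":          # dispatch to the interpreter-side trap
                self.traps[arg]()
            elif op == "print":
                print(self.registers[0])

ToyVM().run([("store", "hello world"), ("int", 0x00), ("print", None)])  # prints 11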

This notion of traps should allow complex operations to be implemented easily, in golang. I don't think I have the patience to do too much more, but it stands as a simple example of a "compiler" or an interpreter.

I think this program is the longest I've written. Remember how verbose assembly language is?

Otherwise: Helsinki Pride happened, Oiva learned to say his name (maybe?), and I finished reading all the James Bond novels (which were very different to the films, and have aged badly on the whole).

02 July, 2018 09:01AM

Steinar H. Gunderson

Modern OpenGL

New project, new version of OpenGL—4.5 will be my hard minimum this time. Sorry, macOS, you brought this on yourself.

First impressions: Direct state access makes things a bit less soul-sucking. Immutable textures are not really a problem when you design for them to begin with, as opposed to retrofitting them. But you still need ~150 lines of code to compile a shader and render a fullscreen quad to another texture. :-/ VAOs, you are not my friend.

Next time, maybe Vulkan? Except the amount of stuff to get that first quad on screen seems even worse there.

02 July, 2018 08:24AM

July 01, 2018

Dirk Eddelbuettel

nanotime 0.2.1

A new minor version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release brings three different enhancements / fixes that robustify usage. No new features were added.

Changes in version 0.2.1 (2018-07-01)

  • Added attribute-preserving comparison (Leonardo in #33).

  • Added two integer64 casts in constructors (Dirk in #36).

  • Added two checks for empty arguments (Dirk in #37).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

01 July, 2018 10:45PM

Junichi Uekawa

It's been 10 years since I changed Company and Job.

It's been 10 years since I changed company and job. If you ask me now, I think it was a successful move, but not without issues. Changing company, job, and location at the same time is a high-risk move; you should change only one of them. I changed job, company, and marital status at the same time, which was too high risk.

01 July, 2018 10:15AM by Junichi Uekawa

Paul Wise

FLOSS Activities June 2018

Changes

Issues

Review

Administration

  • fossjobs: merge pull requests
  • Debian: LDAP support request
  • Debian mentors: fix disk space issue
  • Debian wiki: clean up temp files, whitelist domains, whitelist email addresses, unblacklist IP addresses, disable accounts with bouncing email

Communication

Sponsors

The apt-cacher-ng bugs, leptonlib backport and git-repair feature request were sponsored by my employer. All other work was done on a volunteer basis.

01 July, 2018 01:37AM

June 30, 2018

Elana Hashman

Report on the Debian Bug Squashing Party

Last weekend, six folks (one new contributor and five existing contributors) attended the bug squashing party I hosted in Brooklyn. We got a lot done, and attendees demanded that we hold another one soon!

So when's the next one?

We agreed that we'd like to see the NYC Debian User Group hold two more BSPs in the next year: one in October or November of 2018, and another after the Buster freeze in early 2019, to squash RC bugs. Stay tuned for more details; you may want to join the NYC Debian User Group mailing list.

If you know of an organization that would be willing to sponsor the next NYC BSP (with space, food, and/or dollars), or you're willing to host the next BSP, please reach out.

What did folks work on?

We had a list of bugs we collected on an etherpad, which I have now mirrored to gobby (gobby.debian.org/BSP/2018/06-Brooklyn). Attendees updated the etherpad with their comments and progress. Here's what each of the participants reported.

Elana (me)

  • I filed bugs against two upstream repos (pomegranate, leiningen) to update dependencies, which are breaking the Debian builds due to the libwagon upgrade.
  • I uploaded new versions of clojure and clojure1.8 to fix a bug in how alternatives were managed: despite /usr/share/java/clojure.jar being owned by the libclojure${ver}-java binary package, the alternatives group was being managed by the postinst script in the clojure${ver} package, a holdover from when the library was not split from the CLI. Unfortunately, the upgrade is still not passing piuparts in testing, so while I thoroughly tested local upgrades and they seemed to work okay, I'm hoping the failures didn't break anyone on upgrade. I'll be taking a look at further refining the preinst script to address the piuparts failure this week.
  • I fixed the Vcs error on libjava-jdbc-clojure and uploaded new version 0.7.0-2. I also added autopkgtests to the package.
  • I fixed the Vcs warning on clojure-maven-plugin and uploaded new version 1.7.1-2.
  • I helped dkg with setting up and running autopkgtests.

Clint

  • was hateful in #901327 (tigervnc)
  • uploaded a fix for #902279 (youtube-dl) to DELAYED/3-day
  • sent a patch for #902318 (monkeysphere)
  • was less hateful in #899060 (monkeysphere)

Lincoln

Editor's note: Lincoln was a new Debian contributor and made lots of progress! Thanks for joining us—we're thrilled with your contributions 🎉 Here's his report.

  • Got my environment setup to work on packages again \o/
  • Played with 901318 for a while, but didn't really make progress because it seems to involve a long discussion, so it's a bad fit for a newcomer
  • Was indeed more successful in Python land. Opened the following PRs
  • Now working on uploading the patches to python-pyscss & python-pytest-benchmark packages removing the dependency on python-pathlib.

dkg

  • worked on debugging/diagnosing enigmail in preparation for making it DFSG-free again (see #901556)
  • got pointers from Lincoln about understanding flow control in asynchronous javascript for debugging the failing autopkgtest suites
  • got pointers from Elana on emulating ci.debian.net's autopkgtest infrastructure so i have a better chance of replicating the failures seen on that platform

Simon

  • Moved python-requests-oauthlib to salsa
  • Updated it to 1.0 (new release), pending a couple final checks.

Geoffrey

  • Worked with Lincoln on both bugs
  • Opened #902323 about removing python-pathlib
  • Working on new pymssql upstream release / restoring it to unstable

By the numbers

All in all, we completed 6 uploads, worked on 8 bugs, filed 3 bugs, submitted 3 patches or pull requests, and closed 2 bugs. Go us! Thanks to everyone for contributing to a very productive effort.

See you all next time.

30 June, 2018 07:30PM by Elana Hashman

Chris Lamb

Free software activities in June 2018

Here is my monthly update covering what I have been doing in the free software world during June 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:



Debian

Patches contributed


Debian LTS


This month I worked 18 hours on Debian Long Term Support (LTS) and 7 hours on its sister Extended LTS project. In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.
  • A fair amount of initial setup and administration to accommodate the introduction of the new "Extended LTS" initiative, as well as the transition of LTS from supporting Debian wheezy to jessie:
    • Fixing various shared scripts, including adding pushing to the remote repository for ELAs [...] and updating hard-coded wheezy references [...]. I also added instructions on exactly how to use the kernel offered by Extended LTS [...].
    • Updating, expanding and testing my personal scripts and workflow to also work for the new "Extended" initiative.
  • Provided some help on updating the Mercurial packages. [...]
  • Began work on updating/syncing the ca-certificates packages in both LTS and Extended LTS.
  • Issued DLA 1395-1 to fix two remote code execution vulnerabilities in php-horde-image, the image processing library for the Horde <https://www.horde.org/> groupware tool. The original fix applied upstream has a regression in that it ignores the "force aspect ratio" option, which I have fixed upstream.
  • Issued ELA 9-1 to correct an arbitrary file write vulnerability in the archiver plugin for the Plexus compiler system — a specially-crafted .zip file could overwrite any file on disk, leading to a privilege escalation.
  • During the overlap time between the support of wheezy and jessie I took the opportunity to address a number of vulnerabilities in all suites for the Redis key-value database, including CVE-2018-12326, CVE-2018-11218 & CVE-2018-11219 (via #902410 & #901495).

Uploads

  • redis:
    • 4.0.9-3 — Make /var/log/redis, etc. owned by the adm group. (#900496)
    • 4.0.10-1 — New upstream security release (#901495). I also uploaded this to stretch-backports and backported the packages to stretch.
    • Proposed 3.2.6-3+deb9u2 for inclusion in the next Debian stable release to address an issue in the systemd .service file. (#901811, #850534 & #880474)
  • lastpass-cli (1.3.1-1) — New upstream release, taking over maintainership and completely overhauling the packaging. (#898940, #858991 & #842875)
  • python-django:
    • 1.11.13-2 — Fix compatibility with Python 3.7. (#902761)
    • 2.1~beta1-1 — New upstream release (to experimental).
  • installation-birthday (11) — Fix an issue in calculating the age of the system by always preferring the oldest mtime we can find. (#901005)
  • bfs (1.2.2-1) — New upstream release.
  • libfiu (0.96-4) — Apply upstream patch to make the build more robust with --as-needed. (#902363)
  • I also sponsored an upload of yaml-mode (0.0.13-1) for Nicholas Steeves.

Debian bugs filed

  • cryptsetup-initramfs: "ERROR: Couldn't find sysfs hierarchy". (#902183)
  • git-buildpackage: Assumes capable UTF-8 locale. (#901586)
  • kitty: Render and ship HTML versions of asciidoc. (#902621)
  • redis: Use the system Lua to avoid an embedded code copy. (#901669)

30 June, 2018 06:11PM

Petter Reinholdtsen

The world's only stone power plant?

So far, at least hydro-electric power, coal power, wind power, solar power, and wood power are well known. Until a few days ago, I had never heard of stone power. Then I learned about a quarry in a mountain in Bremanger in Norway, where the Bremanger Quarry company is extracting stone and dumping the stone into a shaft leading to its shipping harbour. The downward movement in this shaft is used to produce electricity. In short, it is using falling rocks instead of falling water to produce electricity, and according to its own statements it is producing more power than it is using, selling the surplus electricity to the Norwegian power grid. I find the concept truly amazing. Is this the world's only stone power plant?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

30 June, 2018 08:35AM

Dirk Eddelbuettel

RcppArmadillo 0.8.600.0.0

A new RcppArmadillo release 0.8.600.0.0, based on the new Armadillo release 8.600.0 from this week, just arrived on CRAN.

It follows our (and Conrad’s) bi-monthly release schedule. We have made interim and release candidate versions available via the GitHub repo (and as usual thoroughly tested them) but this is the real release cycle. A matching Debian release will be prepared in due course.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 479 other packages on CRAN.

A high-level summary of changes follows (which omits the two rc releases leading up to 8.600.0). Conrad did his usual impressive load of upstream changes, but we are also grateful for the RcppArmadillo fixes added by Keith O’Hara and Santiago Olivella.

Changes in RcppArmadillo version 0.8.600.0.0 (2018-06-28)

  • Upgraded to Armadillo release 8.600.0 (Sabretooth Rugrat)

    • added hess() for Hessenberg decomposition

    • added .row(), .rows(), .col(), .cols() to subcube views

    • expanded .shed_rows() and .shed_cols() to handle cubes

    • expanded .insert_rows() and .insert_cols() to handle cubes

    • expanded subcube views to allow non-contiguous access to slices

    • improved tuning of sparse matrix element access operators

    • faster handling of tridiagonal matrices by solve()

    • faster multiplication of matrices with differing element types when using OpenMP

Changes in RcppArmadillo version 0.8.500.1.1 (2018-05-17) [GH only]

  • Upgraded to Armadillo release 8.500.1 (Caffeine Raider)

    • bug fix for banded matrices
  • Added slam to Suggests: as it is used in two unit test functions [CRAN requests]

  • The RcppArmadillo.package.skeleton() function now works with example_code=FALSE when pkgKitten is present (Santiago Olivella in #231 fixing #229)

  • The LAPACK tests now cover band matrix solvers (Keith O'Hara in #230).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 June, 2018 02:11AM

June 29, 2018

Gunnar Wolf

Want to set up a Tor node in Mexico? Hardware available

Hi friends,

Thanks to the work I have been carrying out with the "Derechos Digitales" NGO, I have received ten Raspberry Pi 3B computers, to help the growth of Tor nodes in Latin America.

The nodes can be intermediate (relays) or exit nodes. Most of us will only be able to connect relays, but if you have the possibility to set up an exit node, that's better than good!

Both can be set up on any non-filtered Internet connection that provides a publicly reachable IP address. I have to note that, although we haven't done a full ISP survey in Mexico (and it would be a very important thing to do — if you are interested in helping with that, please contact me!), I can tell you that connections via Telmex (be it via their home service, Infinitum, or their corporate brand, Uninet) are not good, because the ISP filters most of the Tor Directory Authorities.

What do you need to do? Basically, mail me (gwolf@gwolf.org) sending a copy to Ignacio (ignacio@derechosdigitales.org), the person working at this NGO who managed to send me said computers. Oh, of course - And you have to be (physically) in Mexico.

I have ten computers ready to give out to whoever wants some. I am willing and even interested in giving you the needed tech support to do this. Who says "me"?

29 June, 2018 10:58PM by gwolf

Neil Williams

Automation & Risk

First of two posts reproducing some existing content for a wider audience due to delays in removing viewing restrictions on the originals. The first is a bit long... Those familiar with LAVA may choose to skip forward to Core elements of automation support.

A summary of this document was presented by Steve McIntyre at Linaro Connect 2018 in Hong Kong. A video of that presentation and the slides created from this document are available online: http://connect.linaro.org/resource/hkg18/hkg18-tr10/

Although the content is based on several years of experience with LAVA, the core elements are likely to be transferable to many other validation, CI and QA tasks.

I recognise that this document may be useful to others, so this blog post is under CC BY-SA 3.0: https://creativecommons.org/licenses/by-sa/3.0/legalcode See also https://creativecommons.org/licenses/by-sa/3.0/deed.en

Automation & Risk

Background

Linaro created the LAVA (Linaro Automated Validation Architecture) project in 2010 to automate testing of software using real hardware. Over the seven years of automation in Linaro so far, LAVA has also spread into other labs across the world. Millions of test jobs have been run, across over one hundred different types of devices, ARM, x86 and emulated. Varied primary boot methods have been used alone or in combination, including U-Boot, UEFI, Fastboot, IoT, PXE. The Linaro lab itself has supported over 150 devices, covering more than 40 different device types. Major developments within LAVA include MultiNode and VLAN support. As a result of this data, the LAVA team have identified a series of automated testing failures which can be traced to decisions made during hardware design or firmware development. The hardest part of the development of LAVA has always been integrating new device types, arising from issues with hardware design and firmware implementations. There are a range of issues with automating new hardware and the experience of the LAVA lab and software teams has highlighted areas where decisions at the hardware design stage have delayed deployment of automation or made the task of triage of automation failures much harder than necessary.

This document is a summary of our experience with full background and examples. The aim is to provide background information about why common failures occur, and recommendations on how to design hardware and firmware to reduce problems in the future. We describe some device design features as hard requirements to enable successful automation, and some which are guaranteed to block automation. Specific examples are used, naming particular devices and companies and linking to specific stories. For a generic summary of the data, see Automation and hardware design.

What is LAVA?

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA is a collection of participating components in an evolving architecture. LAVA aims to make systematic, automatic and manual quality control more approachable for projects of all sizes.

LAVA is designed for validation during development - testing whether the code that engineers are producing “works”, in whatever sense that means. Depending on context, this could be many things, for example:

  • testing whether changes in the Linux kernel compile and boot
  • testing whether the code produced by gcc is smaller or faster
  • testing whether a kernel scheduler change reduces power consumption for a certain workload etc.

LAVA is good for automated validation. LAVA tests the Linux kernel on a range of supported boards every day. LAVA tests proposed Android changes in Gerrit before they are landed, and does the same for other projects like gcc. Linaro runs a central validation lab in Cambridge, containing racks full of computers supplied by Linaro members and the necessary infrastructure to control them (servers, serial console servers, network switches etc.)

LAVA is good for providing developers with the ability to run customised tests on a variety of different types of hardware, some of which may be difficult to obtain or integrate. Although LAVA has support for emulation (based on QEMU), LAVA is best at providing test support for real hardware devices.

LAVA is principally aimed at testing changes made by developers across multiple hardware platforms to aid portability and encourage multi-platform development. Systems which are already platform independent or which have been optimised for production may not necessarily be able to be tested in LAVA or may provide no overall gain.

What is LAVA not?

LAVA is designed for Continuous Integration, not management of a board farm.

LAVA is not a set of tests - it is infrastructure to enable users to run their own tests. LAVA concentrates on providing a range of deployment methods and a range of boot methods. Once the login is complete, the test consists of whatever scripts the test writer chooses to execute in that environment.

LAVA is not a test lab - it is the software that can be used in a test lab to control test devices.

LAVA is not a complete CI system - it is software that can form part of a CI loop. LAVA supports data extraction to make it easier to produce a frontend which is directly relevant to particular groups of developers.

LAVA is not a build farm - other tools need to be used to prepare binaries which can be passed to the device using LAVA.

LAVA is not a production test environment for hardware - LAVA is focused on developers and may require changes to the device or the software to enable automation. These changes are often unsuitable for production units. LAVA also expects that most devices will remain available for repeated testing rather than testing the software with a changing set of hardware.

The history of automated bootloader testing

Many attempts have been made to automate bootloader testing and the rest of this document covers the issues in detail. However, it is useful to cover some of the history in this introduction, particularly as that relates to ideas like SDMux - the SD card multiplexer which should allow automated testing of bootloaders like U-Boot on devices where the bootloader is deployed to an SD card. The problem of SDMux details the requirements to provide access to SD card filesystems to and from the dispatcher and the device. Requirements include: ethernet, no reliance on USB, removable media, cable connections, unique serial numbers, introspection and interrogation, avoiding feature creep, scalable design, power control, maintained software and mounting holes. Despite many offers of hardware, no suitable hardware has been found and testing of U-Boot on SD cards is not currently possible in automation. The identification of the requirements for a supportable SDMux unit is closely related to these device requirements.

Core elements of automation support

Reproducibility

The ability to deploy exactly the same software to the same board(s) and run exactly the same tests many times in a row, getting exactly the same results each time.

For automation to work, all device functions which need to be used in automation must always produce the same results on each device of a specific device type, irrespective of any previous operations on that device, given the same starting hardware configuration.

There is no way to automate a device which behaves unpredictably.

Reliability

The ability to run a wide range of test jobs, stressing different parts of the overall deployment, with a variety of tests and always getting a Complete test job. There must be no infrastructure failures and there should be limited variability in the time taken to run the test jobs to avoid the need for excessive Timeouts.

The same hardware configuration and infrastructure must always behave in precisely the same way. The same commands and operations to the device must always generate the same behaviour.

Scriptability

The device must support deployment of files and booting of the device without any need for a human to monitor or interact with the process. The need to press buttons is undesirable but can be managed in some cases by using relays. However, every extra layer of complexity reduces the overall reliability of the automation process, and the need for buttons should be limited or eliminated wherever possible. If a device uses LEDs to indicate the success or failure of operations, such LEDs must only be indicative. The device must support full control of that process using only commands and operations which do not rely on observation.

Scalability

All methods used to automate a device must have minimal footprint in terms of load on the workers, complexity of scripting support and infrastructure requirements. This is a complex area and can trivially impact on both reliability and reproducibility as well as making it much more difficult to debug problems which do arise. Admins must also consider the complexity of combining multiple different devices which each require multiple layers of support.

Remote power control

Devices MUST support automated resets either by the removal of all power supplied to the DUT or a full reboot or other reset which clears all previous state of the DUT.

Every boot must reliably start, without interaction, directly from the first application of power without the limitation of needing to press buttons or requiring other interaction. Relays and other arrangements can be used at the cost of increasing the overall complexity of the solution, so should be avoided wherever possible.

Networking support

Ethernet - all devices using ethernet interfaces in LAVA must have a unique MAC address on each interface. The MAC address must be persistent across reboots. No assumptions should be made about fixed IP addresses, address ranges or pre-defined routes. If more than one interface is available, the boot process must be configurable to always use the same interface every time the device is booted. WiFi is not currently supported as a method of deploying files to devices.
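
As an illustration of what "persistent and unique" means in practice, a lab admin could run a check along these lines against an inventory of recorded MAC addresses. This is a hypothetical helper sketched for this post, not part of LAVA:

def check_macs(boot1, boot2):
    """boot1, boot2: dicts mapping device name to the MAC address observed
    on its ethernet interface on two consecutive boots."""
    problems = []
    # Persistence: the same device must report the same MAC on every boot.
    for device, mac in boot1.items():
        if boot2.get(device) != mac:
            problems.append("%s: MAC changed across reboots (%s -> %s)"
                            % (device, mac, boot2.get(device)))
    # Uniqueness: no two devices in the lab may share a MAC.
    seen = {}
    for device, mac in boot1.items():
        if mac in seen:
            problems.append("%s and %s share MAC %s" % (seen[mac], device, mac))
        else:
            seen[mac] = device
    return problems

# Example: board-02 both duplicates board-01's MAC and randomizes it on reboot.
print(check_macs(
    {"board-01": "02:42:ac:11:00:02", "board-02": "02:42:ac:11:00:02"},
    {"board-01": "02:42:ac:11:00:02", "board-02": "0a:1b:2c:3d:4e:5f"},
))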

Serial console support

LAVA expects to automate devices by interacting with the serial port immediately after power is applied to the device. The bootloader must interact with the serial port. If a serial port is not available on the device, suitable additional hardware must be provided before integration can begin. All messages about the boot process must be visible using the serial port and the serial port should remain usable for the duration of all test jobs on the device.

Persistence

Devices supporting primary SSH connections have persistent deployments and this has implications, some positive, some negative - depending on your use case.

  • Fixed OS - the operating system (OS) you get is the OS of the device and this must not be changed or upgraded.
  • Package interference - if another user installs a conflicting package, your test can fail.
  • Process interference - another process could restart (or crash) a daemon upon which your test relies, so your test will fail.
  • Contention - another job could obtain a lock on a constrained resource, e.g. dpkg or apt, causing your test to fail.
  • Reusable scripts - scripts and utilities your test leaves behind can be reused (or can interfere) with subsequent tests.
  • Lack of reproducibility - an artifact from a previous test can make it impossible to rely on the results of a subsequent test, leading to wasted effort with false positives and false negatives.
  • Maintenance - using persistent filesystems in a test action results in the overlay files being left in that filesystem. Depending on the size of the test definition repositories, this could result in an inevitable increase in used storage becoming a problem on the machine hosting the persistent location. Changes made by the test action can also require intermittent maintenance of the persistent location.

Only use persistent deployments when essential and always take great care to avoid interfering with other tests. Users who deliberately or frequently interfere with other tests can have their submit privilege revoked.

The dangers of simplistic testing

Connect and test

Seems simple enough - it doesn’t seem as if you need to deploy a new kernel or rootfs every time, no need to power off or reboot between tests. Just connect and run stuff. After all, you already have a way to manually deploy stuff to the board. The biggest problem with this method is Persistence as above - LAVA keeps the LAVA components separated from each other but tests frequently need to install support which will persist after the test, write files which can interfere with other tests or break the manual deployment in unexpected ways when things go wrong. The second problem within this fallacy is simply the power drain of leaving the devices constantly powered on. In manual testing, you would apply power at the start of your day and power off at the end. In automated testing, these devices would be on all day, every day, because test jobs could be submitted at any time.

ssh instead of serial

This is an over-simplification which will lead to new and unusual bugs and is only a short step on from connect & test with many of the same problems. A core strength of LAVA is demonstrating differences between types of devices by controlling the boot process. By the time the system has booted to the point where sshd is running, many of those differences have been swallowed up in the boot process.

Test everything at the same time

Issues here include:

Breaking the basic scientific method of test one thing at a time

The single system contains multiple components, like the kernel and the rootfs and the bootloader. Each one of those components can fail in ways which can only be picked up when some later component produces a completely misleading and unexpected error message.

Timing

Simply deploying the entire system for every single test job wastes inordinate amounts of time when you do finally identify that the problem is a configuration setting in the bootloader or a missing module for the kernel.

Reproducibility

The larger the deployment, the more complex the boot and the tests become. Many LAVA devices are prototypes and development boards, not production servers. These devices will fail in unpredictable places from time to time. Testing a kernel build multiple times is much more likely to give you consistent averages for duration, performance and other measurements than if the kernel is only tested as part of a complete system.

Automated recovery

Deploying an entire system can go wrong; whether an interrupted copy or a broken build, the consequences can mean that the device simply does not boot any longer.

Every component involved in your test must allow for automated recovery

This means that the boot process must support being interrupted before that component starts to load. With a suitably configured bootloader, it is straightforward to test kernel builds with fully automated recovery on most devices. Deploying a new build of the bootloader itself is much more problematic. Few devices have the necessary management interfaces with support for secondary console access or additional network interfaces which respond very early in boot. It is possible to chainload some bootloaders, allowing the known working bootloader to be preserved.

I already have builds

This may be true, however, automation puts extra demands on what those builds are capable of supporting. When testing manually, there are any number of times when a human will decide that something needs to be entered, tweaked, modified, removed or ignored which the automated system needs to be able to understand. Examples include /etc/resolv.conf and customised tools.

Automation can do everything

It is not possible to automate every test method. Some kinds of tests and some kinds of devices lack critical elements that do not work well with automation. These are not problems in LAVA, these are design limitations of the kind of test and the device itself. Your preferred test plan may be infeasible to automate and some level of compromise will be required.

Users are all admins too

This will come back to bite! However, there are other ways in which this can occur even after administrators have restricted users to limited access. Test jobs (including hacking sessions) have full access to the device as root. Users, therefore, can modify the device during a test job and it depends on the device hardware support and device configuration as to what may happen next. Some devices store bootloader configuration in files which are accessible from userspace after boot. Some devices lack a management interface that can intervene when a device fails to boot. Put these two together and admins can face a situation where a test job has corrupted, overridden or modified the bootloader configuration such that the device no longer boots without intervention. Some operating systems require a debug setting to be enabled before the device will be visible to the automation (e.g. the Android Debug Bridge). It is trivial for a user to mistakenly deploy a default or production system which does not have this modification.

LAVA and CI

LAVA is aimed at kernel and system development and testing across a wide variety of hardware platforms. By the time the test has got to the level of automating a GUI, there have been multiple layers of abstraction between the hardware, the kernel, the core system and the components being tested. Following the core principle of testing one element at a time, this means that such tests quickly become platform-independent. This reduces the usefulness of the LAVA systems, moving the test into scope for other CI systems which consider all devices as equivalent slaves. The overhead of LAVA can become an unnecessary burden.

CI needs a timely response - it takes time for a LAVA device to be re-deployed with a system which has already been tested. In order to test a component of the system which is independent of the hardware, kernel or core system a lot of time has been consumed before the “test” itself actually begins. LAVA can support testing pre-deployed systems but this severely restricts the usefulness of such devices for actual kernel or hardware testing.

Automation may need to rely on insecure access. Production builds (hardware and software) take steps to prevent systems being released with known login identities or keys, backdoors and other security holes. Automation relies on at least one of these access methods being exposed, typically a way to access the device as the root or admin user. User identities for login must be declared in the submission and be the same across multiple devices of the same type. These access methods must also be exposed consistently and without requiring any manual intervention or confirmation. For example, mobile devices must be deployed with systems which enable debug access which all production builds will need to block.

Automation relies on remote power control - battery-powered devices can be a significant problem in this area. On the one hand, testing can be expected to involve tests of battery performance, low power conditions and recharge support. However, testing will also involve broken builds and failed deployments where the only recourse is to hard reset the device by killing power. With a battery in the loop, this becomes very complex, sometimes involving complex electrical bodges to the hardware to allow the battery to be switched out of the circuit. These changes can themselves change the performance of the battery control circuitry. For example, some devices fail to maintain charge in the battery when held in particular states artificially, so the battery gradually discharges despite being connected to mains power. Devices which have no battery can still be a challenge as some are able to draw power over the serial circuitry or USB attachments, again interfering with the ability of the automation to recover the device from being "bricked", i.e. unresponsive to the control methods used by the automation and requiring manual admin intervention.

Automation relies on unique identification - all devices in an automation lab must be uniquely identifiable at all times, in all modes and all active power states. Too many components and devices within labs fail to allow for the problems of scale. Details like serial numbers, MAC addresses, IP addresses and bootloader timeouts must be configurable and persistent once configured.

LAVA is not a complete CI solution - even including the hardware support available from some LAVA instances, there are a lot more tools required outside of LAVA before a CI loop will actually work. The triggers from your development workflow to the build farm (which is not LAVA), the submission to LAVA from that build farm are completely separate and outside the scope of this documentation. LAVA can help with the extraction of the results into information for the developers but LAVA output is generic and most teams will benefit from some “frontend” which extracts the data from LAVA and generates relevant output for particular development teams.

Features of CI

Frequency

How often is the loop to be triggered?

Set up some test builds and test jobs and run through a variety of use cases to get an idea of how long it takes to get from the commit hook to the results being available to what will become your frontend.

Investigate where the hardware involved in each stage can be improved and analyse what kind of hardware upgrades may be useful.

Reassess the entire loop design and look at splitting the testing if the loop cannot be optimised to the time limits required by the team. The loop exists to serve the team but the expectations of the team may need to be managed compared to the cost of hardware upgrades or finite time limits.

Scale

How many branches, variants, configurations and tests are actually needed?

Scale has a direct impact on the affordability and feasibility of the final loop and frontend. Ensure that the build infrastructure can handle the total number of variants, not just at build time but for storage. Developers will need access to the files which demonstrate a particular bug or regression.

Scale also provides benefits of being able to ignore anomalies.

Identify how many test devices, LAVA instances and Jenkins slaves are needed. (As a hint, start small and design the frontend so that more can be added later.)

Interface

The development of a custom interface is not a small task

Capturing the requirements for the interface may involve lengthy discussions across the development team. Where there are irreconcilable differences, a second frontend may become necessary, potentially pulling the same data and presenting it in a radically different manner.

Include discussions on how or whether to push notifications to the development team. Take time to consider the frequency of notification messages and how to limit the content to only the essential data.

Bisect support can flow naturally from the design of the loop if the loop is carefully designed. Bisect requires that a simple boolean test can be generated, built and executed across a set of commits. If the frontend implements only a single test (for example, does the kernel boot?) then it can be easy to identify how to provide bisect support. Tests which produce hundreds of results need to be slimmed down to a single pass/fail criterion for the bisect to work.
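
As a sketch of what that single boolean can look like in practice, a test script driven by git bisect run can reduce the whole loop to an exit code. The build and submit commands below are placeholders for whatever your build farm and test frontend actually use:

import subprocess
import sys

def sh(cmd):
    return subprocess.run(cmd, shell=True).returncode

# Placeholder build step: exit code 125 tells "git bisect run" to skip
# commits which do not even build.
if sh("make -j8 bzImage") != 0:
    sys.exit(125)

# Placeholder test step: submit the boot test and wait for the verdict.
booted = sh("./submit-boot-test-and-wait.sh") == 0

# The only thing bisect sees: 0 = good commit, 1 = bad commit.
sys.exit(0 if booted else 1)

It would then be invoked as, for example, git bisect run python3 boot-test.py across the range of suspect commits.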

Results

This may take the longest of all elements of the final loop

Just what results do the developers actually want and can those results be delivered? There may be requirements to aggregate results across many LAVA instances, with comparisons based on metadata from the original build as well as the LAVA test.

What level of detail is relevant?

Different results for different members of the team or different teams?

Is the data to be summarised and if so, how?

Resourcing

A frontend has the potential to become complex and need long term maintenance and development

Device requirements

At the hardware design stage, there are considerations for the final software relating to how the final hardware is to be tested.

Uniqueness

All units of all devices must uniquely identify to the host machine as distinct from all other devices which may be connected at the same time. This particularly covers serial connections but also any storage devices which are exported, network devices and any other method of connectivity.

Example - the WaRP7 integration has been delayed because the USB mass storage does not export a filesystem with a unique identifier, so when two devices are connected, there is no way to distinguish which filesystem relates to which device.

All unique identifiers must be isolated from the software to be deployed onto the device. The automation framework will rely on these identifiers to distinguish one device from up to a dozen identical devices on the same machine. There must be no method of updating or modifying these identifiers using normal deployment / flashing tools. It must not be possible for test software to corrupt the identifiers which are fundamental to how the device is identified amongst the others on the same machine.

All unique identifiers must be stable across multiple reboots and test jobs. Randomly generated identifiers are never suitable.

If the device uses a single FTDI chip which offers a single UART device, then the unique serial number of that UART will typically be a permanent part of the chip. However, a similar FTDI chip which provides two or more UARTs over the same cable would not have serial numbers programmed into the chip but would require a separate piece of flash or other storage into which those serial numbers can be programmed. If that storage is not designed into the hardware, the device will not be capable of providing the required uniqueness.

Example - the WaRP7 exports two UARTs over a single cable but fails to give unique identifiers to either connection, so connecting a second device disconnects the first device when the new tty device replaces the existing one.

If the device uses one or more physical ethernet connector(s) then the MAC address for each interface must not be generated randomly at boot. Each MAC address needs to be:

  • persistent - each reboot must always use the same MAC address for each interface.
  • unique - every device of this type must use a unique MAC address for each interface.

If the device uses fastboot, then the fastboot serial number must be unique so that the device can be uniquely identified and added to the correct container. Additionally, the fastboot serial number must not be modifiable except by the admins.

Example - the initial HiKey 960 integration was delayed because the firmware changed the fastboot serial number to a random value every time the device was rebooted.

Scale

Automation requires more than one device to be deployed - the current minimum is five devices. One device is permanently assigned to the staging environment to ensure that future code changes retain the correct support. In the early stages, this device will be assigned to one of the developers to integrate the device into LAVA. The devices will be deployed onto machines which have many other devices already running test jobs. The new device must not interfere with those devices and this makes some of the device requirements stricter than may be expected.

  • The aim of automation is to create a homogeneous test platform using heterogeneous devices and scalable infrastructure.

  • Do not complicate things.

  • Avoid extra customised hardware

    Relays, hardware modifications and mezzanine boards all increase complexity

    Examples - X15 needed two relay connections, the 96boards initially needed a mezzanine board where the design was rushed, causing months of serial disconnection issues.

  • More complexity raises failure risk nonlinearly

    Example - The lack of onboard serial meant that the 96boards devices could not be tested in isolation from the problematic mezzanine board. Numerous 96boards devices were deemed to be broken when the real fault lay with intermittent failures in the mezzanine. Removing and reconnecting a mezzanine had a high risk of damaging the mezzanine or the device. Once 96boards devices moved to direct connection of FTDI cables into the connector formerly used by the mezzanine, serial disconnection problems disappeared. The more custom hardware has to be designed / connected to a device to support automation, the more difficult it is to debug issues within that infrastructure.

  • Avoid unreliable protocols and connections

    Example - WiFi is not a reliable deployment method, especially inside a large lab with lots of competing signals and devices.

  • This document is not demanding enterprise or server grade support in devices.

    However, automation cannot scale with unreliable components.

    Example - HiKey 6220 and the serial mezzanine board caused massively complex problems when scaled up in LKFT.

  • Server support typically includes automation requirements as a subset:

    RAS, performance, efficiency, scalability, reliability, connectivity and uniqueness

  • Automation racks have similar requirements to data centres.

  • Things need to work reliably at scale

Scale issues also affect the infrastructure which supports the devices as well as the required reliability of the instance as a whole. It can be difficult to scale up from initial development to automation at scale. Numerous tools and utilities prove to be uncooperative, unreliable or poorly isolated from other processes. One result can be that the requirements of automation look more like the expectations of server-type hardware than of mobile hardware. The reality at scale is that server-type hardware has already had fixes implemented for scalability issues whereas many mobile devices only get tested as standalone units.

Connectivity and deployment methods

  • All test software is presumed broken until proven otherwise
  • All infrastructure and device integration support must be proven to be stable before tests can be reliable
  • All devices must provide at least one method of replacing the current software with the test software, at a level lower than you're testing.

The simplest method to automate is TFTP over physical ethernet, e.g. U-Boot or UEFI PXE. This also puts the least load on the device and automation hardware when delivering large images.

Manually writing software to SD is not suitable for automation. This tends to rule out many proposed methods for testing modified builds or configurations of firmware in automation.

See https://linux.codehelp.co.uk/the-problem-of-sd-mux.html for more information on how the requirements of automation affect the hardware design requirements to provide access to SD card filesystems to and from the dispatcher and the device.

Some deployment methods require tools which must be constrained within an LXC. These include but are not limited to:

  • fastboot - due to a common need to have different versions installed for different hardware devices

    Example - Every fastboot device suffers from this problem - any running fastboot process will inspect the entire list of USB devices and attempt to connect to each one, locking out any other fastboot process which may be running at the time, which sees no devices at all.

  • IoT deployment - some deployment tools require patches for specific devices or use tools which are too complex for use on the dispatcher.

    Example - the TI CC3220 IoT device needs a patched build of OpenOCD, the WaRP7 needs a custom flashing tool compiled from a github repository.

Wherever possible, existing deployment methods and common tools are strongly encouraged. New tools are not likely to be as reliable as the existing tools.

Deployments must not make permanent changes to the boot sequence or configuration.

Testing of OS installers may require modifying the installer to not install an updated bootloader or modify bootloader configuration. The automation needs to control whether the next reboot boots the newly deployed system or starts the next test job, for example when a test job has been cancelled, the device needs to be immediately ready to run a different test job.

Interfaces

Automation requires driving the device over serial instead of via a touchscreen or other human interface device. This changes the way that the test is executed and can require the use of specialised software on the device to translate text based commands into graphical inputs.

It is possible to test video output in automation but it is not currently possible to drive automation through video input. This includes BIOS-type firmware interaction. UEFI can be used to automatically execute a bootloader like Grub which does support automation over serial. UEFI implementations which use graphical menus cannot be supported interactively.

Reliability

The objective is to have automation support which runs test jobs reliably. Reproducible failures are easy to fix, but intermittent faults easily consume months of engineering time and need to be designed out wherever possible. Reliable testing means only 3 or 4 test job failures per week due to hardware or infrastructure bugs across an entire test lab (or instance). This can involve thousands of test jobs across multiple devices. Some instances may have dozens of identical devices, but they still must not exceed the same failure rate.

All devices need to reach the minimum standard of reliability, or they are not fit for automation. Some of these criteria might seem rigid, but they are not exclusive to servers or enterprise devices. To be useful, mobile and IoT devices need to meet the same standards, even though the software involved and the deployment methods might be different. The reason is that the Continuous Integration strategy remains the same for all devices. The problem is the same, regardless of underlying considerations.

A developer makes a change; that change triggers a build; that build triggers a test; that test reports back to the developer whether that change worked or had unexpected side effects.

  • False positives and false negatives are expensive in terms of wasted engineering time.
  • False positives can arise when not enough of the software is fully tested, or if the testing is not rigorous enough to spot all problems.
  • False negatives arise when the test itself is unreliable, either because of the test software or the test hardware.

This becomes more noticeable when considering automated bisections which are very powerful in tracking the causes of potential bugs before the product gets released. Every test job must give a reliable result or the bisection will not reliably identify the correct change.
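
As a rough back-of-the-envelope illustration, a git bisect across N commits needs roughly log2(N) consecutive trustworthy results, so even a small per-job failure rate quickly erodes the chance of a clean bisection. The numbers below are purely illustrative:

# Illustrative only: probability that an automated bisection completes
# without a single spurious job failure, given a per-job failure rate.
import math

def bisect_success_probability(commits, per_job_failure_rate):
    steps = math.ceil(math.log2(commits))   # jobs needed for the bisection
    return (1 - per_job_failure_rate) ** steps

for rate in (0.001, 0.01, 0.05):
    p = bisect_success_probability(commits=1024, per_job_failure_rate=rate)
    print("failure rate %.1f%% -> clean bisection %.1f%% of the time"
          % (rate * 100, p * 100))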

Automation and Risk

Linaro kernel functional test framework (LKFT) https://lkft.validation.linaro.org/

We have seen with LKFT that complexity has a non-linear relationship with the reliability of any automation process. This section aims to set out some guidelines and recommendations on just what is acceptable in the tools needed to automate testing on a device. These guidelines are based on our joint lab and software team experiences with a wide variety of hardware and software.

Adding or modifying any tool has a risk of automation failure

Risk increases non-linearly with complexity. Some of this risk can be mitigated by testing the modified code and the complete system.

Dependencies installed count as code in terms of the risks of automation failure

This is a key lesson learnt from our experiences with LAVA V1. We added a remote worker method, which was necessary at the time to improve scalability, but it massively increased the risk of automation failure simply due to the extra complexity that came with the chosen design. These failures did not just show up in the test jobs which actively used the extra features and tools; they caused problems for all jobs running on the system.

The ability in LAVA V2 to use containers for isolation is a key feature

For the majority of use cases, the small increase in test runtime needed to set up and use a container is negligible. The extra reliability is more than worth the extra cost.

Persistent containers are themselves a risk to automation

Just as with any persistent change to the system.

Pre-installing dependencies in a persistent container does not necessarily lower the overall risk of failure. It merely substitutes one element of risk for another.

All code changes need to be tested

In unit tests and in functional tests. There is a dividing line: if something is installed as a dependency of LAVA, then when that something goes wrong, LAVA engineers will be pressured into fixing the code of that dependency whether or not we have any particular experience of that language, codebase or use case. Moving that code into a container moves that burden, but also makes triage of the problem much easier by allowing debug builds or options to be substituted easily.

Complexity also increases the difficulty of debugging, again in a nonlinear fashion

A LAVA dependency needs a higher bar in terms of ease of triage.

Complexity cannot be easily measured

Although there are factors which contribute.

Monoliths

Large programs which appear as a single monolith are harder to debug than the UNIX model of one utility joined with other utilities to perform a wider task. (This applies to LAVA itself as much as any one dependency - again, a lesson from V1.)

Feature creep

Continually adding features beyond the original scope makes complex programs worse. A smaller codebase will tend to be simpler to triage than a large codebase, even if that codebase is not monolithic.

Targeted utilities are less risky than large environments

A program which supports protocol after protocol after protocol will be more difficult to maintain than three separate programs, one for each protocol. This only gets worse when the use case for that program only requires one of the many protocols supported. The fact that the other protocols are supported increases the complexity of the program beyond what the use case actually merits.

Metrics in this area are impossible

The risks are nonlinear and the failures are typically intermittent. Even obtaining or applying metrics takes up huge amounts of engineering time.

Mismatches in expectations

The use case of automation rarely matches up with the more widely tested use case of the upstream developers. We aren't testing the code flows typically tested by the upstream developers, so we find different bugs, raising the level of risk. Generally, the simpler it is to deploy a device in automation, the closer the test flow will be to the developer flow.

Most programs are written for the single developer model

Some very widely used programs are written to scale, but this is difficult to determine without experience of trying to run them at scale.

Some programs do require special consideration

QEMU would fail most of the guidelines above, but there are mitigating factors:

  • Programs which can be easily restricted to well understood use cases lower the risk of failure. Not all use cases of the same program need to be covered.
  • Programs which have excellent community and especially in-house support also lower the risk of failure. (Having QEMU experts in Linaro is a massive boost for having QEMU as a dispatcher dependency.)

Unfamiliar languages increase the difficulty of triage

This may affect dependencies in unexpected ways. A program which has lots of bindings into a range of other languages becomes entangled in transitions and bugs in those other languages. This commonly delays the availability of the latest version which may have a critical fix for one use case but which fails to function at all in what may seem to be an unrelated manner.

The dependency chain of the program itself increases the risk of failure in precisely the same manner as the program

In terms of maintenance, this can include the build dependencies of the program as those affect delivery / availability of LAVA in distributions like Debian.

Adding code to only one dispatcher amongst many increases the risk of failure on the instance as a whole

By having an untested element which is at variance with the rest of the system.

Conditional dependencies increase the risk

Optional components can be supported, but they increase the testing burden by extending the matrix of installations.

Presence of the code in Debian main can reduce the risk of failure

This does not outweigh other considerations - there are plenty of packages in Debian (some complex, some not) which would be an unacceptable risk as a dependency of the dispatcher, fastboot for one. A small python utility from github can be a substantially lower risk than a larger program from Debian which has unused functionality.

Sometimes, "complex" simply means "buggy" or "badly designed"

fastboot is not actually a complex piece of code, but we have learnt that it does not currently scale. This is a result of the disparity between the development model and the automation use case. Disparities like that actually equate to complexity, in terms of triage and maintenance. If fastboot were more complex at the codebase level, it might actually become a lower risk than it is currently.

Linaro as a whole does have a clear objective of harmonising the ecosystem

Adding yet another variant of existing support is at odds with the overall objective of the company. Many of the tools required in automation have no direct effect on the distinguishing factors for consumers. Adding another one "just because" is not a good reason to increase the risk of automation failure. Just as with standards.

Having the code on the dispatcher impedes development of that code

Bug fixes will take longer to be applied because the fix needs to go through a distribution or other packaging process managed by the lab admins. Applying a targeted fix inside an LXC is useful for proving that the fix works.

Not all programs can work in an LXC

LAVA also provides ways to test using those programs by deploying the code onto a test device. For example, the V2 support for fastmodels involves only deploying the fastmodel inside a LAVA Test Shell on a test device, e.g. x86, mustang or Juno.

Speed of running a test job in LAVA is important for CI

The goal of speed must give way to the requirement for reliability of automation

Resubmitting a test job due to a reliability failure is more harmful to the CI process than letting tests take longer to execute without such failures. Test jobs which run quickly are easier to parallelize by adding more test hardware.

Modifying software on the device

Not all parts of the software stack can be replaced automatically; typically the firmware and/or bootloader will need to be considered carefully. The boot sequence will have important effects on what kind of testing can be done automatically. Automation relies on being able to predict the behaviour of the device, interrupt that default behaviour and then execute the test. For most devices, everything which executes on the device prior to the first point at which the boot sequence can be interrupted can be considered part of the primary boot software. None of these elements can be safely replaced or modified in automation.

The objective is to deploy the device such that as much of the software stack as possible can be replaced, whilst preserving the predictable behaviour of all devices of this type, so that the next test job always gets a working, clean device in a known state.

Primary boot software

For many devices, this is the bootloader, e.g. U-Boot, UEFI or fastboot.

Some devices include support for a baseboard management controller (BMC) which allows the bootloader and other firmware to be updated even if the device is bricked. The BMC software itself must then be considered the primary boot software; it cannot be safely replaced.

All testing of the primary boot software will need to be done by developers using local devices. SDMux was an idea which only fitted one specific set of hardware; the problem of testing the primary boot software is a hydra. Adding customised hardware to try to sidestep the primary boot software always increases the complexity and failure rates of the devices.

It is possible to divide the pool of devices into some which only ever use known versions of the primary boot software controlled by admins and other devices which support modifying the primary boot software. However, this causes extra work when processing the results, submitting the test jobs and administering the devices.

A secondary problem here is that it is increasingly common for the methods of updating this software to be esoteric, hacky, restricted and even proprietary.

  • Click-through licences to obtain the tools

  • Greedy tools which hog everything in /dev/bus/usb

  • NIH tools which are almost the same as existing tools but add vendor-specific "functionality"

  • GUI tools

  • Changing jumpers or DIP switches,

    Often in inaccessible locations which require removal of other ancillary hardware

  • Random, untrusted, compiled vendor software running as root

  • The need to press and hold buttons and watch for changes in LED status.

We've seen all of these - in various combinations - just in 2017, as methods of getting devices into a mode where the primary boot software can be updated.

Copyright 2018 Neil Williams linux@codehelp.co.uk

Available under CC BY-SA 3.0: https://creativecommons.org/licenses/by-sa/3.0/legalcode

29 June, 2018 02:19PM by Neil Williams


Ana Beatriz Guerrero Lopez

Introducing debos, a versatile images generator

In Debian and derivative systems, there are many ways to build images. The simplest tool of choice is often debootstrap. It works by downloading the .deb files from a mirror and unpacking them into a directory which can eventually be chrooted into.

More often than not, we want to make some customizations to this image: install some extra packages, run a script, add some files, etc.

debos is a tool to make these kinds of trivial tasks easier. debos works using YAML recipe files that list, in order, the actions you want to perform on your image and, finally, the output formats to produce.

As opposed to debootstrap and other tools, debos doesn't need to be run as root to perform actions that require root privileges in the images. debos uses fakemachine, a library that sets up qemu-system, allowing you to work on the image with root privileges and to create images for all the architectures supported by qemu user mode. However, for this to work, make sure your user has permission to use /dev/kvm.

Let's see how debos works with a simple example. If we wanted to create a customized arm64 image for Debian Stretch, we would follow these steps:

  • debootstrap the image
  • install the packages we need
  • create a user
  • set up our preferred hostname
  • run a script creating the user
  • copy a file adding the user to sudoers
  • create a tarball with the final image

This would translate into a debos recipe like this one:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror: http://deb.debian.org/debian
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/setup-user.sh

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

(The files used in this example are available from this git repository)

We run debos on the recipe file:

$ debos simple.yaml

The result will be a tarball named debian-stretch-arm64.tgz. If you check the top two lines of the recipe, you can see that the recipe defaults to the arm64 architecture and Debian stretch. We can override these defaults when running debos:

$ debos -t suite:"buster" -t architecture:"amd64" simple.yaml

This time the result will be a tarball named debian-buster-amd64.tgz.

The recipe allows some customization depending on the parameters. We could install packages depending on the target architecture, for example installing python-libsoc on armhf and arm64:

- action: apt
  recommends: false
  packages:
    - adduser
    - sudo
{{- if eq $architecture "armhf" "arm64" }}
    - python-libsoc
{{- end }}

What happens if, in addition to a tarball, we would like to create a filesystem image? This can be done by adding two more actions to our example: a first action creating a partitioned image with the selected filesystem, and a second one deploying the root filesystem onto that image:

- action: image-partition
  imagename: {{ $ext4 }}
  imagesize: 1GB
  partitiontype: msdos
  mountpoints:
    - mountpoint: /
      partition: root
  partitions:
    - name: root
      fs: ext4
      start: 0%
      end: 100%
      flags: [ boot ]

- action: filesystem-deploy
  description: Deploying filesystem onto image

{{ $ext4 }} should be defined at the top of the file as follows:

{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

We could even make this step optional, so that by default the recipe only creates the tarball and only adds the filesystem image when an extra option is passed to debos:

$ debos -t type:"full" full.yaml

The final debos recipe will look like this:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $type := or .type "min" }}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}
{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    mirror: http://deb.debian.org/debian
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo
{{- if eq $architecture "armhf" "arm64" }}
      - python-libsoc
{{- end }}

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/setup-user.sh

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

{{ if eq $type "full" }}
  - action: image-partition
    imagename: {{ $ext4 }}
    imagesize: 1GB
    partitiontype: msdos
    mountpoints:
      - mountpoint: /
        partition: root
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
        flags: [ boot ]

  - action: filesystem-deploy
    description: Deploying filesystem onto image
{{end}}

debos also provides some other actions that haven't been covered in the example above:

  • download allows downloading a single file from the internet
  • raw can directly write a file to the output image at a given offset
  • unpack can be used to unpack files from an archive into the filesystem
  • ostree-commit creates an OSTree commit from a rootfs
  • ostree-deploy deploys an OSTree branch to the image

The example in this blog post is simple and short on purpose. Combining the actions presented above, you could also include a kernel and install a bootloader to make a bootable image. Upstream is planning to add more examples soon to the debos recipes repository.

debos is a project from Sjoerd Simons at Collabora; it's still missing some features, but it's actively being developed and there are big plans for the future!

29 June, 2018 01:00PM by Ana

bisco

Fourth GSoC Report

As announced in the last report, I started looking into SSO solutions and evaluated and tested them. At the beginning my focus was on SAML integration, but I soon realized that OAuth2 would be more important.

I started with installing Lemonldap-NG. LL-NG is a WebSSO solution written in Perl that uses ModPerl or FastCGI for delivering web content. There is a Debian package in stable, so the installation was no problem at all. The configuration was a bit harder, as LL-NG has a complex architecture with different vhosts, but after some fiddling I managed to connect the installation to our test LDAP instance and was able to authenticate against the LL-NG portal. Then I started to research how to integrate an OAuth2 client. For the tests I had, on the one hand, a GitLab installation that I tried to connect to the OAuth2 provider using the omniauth-oauth2-generic strategy. To have a bit more fine-grained control over the OAuth2 client configuration, I also used the Python requests-oauthlib module and modified the web app example from their documentation to my needs (see the sketch below). After some fiddling and a bit of back and forth on the lemonldap-ng mailing list, I got both test clients to authenticate against LL-NG.
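
The requests-oauthlib web application flow boils down to something like the following sketch; the endpoint URLs, client credentials, scope and redirect URI are placeholders rather than the actual LL-NG configuration:

# Rough sketch of the requests-oauthlib "web app" flow used for testing;
# all URLs and credentials below are placeholders.
from requests_oauthlib import OAuth2Session

client_id = 'test-client'
client_secret = 'test-secret'
authorization_base_url = 'https://sso.example.org/oauth2/authorize'
token_url = 'https://sso.example.org/oauth2/token'

oauth = OAuth2Session(client_id,
                      redirect_uri='http://localhost:8000/callback',
                      scope=['openid'])

# 1. Send the user to the provider to log in.
authorization_url, state = oauth.authorization_url(authorization_base_url)
print('Visit this URL and authorize access:', authorization_url)

# 2. Paste back the callback URL the provider redirected to.
redirect_response = input('Full callback URL: ')

# 3. Exchange the authorization code for a token.
token = oauth.fetch_token(token_url,
                          client_secret=client_secret,
                          authorization_response=redirect_response)
print('Access token:', token.get('access_token'))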

Lemonldap-NG Screenshot

The second solution I tested was Keycloak, an identity and access management solution written in Java by Red Hat. There is no Debian package, but it was nonetheless very easy to get it running. It is enough to install jre-default from the package repositories and then run the standalone script from the extracted Keycloak folder. Because Keycloak only listens on localhost and I didn't want to get into configuring the Java webserver stuff, I installed nginx and configured it as a proxy. In Keycloak, too, the first step was to configure the LDAP backend. When I was able to successfully log in using my LDAP credentials, I looked into configuring an OAuth2 client, which wasn't that hard either.

Keycloak Screenshot

The third solution I looked into was Glewlwyd, written by babelouest. There is a Debian package in buster, so I added the buster sources, set up apt pinning and installed the needed packages. Glewlwyd is a system service that listens on localhost:4593, so I also used nginx in this case. The configuration for the LDAP backend is done in the configuration file, which on Debian is /etc/glewlwyd/glewlwyd-debian.conf. Glewlwyd provides a web interface for managing users and clients, and it is possible to store all the values in LDAP.

Keycloak Screenshot

The next steps will be to test the last candidate, which is ipsilon, and also to test all the solutions for some important features, like multiple backends and exporting of configurable attributes. Last but not least, I want to create a table to have an overview of all the features and drawbacks of the solutions. All the evaluations are public in a salsa repository.

I also carried on doing some work on nacho, though most of the issues that have to be fixed are rather small. I regularly stumble upon texts about Python or Django, like for example the Django NewbieMistakes, and try to read all of them and use that to improve my work.

29 June, 2018 05:28AM


Laura Arjona Reina

Debian and free software personal misc news

Many of them probably are worth a blog post each, but it seems I cannot find the time or motivation to craft nice blog posts for now, so here’s a quick update of some of the things that happened and happen in my digital life:

  • Debian Jessie became LTS and I still didn’t upgrade my home server to stable. Well, I could say myself that now I have 2 more years to try to find the time (thanks LTS team!) and that the machine just works (and that’s probably the reason for not finding the motivation to upgrade it or to put time on it (thanks Debian and the software projects of the services I run there!)) but I have to find the way to give some love to my home server during this summer, otherwise I won’t be able to do it probably until the next summer.

 

  • Quitter.se is down since several weeks, and I’m afraid it probably won’t come back. This means my personal account larjona@quitter.se in GNU Social is not working, and the Debian one (debian@quitter.se) is not working either. I would like to find another good instance where to create both accounts (I would like to selfhost but it’s not realistic, e.g. see above point). Considering both GNU Social and Mastodon networks, but I still need to do some research on uptimes, number of users, workforce behind the instances, who’s there, etc. Meanwhile, my few social network updates are posted in larjona@identi.ca as always, and for Debian news you can follow https://micronews.debian.org (it provides RSS feed), or debian@identi.ca. When I resurrect @debian in the fediverse I’ll publicise it and I hope followers find us again.

 

  • We recently migrated the Debian website from CVS to git: https://salsa.debian.org/webmaster-team/webwml/ I am very happy and thankful to all the people that helped to make it possible. I think that most of the contributors adapted well to the changes (also because keeping the used workflows was a priority), but if you feel lost or want to comment on anything, just tell us. We don’t want to lose anybody, and we’re happy to welcome and help anybody who wants to get involved.

 

  • Alioth’s shutdown and the Debian website migration triggered a lot of reviews in the website content (updating links and paragraphs, updating translations…) and scripts. Please be patient and help if you can (e.g. contact your language team, or have a look at the list of bugs: tagged or the bulk list). I will try to do remote “DebCamp18” work and attack some of them, but I’m also considering organising or attending a BSP in September/October. We’ll see.

 

  • In the Spanish translation team, I am very happy that we have several regular contributors, translating and/or reviewing. In the last months I did less translation work than I would like, but I try not to lose pace and I hope to put more time into translations and reviews during this summer, at least on the website and in package descriptions.

 

  • One more year, I’m not coming to DebConf. This year my schedule/situation was clear from long ago, so it’s been easier to just accept that I cannot go, and continue being involved somehow. It’s sad not being able to celebrate the migration with web team mates, but I hope they celebrate anyway! I am a bit behind with DebConf publicity work but I will try to catch up soon, and for DebConf itself I will try to do the microblogging coverage as former years, and also participate in the IRC and watching the streaming, thanks timezones and siesta, I guess 😉

 

  • Since January I am enjoying my new phone (the Galaxy S III broke, and I bought a BQ Aquaris U+) with Lineage OS 14.x and F-Droid. I keep on having a look each several days to the F-Droid tab that shows the news and updated apps and it’s amazing the activity and life of the projects. A non exhaustive list of the free software apps that I use: AdAway, Number Guesser (I play with my son to this), Conversations, Daily Dozen, DavDroid, F-Droid, Fennec F-Droid, Hacker’s Keyboard, K-9 Mail, KDE Connect, Kontalk, LabCoat, Document Reader, LibreOffice Viewer (old but it works), Memetastic, NewPipe, OSMAnd~, PassAndroid, Periodical, Puma, Quasseldroid, QuickDic, RadioDroid, Reader for Pepper and Carrot, Red Moon, RedReader, Ring, Slight Backup, Termux. Some other apps that I don’t use them all the time but I find it’s nice to have them are AFWall+, Atomic (for when my Quassel server is down), Call Recorder, Pain Diary, Yalp Store. My son decided not to play games in phones/tablets so we removed Anuto TD, Apple Finger, Seafood Berserker, Shattered Pixel Dungeon and Turo (I appreciate the games but I only play some times, if another person plays too, just to share the game). My only non-free apps: the one that gives me the time I need to wait at the bus stop, Wallapop (second hand, person to person buy/sell app), and Whatsapp. I have no Google services in the phone and no Location services available for those apps, but I give the bus stop number or the Postal code number by hand, and they work.

 

  • I am very very happy with my Lenovo X230 laptop, its keyboard and everything. It runs Debian stable for now, and Plasma Desktop. I only have 2 issues with it: (1) hibernation, and (2) smart card reader. About the hibernation: sometimes, when on battery, I close the lid and it seems it does not hibernate well because when I open the lid again it does not come back, the power button blinks slowly, and pressing it, typing something or moving the touchpad, have no effect. The only ‘solution’ is to long-press the power button so it abruptly shuts down (or take the battery off, with the same effect). After that, I turn on again and the filesystem complains about the unexpected shut down but it boots correctly. About the smart card reader: I have a C3PO LTC31 smart card reader and when I connect it via USB to use my GPG smart card, I need to restart pcsc service manually to be able to use it. If I don’t do that, the smart card is not recognised (Thunderbird or whatever program asks me repeatedly to insert the card). I’m not sure why is that, and if it’s related to my setup, or to this particular reader. I have another reader (other model) at work, but always forget to switch them to make tests. Anyway I can live with it until I find time to research more.

There are probably more things that I forget, but this post became too long already. Bye!

 

29 June, 2018 12:32AM by larjona

June 28, 2018


Jonathan McDowell

Thoughts on the acquisition of GitHub by Microsoft

Back at the start of 2010, I attended linux.conf.au in Wellington. One of the events I attended was sponsored by GitHub, who bought me beer in a fine Wellington bar (that was very proud of having an almost complete collection of BrewDog beers, including some Tactical Nuclear Penguin). I proceeded to tell them that I really didn’t understand their business model and that one of the great things about git was the very fact it was decentralised and we didn’t need to host things in one place any more. I don’t think they were offended, and the announcement Microsoft are acquiring GitHub for $7.5 billion proves that they had a much better idea about this stuff than me.

The acquisition announcement seems to have caused an exodus. GitLab reported over 13,000 projects being migrated in a single hour. IRC and Twitter were full of people throwing up their hands and saying it was terrible. Why is this? The fear factor seemed to come from who was doing the acquiring: Microsoft. The big, bad, Linux-is-a-cancer folk. I saw a similar, though more muted, reaction when LinkedIn were acquired.

This extremely negative reaction to Microsoft seems bizarre to me these days. I’m well aware of their past, and their anti-competitive practices (dating back to MS-DOS vs DR-DOS). I’ve no doubt their current embrace of Free Software is ultimately driven by business decisions rather than a sudden fit of altruism. But I do think their current behaviour is something we could never have foreseen 15+ years ago. Did you ever think Microsoft would be a contributor to the Linux kernel? Is it fair to maintain such animosity? Not for me to say, I guess, but I think that some of it is that both GitHub and LinkedIn were services that people were already uneasy about using, and the acquisition was the straw that broke the camel’s back.

What are the issues with GitHub? I previously wrote about the GitHub TOS changes, stating I didn’t think it was necessary to fear the TOS changes, but that the centralised nature of the service was potentially something to be wary of. joeyh talked about this as long ago as 2011, discussing the aspects of the service other than the source code hosting that were only API accessible, or in some other way more restricted than a git clone away. It’s fair criticism; the extra features offered by GitHub are very much tied to their service. And yet I don’t recall the same complaints about SourceForge, long the home of choice for Free Software projects. Its problems seem to be more around a dated interface, being slow to enable distributed VCSes and the addition of advertising. People left because there were much better options, not because of ideological differences.

Let’s look at the advantages GitHub had (and still has) to offer. I held off on setting up a GitHub account for a long time. I didn’t see the need; I self-hosted my Git repositories. I had the ability to setup mailing lists if I needed them (and my projects generally aren’t popular enough that they did). But I succumbed in 2015. Why? I think it was probably as part of helping to run an OpenHatch workshop, trying to get people involved in Free software. That may sound ironic, but helping out with those workshops helped show me the benefit of the workflow GitHub offers. The whole fork / branch / work / submit a pull request approach really helps lower the barrier to entry for people getting started out. Suddenly fixing an annoying spelling mistake isn’t a huge thing; it’s easy to work in your own private playground and then make that work available to upstream and to anyone else who might be interested.

For small projects without active mailing lists that’s huge. Even for big projects that can be a huge win. And it’s not just useful to new contributors. It lowers the barrier for me to be a patch ‘n run contributor. Now that’s not necessarily appealing to some projects, because they’d rather get community involvement. And I get that, but I just don’t have the time to be active in all the projects I feel I can offer something to. Part of that ease is the power of git, the fact that a clone is a first class repo, capable of standing alone or being merged back into the parent. But another part is the interface GitHub created, and they should get some credit for that. It’s one of those things that once you’re presented with it it makes sense, but no one had done it quite as slickly up to that point. Submissions via mailing lists are much more likely to get lost in the archives compared to being able to see a list of all outstanding pull requests on GitHub, and the associated discussion. And subscribe only to that discussion rather than everything.

GitHub also seemed to appear at the right time. It, like SourceForge, enabled easy discovery of projects. Crucially it did this at a point when web frameworks were taking off and a whole range of developers who had not previously pulled large chunks of code from other projects were suddenly doing so. And writing frameworks or plugins themselves and feeling in the mood to share them. GitHub has somehow managed to hit critical mass such that lots of code that I’m sure would otherwise never have seen the light of day is available to all. Perhaps the key was that repos were lightweight setups under usernames, unlike the heavier SourceForge approach of needing a complete project setup per codebase you wanted to push. Although it’s not my primary platform, I engage with GitHub for my own code because the barrier is low; it’s a couple of clicks on the website and then I just push to it like my other remote repos.

I seem to be coming across as a bit of a GitHub apologist here, which isn’t my intention. I just think the knee-jerk anti GitHub reaction has been fascinating to observe. I signed up to GitLab around the same time as GitHub, but I’m not under any illusions that their hosted service is significantly different from GitHub in terms of having my data hosted by a third party. Nothing that’s up on either site is only up there, and everything that is is publicly available anyway. I understand that as third parties they can change my access at any point in time, and so I haven’t built any infrastructure that assumes their continued existence. That said, why would I not take advantage of their facilities when they happen to be of use to me?

I don’t expect my use of GitHub to significantly change now they’ve been acquired.

28 June, 2018 08:30PM

Daniel Stender

Dynamic inventories for Ansible using Python

Ansible not only accepts static machine inventories represented in an inventory file, it is also capable of leveraging dynamic inventories. To use that mechanism, the only thing which is needed is a program or script which creates the particular machines needed for a certain project and returns their addresses as a JSON object representing an inventory, just like an inventory file does. This makes it possible to create specially crafted tools to set up the number of cloud machines which are needed for an Ansible project, and the mechanism is theoretically open to any programming language. Instead of pointing the option -i of e.g. ansible-playbook at an inventory file, just give it the name of the program you’ve set up, and Ansible executes it and evaluates the inventory which is given back. A minimal sketch of the expected contract is shown below.
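
The contract Ansible expects from an executable inventory is small: it calls the program with --list and expects the whole inventory as JSON on stdout (it may also call --host <hostname> for per-host variables, which can be answered with an empty dict when a _meta section is embedded). A minimal, hard-coded sketch of that contract, with placeholder group and host names, could look like this:

#!/usr/bin/env python
# Minimal dynamic inventory sketch: answers --list with the whole
# inventory and --host with an empty dict (host variables already live
# in _meta). Group and host entries are placeholders.
import json
import sys

INVENTORY = {
    "webservers": {"hosts": ["192.0.2.10", "192.0.2.11"]},
    "_meta": {"hostvars": {}},
}

if len(sys.argv) > 1 and sys.argv[1] == "--host":
    print(json.dumps({}))
else:
    # "--list" (or no argument): dump the full inventory
    print(json.dumps(INVENTORY))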

Here’s a more complete example of a dynamic inventory for Ansible written in Python. The script uses the python-digitalocean library in Debian (https://github.com/koalalorenzo/python-digitalocean) to launch a couple of DigitalOcean droplets for a particular Ansible project:

#!/usr/bin/env python
import os
import sys
import json
import digitalocean
import ConfigParser

config = ConfigParser.ConfigParser()
config.read(os.path.dirname(os.path.realpath(__file__)) + '/inventory.cfg')
nom = config.get('digitalocean', 'number_of_machines')
keyid = config.get('digitalocean', 'key-id')
try:
    token = os.environ['DO_ACCESS_TOKEN']
except KeyError:
    token = config.get('digitalocean', 'access-token')

manager = digitalocean.Manager(token=token)

def get_droplets():
    droplets = manager.get_all_droplets(tag_name='ansible-demo')
    if not droplets:
        return False
    elif len(droplets) != 0 and len(droplets) != int(nom):
        print "The number of already set up 'ansible-demo' differs"
        sys.exit(1)
    elif len(droplets) == int(nom):
        return droplets

key = manager.get_ssh_key(keyid)
tag = digitalocean.Tag(token=token, name='ansible-demo')
tag.create()

def create_droplet(name):
    droplet = digitalocean.Droplet(token=token,
                                   name=name,
                                   region='fra1',
                                   image='debian-8-x64',
                                   size_slug='512mb',
                                   ssh_keys=[key])
    droplet.create()
    tag.add_droplets(droplet.id)
    return True

if get_droplets() is False:
    for node in range(int(nom))[1:]:
        create_droplet(name='wordpress-node'+str(node))
    create_droplet('load-balancer')

droplets = get_droplets()
inventory = {}
hosts = {}
machines = []
for droplet in droplets:
    if 'load-balancer' in droplet.name:
        machines.append(droplet.ip_address)
        hosts['hosts']=machines
        inventory['load-balancer']=hosts
hosts = {}
machines = []
for droplet in droplets:
    if 'wordpress' in droplet.name:
        machines.append(droplet.ip_address)
        hosts['hosts']=machines
        inventory['wordpress-nodes']=hosts

print json.dumps(inventory)

It’s a simple, basic script to demonstrate how you can craft something for your own needs to leverage dynamic inventories for Ansible. The droplet parameters like the size (512mb), the image (debian-8-x64) and the region (fra1) are hard coded and can be changed easily if wanted. Other things needed, like the total number of wanted machines, the access token for the DigitalOcean API and the ID of the public SSH key which is going to be applied to the virtual machines, are read from a simple configuration file (inventory.cfg):

[digitalocean]
access-token = 09c43afcbdf4788c611d5a02b5397e5b37bc54c04371851
number_of_machines = 4
key-id = 21699531

The script can of course be executed independently of Ansible. The first time you execute it, it creates the wanted number of machines (always one load-balancer node and, with the total number of machines set to four, three wordpress-nodes), and gives back the IP addresses of the newly created machines, put into groups:

$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

droplets-ansible-demo

Any consecutive execution of the script recognizes that the wanted machines have already been created, and simply returns the same inventory one more time:

$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

If you delete the droplets then, and run the script again, a new set of machines gets created:

$ for i in $(doctl compute droplet list | awk '/ansible-demo/{print $(1)}'); do doctl compute droplet delete $i; done
$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["46.101.115.214", "165.227.138.66", "165.227.153.207"]}, "load-balancer": {"hosts": ["138.68.85.93"]}}

As you can see, the JSON object [1] which is given back represents an Ansible inventory; the same inventory represented in a file would have this form:

[load-balancer]
138.68.85.93

[wordpress-nodes]
46.101.115.214
165.227.138.66
165.227.153.207

As said, you can use this “one-trick pony” Python script instead of an inventory file: just give its name to -i, and the Ansible CLI tool runs it and works on the inventory which is given back:

$ ansible wordpress-nodes -i ./inventory.py -m ping -u root --private-key=~/.ssh/id_digitalocean
165.227.153.207 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
46.101.115.214 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
165.227.138.66 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Note: the script doesn’t yet support a waiter mechanism but completes as soon as IP addresses are available. It can always take a little while until the newly created machines are fully provisioned, booted, and accessible via SSH, so there could be errors about hosts not being accessible. In that case, just wait a few seconds and run the Ansible command again (a simple waiter is sketched below).
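
Such a waiter, which is not part of the script above, could simply poll the SSH port of every returned address before handing the inventory over to Ansible; this is only a sketch:

# Sketch of a waiter: block until TCP port 22 answers on every host,
# or give up after a timeout. Not part of inventory.py itself.
import socket
import time

def wait_for_ssh(addresses, port=22, timeout=300, interval=5):
    deadline = time.time() + timeout
    pending = set(addresses)
    while pending and time.time() < deadline:
        for address in list(pending):
            try:
                with socket.create_connection((address, port), timeout=3):
                    pending.discard(address)
            except OSError:
                pass                  # not reachable yet, try again later
        if pending:
            time.sleep(interval)
    return not pending                # True if every host became reachable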


  1. For the exact structure of the JSON object I’m drawing from: https://gist.github.com/jtyr/5213fabf2bcb943efc82f00959b91163

28 June, 2018 08:22PM


Shirish Agarwal

Abuse of childhood

The blog post is in homage to any abuse victims and more directly to parents and children being separated by policies formed by a Government whose chief is supposed to be ‘The leader of the free world’. I sat on the blog post for almost a week even though I got it proof-read by two women, Miss S and Miss K to see if there is or was anything wrongful about the post. Both the women gave me their blessings as it’s something to be shared.

I am writing this blog post writing from my house in a safe environment, having chai (tea), listening to some of my favorite songs, far from trauma some children are going through.

I have been disturbed by the news of families and especially young children being separated from their own families because of state policy. I was pretty hesitant to write this post as we are told to only share our strengths and not our weaknesses or traumas of the past. I partly want to share so people who might be on the fence of whether separating families is a good idea or not might have something more to ponder over. The blog post is not limited to the ongoing and proposed U.S. Policy called Separations but all and any situations involving young children and abuse.

The first experience was when my cousin sister and her family came to visit me and mum. We often do not get relatives or entertain them due to water shortage issues. It’s such a common such issue all over India that nobody bats an eye over, we will probably talk about it in some other blog post if need be.

The sister who came has two daughters. The older one knew me and mum and knew that both of us have a penchant for pulling legs, but at the same time like to spoil Didi and her. All of us are foodies, so we have a grand time. The younger one, though, was unknown to me and I was unknown to her. In playfulness, we said we would keep the bigger sister with us and she was afraid. She clung to her sister like anything. Even though we tried to pacify her, she wasn’t free with us till the time she was safely tucked in with her sister in the family car along with her mum and dad.

While this is a small incident, it triggered a memory kept hidden for over 35+ years. I was perhaps 4-5 years old, being brought up by a separated working mum who had a typical government 9-5 job. My grandparents (mother’s side) used to try and run the household in her absence, my grandmother doing all household chores, my grandfather helping here and there, while all outside responsibilities were his.

In this, there was a task to put me in school. Mum probably talked to some of her colleagues, or somebody or the other suggested St. Francis, a Catholic missionary school named after one of the many saints named Saint Francis. It is and was a school nearby. There was a young man who used to do odd jobs around the house and was trusted by all, who was a fan of Amitabh Bachchan and who is/was responsible for my love for first-day first shows of his movies. A genuinely nice elderly-brother kind of person with whom I have had lots of beautiful memories of childhood.

Anyways, his job was to transport me back and forth to the school, which he did without fail. The trouble started for me in school. I do not know the reason till date, maybe I was a bawler or whatever, but I was kept in a dark, dank toilet for a year (minus the holidays). The first time I went to that dark, foreboding place, I probably shat and vomited, for which I was beaten quite a bit. I learnt that if I were sent to the dark room, I had to put my knickers somewhere up top where they wouldn’t get dirty so I would not get beaten. Sometimes I was also made to clean my vomit or shit, which made the whole thing worse. I would be sent to the room regularly and sometimes beaten. The only nice remembrance I had was the last hour before school was over, when I was taken out of the toilet, made presentable and made to sit near the window-sill from where I could see trains running by. I dunno whether it was just the smell of free, fresh air plus seeing trains and freedom got somehow mixed, but a train-lover was born.

I don’t know why I didn’t ever tell my mum or anybody else about the abuse happening with me. Most probably because the teacher may have threatened me with something or the other. Somehow the year ended and I was failed. The only thing mother and my grandparents probably saw and felt was that I had grown a bit thinner.

Either due to mother’s intuition or because I had failed, I was made to change schools. While I was terrified of the change, because I thought there was something wrong with me and things would be worse, it was actually the opposite. While corporal punishment was still the norm, there wasn’t any abuse, unlike in the school before. In the eleven years I spent in that school, there was only one time that I was given toilet duty, and that too because I had done something naughty like pulling a girl’s hair, and it was with one or two other students. Rather than clean the toilets, we ended up playing with water.

I told part of my experience to mum about a year, year and a half after I was in the new school, half-expecting something untoward to happen as the teacher had said. The only thing I remember from that conversation was shock registering on her face. I didn’t tell her about the vomit and shit part as I was embarrassed about it. I had nightmares about it till I was in my teens, when with treks and everything I understood that even darkness can be a friend, just like light is.

For the next 13 odd years till I asked her to stop checking on me, she used to come to school every few months, talk to teachers and talk with class-mates. The same happened in college till I asked her to stop checking as I used to feel embarrassed when other class-mates gossiped.

It was only years after when I began working I understood what she was doing all along. She was just making sure I was ok.

The fact that it took me 30+ years to share this story/experience with the world at large also tells that somewhere I still feel a bit scarred, on the soul.

If you are feeling any sympathy or empathy towards me, while I’m thankful for it, it would be much better served directed towards those who are in a precarious, vulnerable situation like I was. It doesn’t matter what politics you believe in or peddle; separating children from their parents is immoral for any being, let alone a human being. Even in the animal world, we see how predators only attack those young whose fathers and mothers are not around to protect them.

As in any story/experience/tale there are lessons or takeaways that I hope most parents teach their young ones, especially Indian or Asiatic parents at large –

1. Change the rule of ‘Respect all elders and obey them no matter what’ to ‘Respect everybody including yourself’. This should be taught by parents to their children. It will boost their self-confidence a bit and also help them share any issues that happen to them.

2. If somebody threatens you or threatens family immediately inform us (i.e. the parents).

3. The third one is perhaps the most difficult: telling the truth without worrying about consequences. In Indian families we learn about ‘secrets’ and ‘modifying the truth’ from our parents and elders. That needs to somehow change.

4. A few years ago, Aamir Khan (a film actor), with people specializing in working with children, talked and shared about ‘good touch, bad touch’ as a prevention method; maybe somebody could also do something similar for these kinds of violence.

At the end I recently came across an article and also Terminal.

28 June, 2018 06:11PM by shirishag75

Antoine Beaupré

My free software activities, June 2018

It's been a while since I've done a report here! Since I need to do one for LTS, I figured I would also catch you up with the work I've done in the last three months. Maybe I'll make that my new process: quarterly reports would reduce the overhead on my side with little loss for you, my precious (few? many?) readers.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

I omitted doing a report in May because I didn't spend a significant number of hours, so this also covers a handful of hours of work in May.

May and June were strange months to work on LTS, as we made the transition between wheezy and jessie. I worked on all three LTS releases now, and I must have been absent from the last transition because I felt this one was a little confusing to go through. Maybe it's because I was on frontdesk duty during that time...

For a week or two it was unclear if we should have worked on wheezy, jessie, or both, or even how to work on either. I documented which packages needed an update from wheezy to jessie and proposed a process for the transition period. This generated a good discussion, but I am not sure we resolved the problems we had this time around in the long term. I also sent patches to the security team in the hope they would land in jessie before it turns into LTS, but most of those ended up being postponed to LTS.

Most of my work this month was spent actually porting the Mercurial fixes from wheezy to jessie. Technically, the patches were ported from upstream 4.3 and led to some pretty interesting results in the test suite, which fails to build from source non-reproducibly. Because I couldn't figure out how to fix this in the allotted time, I uploaded the package to my usual test location in the hope someone else picks it up. The test package fixes 6 issues (CVE-2018-1000132, CVE-2017-9462, CVE-2017-17458 and three issues without a CVE).

I also worked on cups in a similar way, sending a test package to the security team for 2 issues (CVE-2017-18190, CVE-2017-18248). Same for Dokuwiki, where I sent a patch for a single issue (CVE-2017-18123). Those have yet to be published, however, and I will hopefully wrap that up in July.

Because I was looking for work, I ended up doing meta-work as well. I made a prototype that would use the embedded-code-copies file to populate data/CVE/list with related packages as a way to address a problem we have in LTS triage, where packages that were renamed between suites do not get correctly added to the tracker. It ended up being rejected because the changes were too invasive, but it led to Brian May suggesting another approach; we'll see where that goes.

I've also looked at splitting up that dreaded data/CVE/list but my results were negative: it looks like git is very efficient at splitting things up. While a split up list might be easier on editors, it would be a massive change and was eventually refused by the security team.

Other free software work

With my last report dating back to February, this will naturally be a little imprecise, as three months have passed. But let's see...

LWN

I wrote eight articles in the last three months, for an average of nearly three articles a month. I was aiming at an average of one or two a week, so I didn't reach my goal. My last article about Kubecon generated a lot of feedback, probably the best I have ever received. It seems I struck a chord with a lot of people, so that certainly feels nice.

Linkchecker

Usual maintenance work, but we at last finally got access to the Linkchecker organization on GitHub, which meant a bit of reorganizing. The only bit missing now is the PyPI namespace, but that should also come soon. The code of conduct and contribution guides were finally merged after we clarified project membership. This gives us issue templates, which should help us deal with the constant flow of issues that come in every day.

The biggest concern I have with the project now is the C parser and the outdated Windows executable. The latter has been removed from the website so hopefully Windows users won't report old bugs (although that means we won't gain new Windows users at all) and the former might be fixed by a port to BeautifulSoup.

Email over SSH

I did a lot of work to switch away from SMTP and IMAP to synchronise my workstation and laptops with my mailserver. Having the privilege of running my own server has its perks: I have SSH access to my mail spool, which brings the opportunity for interesting optimizations.

The first thing I have done is called rsendmail. Inspired by work from Don Armstrong and David Bremner, rsendmail is a Python program I wrote from scratch to deliver email over a pipe, securely. I do not trust the sendmail command: its behavior can vary a lot between platforms (e.g. allowing flushing the mail queue or printing it) and I wanted to reduce the attack surface. It works with another program I wrote called sshsendmail, which connects to it over a pipe. It integrates well into "dumb" MTAs like nullmailer, but I also use it with the popular Postfix, without problems.
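
The underlying idea, handing the raw message to the mail server over an SSH pipe instead of speaking SMTP, can be sketched in a few lines of Python. This is only an illustration, not the actual rsendmail/sshsendmail code; the host and remote command names are hypothetical:

# Illustration only: pipe a raw RFC 5822 message to a remote delivery
# command over SSH. This is not the actual rsendmail/sshsendmail code;
# "mailhost" and "remote-deliver" are hypothetical names.
import subprocess
import sys

def send_over_ssh(message_bytes, recipients, host="mailhost"):
    # ssh runs the remote delivery command and feeds it the message on stdin
    command = ["ssh", host, "remote-deliver", "--"] + list(recipients)
    subprocess.run(command, input=message_bytes, check=True)

if __name__ == "__main__":
    # read the message from stdin, recipients from the command line
    send_over_ssh(sys.stdin.buffer.read(), sys.argv[1:])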

The second is to switch from OfflineIMAP to Syncmaildir (SMD). The latter allows synchronization over SSH only. The migration was a little difficult but I very much like the results: SMD is faster than OfflineIMAP and works transparently in the background.

I really like to use SSH for email. I used to have my email password stored all over the place: in my Postfix config, in my email clients' memory, it was a mess. With the new configuration, things just work unattended and email feels like a solved problem, at least the synchronization aspects of it.

Emacs

As often happens, I've done some work on my Emacs configuration. I switched to a new Solarized theme, the bbatsov version which has support for a light and dark mode and generally better colors. I had problems with the cursor which are unfortunately unfixed.

I learned about and used the Emacs iPython Notebook project (EIN) and filed a feature request to replicate the "restart and run" behavior of the web interface. Otherwise it's real nice to have a decent editor to work on Python notebooks, and I have used this to work on the terminal emulators series and the related source code.

I have also tried to complete my conversion to Magit, a pretty nice wrapper around git for Emacs. Some of my usual git shortcuts have good replacements, but not all. For example, those are equivalent:

  • vc-annotate (C-x C-v g): magit-blame
  • vc-diff (C-x C-v =): magit-diff-buffer-file

Those do not have a direct equivalent:

  • vc-next-action (C-x C-q, or F6): anarcat/magic-commit-buffer, see below
  • vc-git-grep (F8): no replacement

I wrote my own replacement for "diff and commit this file" as the following function:

(defun anarcat/magit-commit-buffer ()
  "commit the changes in the current buffer on the fly

This is different than `magit-commit' because it calls `git
commit' without going through the staging area AKA index
first. This is a replacement for `vc-next-action'.

Tip: setting the git configuration parameter `commit.verbose' to
2 will show the diff in the changelog buffer for review. See
`git-config(1)' for more information.

An alternative implementation was attempted with `magit-commit':

  (let ((magit-commit-ask-to-stage nil))
    (magit-commit (list \"commit\" \"--\"
                        (file-relative-name buffer-file-name)))))

But it seems `magit-commit' asserts that we want to stage content
and will fail with: `(user-error \"Nothing staged\")'. This is
why this function calls `magit-run-git-with-editor' directly
instead."
  (interactive)
  (magit-run-git-with-editor (list "commit" "--" (file-relative-name buffer-file-name))))

It's not very pretty, but it works... Mostly. Sometimes the magit-diff buffer becomes out of sync, but the --verbose output in the commitlog buffer still works.

I've also looked at git-annex integration. The magit-annex package did not work well for me: the file listing is really too slow. So I found the git-annex.el package, but did not try it out yet.

While working on all of this, I fell into a different rabbit hole: I found it inconvenient to "pastebin" stuff from Emacs, as it would involve selecting a region, piping to pastebinit and copy-pasting the URL found in the *Messages* buffer. So I wrote this first prototype:

(defun pastebinit (begin end)
  "pass the region to pastebinit and add output to killring

TODO: prompt for possible pastebins (pastebinit -l) with prefix arg

Note that there's a `nopaste.el' project which already does this,
which we should use instead.
"
  (interactive "r")
  (message "use nopaste.el instead")
  (let ((proc (make-process :filter #'pastebinit--handle
                            :command '("pastebinit")
                            :connection-type 'pipe
                            :buffer nil
                            :name "pastebinit")))
    (process-send-region proc begin end)
    (process-send-eof proc)))

(defun pastebinit--handle (proc string)
  "handle output from pastebinit asynchronously"
  (let ((url (car (split-string string))))
    (kill-new url)
    (message "paste uploaded and URL added to kill ring: %s" url)))

It was my first foray into asynchronous process operations in Emacs: difficult and confusing, but it mostly worked. Those who know me know what's coming next, however: I found not only one, but two libraries for pastebins in Emacs: nopaste and (after patching nopaste to add asynchronous support and customize support, of course) debpaste.el. I'm not sure where that will go: there is a proposal to add nopaste in Debian that was discussed a while back and I made a detailed report there.

Monkeysign

I made a minor release of Monkeysign to cover for CVE-2018-12020 and its GPG sigspoof vulnerability. I am not sure where to take this project anymore, and I opened a discussion to possibly retire the project completely. Feedback welcome.

ikiwiki

I wrote a new ikiwiki plugin called bootstrap to fix table markup to match what the Bootstrap theme expects. This was particularly important for the previous blog post which uses tables a lot. This was surprisingly easy and might be useful to tweak other stuff in the theme.

Random stuff

  • I wrote up a review of security of APT packages when compared with the TUF project, in TufDerivedImprovements
  • contributed to about 20 different repositories on GitHub, too numerous to list here

28 June, 2018 05:55PM