June 01, 2016

Russell Coker

I Just Ordered a Nexus 6P

Last year I wrote a long-term review of Android phones [1]. I noted that my Galaxy Note 3 only needed to last another 4 months for it to become the phone I've been happily using the longest.

Last month (just over 7 months after writing that) I fell on my Note 3 and cracked the screen. The Armourdillo case is good for protecting the phone [2], so it would have been fine if I had just dropped it. But I fell with the phone in my hand, the phone landed face down, and about half my body weight ended up in the middle of the phone, which apparently bent it enough to crack the screen. As a result of this the GPS seems to be less reliable than it used to be, so there might be some damage to the antenna too.

I was quoted $149 to repair the screen; I could possibly have found a cheaper quote if I had shopped around, but it was a good starting point for comparison. The Note 3 originally cost $550 including postage in 2014. A new Note 4 costs $550 + postage now from Shopping Square, and a new Note 3 is on eBay with a buy-it-now price of $380 with free postage.

It seems like bad value to pay 40% of the price of a new Note 3, or 25% of the price of a Note 4, to fix my old phone (which is a little worn and has some other minor issues). So I decided to spend a bit more to get a better phone and give my old phone to one of my relatives who doesn't mind having a cracked screen.

I really like the S-Pen stylus on the Samsung Galaxy Note series of phones and tablets. I also like having a hardware home button and separate screen space reserved for the settings and back buttons. The downsides to the Note series are that they are getting really expensive nowadays and that support for new OS updates (and presumably security fixes) is lacking. So when Kogan offered a good price on a Nexus 6P [3] with 64G of storage, I ordered one. I'm going to give the Note 3 to my father; he wants a phone with a bigger screen and a stylus and isn't worried about cracks in the screen.

I previously wrote about Android device service life [4]. My main conclusion in that post was that storage space is a major factor limiting service life. I hope that 64G in the Nexus 6P will solve that problem, giving me 3 years of use and making it useful to my relatives afterwards. Currently I have 32G of storage, of which about 8G is used by my music video collection and about 3G is free, so 64G should last me for a long time. Having only 3G of RAM might be a problem, but I'm thinking of trying CyanogenMod again, so maybe with root access I can reduce RAM usage.

01 June, 2016 05:27AM by etbe

May 31, 2016

hackergotchi for Junichi Uekawa

Junichi Uekawa

Now I have a working FUSE module. I wonder what next.

Now I have a working FUSE module. I wonder what next.

31 May, 2016 09:09PM by Junichi Uekawa

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in May 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):

  • Modified Let's Encrypt's "certbot" tool (previously the Let's Encrypt Client) to ensure that the documentation is built reproducibly. The issue was that a Python default keyword argument was non-deterministic and was appearing in the documentation along with the function's definition. (#3005)
  • Sent a pull request to Mailvelope, a browser extension for GPG/OpenPGP encryption with webmail services, to ensure that the passphrase field is cleared when entered incorrectly. (#385)
  • Proposed an optional addition to django-enumfield, a custom Django web development field for type-safe named constants, that automatically adds enumerations to the template context to avoid DRY violations in views, etc. (#33)
  • Fixed an issue in the cdist configuration management's build system to ensure that the documentation builds reproducibly. It was previously including various documentation sections non-deterministically depending on the filesystem ordering. (#437)
  • Various improvements to django-slack, my library to easily post messages to the Slack group-messaging utility from projects using the Django web development framework:
    • Raise more specific exception types (instead of the more generic ValueError) wherever possible so that clients can detect specific error conditions. (#45)
    • Pass through arbitrary Python keyword arguments to the backend, allowing custom behaviour for special cases. (#46)
    • Ensure that the backend result is returned by the Celery distributed task queue wrapper. (#47)
  • Updated my Strava Enhancement Suite, a Chrome extension that improves and fixes annoyances in the web interface of the Strava cycling and running tracker, to hide more internal advertisements. (#49)
  • Sent a pull request to the build system for gtk-gnutella (a server/client for the Gnutella peer-to-peer network) to ensure the build is reproducible if the SOURCE_DATE_EPOCH environment variable is available. (#17)
  • Updated the SSL certificate for try.diffoscope.org, a hosted version of the diffoscope in-depth and content-aware diff utility. Thanks to Bytemark for sponsoring the hardware.

Debian

My work in the Reproducible Builds project was covered in our weekly reports. (#53, #54, #55, #56 & #57)


Debian LTS


This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • A week of "frontdesk" duties, triaging CVEs, assigning tasks, etc.
  • Issued DLA 464-1 for libav, a multimedia player, server, encoder and transcoder library, fixing a use-after-free vulnerability.
  • Issued DLA 469-1 for libgwenhywfar (an OS abstraction layer that allows porting of software to different operating systems like Linux, *BSD, Windows, etc.) correcting the use of an outdated CA certificate bundle.
  • Issued DLA 470-1 for libksba, an X.509 and CMS certificate support library, patching a buffer vulnerability.
  • Issued DLA 474-1 for dosfstools, a collection of utilities for making and checking MS-DOS FAT filesystems, fixing an invalid memory read and a heap overflow vulnerability.
  • Issued DLA 482-1 for the libgd2 graphics library, rectifying a stack consumption vulnerability.

Uploads

  • python-django (1.9.6-1) — New upstream bugfix release.
  • redis (3.2.0-1, etc.) — New upstream release, correct build on more exotic architectures and minor packaging fixups.
  • gunicorn (19.5.0-1 & 19.6.0-1) — New upstream releases and minor packaging fixups.



31 May, 2016 08:49PM

Enrico Zini

Python gotcha

This code:

#!/usr/bin/python3

class Test:
    def __init__(self, items=[]):
        self.items = items

    def add(self, item):
        self.items.append(item)


a = Test()
a.add("foo")
b = Test()
b.add("bar")
print(repr(a.items))
print(repr(b.items))

"obviously" prints:

['foo', 'bar']
['foo', 'bar']

Because the default value of the items argument is a mutable list, constructed just once when the function definition is evaluated (not on every call), and then reused.
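
The standard way to avoid this is to use None as the default and create a fresh list inside the method; a minimal sketch:

#!/usr/bin/python3

class Test:
    def __init__(self, items=None):
        # Build a new list per instance instead of sharing one default
        self.items = items if items is not None else []

    def add(self, item):
        self.items.append(item)

With this version the example above prints ['foo'] and ['bar'] as intended.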

So, in Python, mutable items in default arguments are a good way to get more fun time with debugging.

31 May, 2016 10:23AM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Iron Blogger DC

Back in 2014, Mako ran a Boston Iron Blogger chapter, where you had to blog once a week, or you owed $5 into the pot. A while later, I ran it (along with Molly and Johns), and things were great.

When I moved to DC, I had already talked with Tom Lee and Eric Mill about running a DC Iron Blogger chapter, but it hasn’t happened in the year and a half I’ve been in DC.

This week, I make good on that, with a fantastic group set up at dc.iron-blogger.com, and more to come (I'm sure!).

Looking forward to many parties and thought-provoking blog posts in my future. I'm also quite pleased I'll be resuming my blogging. Hi again, planet Debian!

31 May, 2016 01:37AM

May 30, 2016

Russ Allbery

Review: By the Sword

Review: By the Sword, by Mercedes Lackey

Series: Vows and Honor #4
Publisher: DAW
Copyright: February 1991
ISBN: 0-88677-463-2
Format: Mass market
Pages: 492

By the Sword is the next book in my (slow) Valdemar re-read. This one is a bit hard to classify in the series; it's technically a stand-alone novel, and it doesn't require a lot of prior series knowledge. But the heroine, Kerowyn, is a relative of Tarma and Kethry, and Tarma and Kethry appear in this novel. Most of the book also deals with similar themes as the rest of the Tarma and Kethry books, even though it's also a bridge into Valdemar proper. I'm going to follow Fantastic Fiction and call it book four of the Vows and Honor series, even though the publisher doesn't refer to it that way and it's not strictly correct. I think that creates the right impression, and it's mildly better to read the other Tarma and Kethry novels first.

This book is also a bit confusing for reading order. It was published just before the Mage Winds trilogy, and happens before them in series chronological order (between that trilogy and the Talia series). But some of the chronologies in some of the Valdemar books show it after the Mage Winds trilogy. I think I originally read it afterwards, but both natural reading order and publication order put it first, and that's the ordering I followed this time.

Series ordering trivia aside (sometimes the comic book shared universe continuity geek in me raises its head), By the Sword is a hefty, self-contained novel about a very typical Lackey protagonist. Kerowyn is the daughter of a noble house, largely ignored by her father in favor of her brother and tasked with keeping the keep running since her mother died. She wants to learn to fight and ride, but that's not part of her father's plans for her. But those plans become suddenly irrelevant when the keep is attacked during her brother's wedding and the bride kidnapped. Unless someone at least attempts to recover her, this will be taken as an excuse for conquest of the keep by the bride's family.

(Spoilers for the start of the book in the following paragraph. I think the outcomes are reasonably obvious given the type of book this is, but skip it if you don't want to know anything about the plot.)

If you're familiar with Lackey's musical work (most probably won't be, but you might if you follow filk), "Kerowyn's Ride" is the start of this book. Kerowyn goes to her grandmother Kethry, who is semi-legendary to Kerowyn but well-known to readers of the rest of the series. From Kethry, she acquires Need; with Need's help, she improbably manages to rescue her brother's bride. It seems like a happy ending, but it completely disrupts and destroys her life. Her role as hero does not fit any of the expectations the remaining members of the household have for her. But it also gives her an escape: she ends up as Tarma and Kethry's student, learning all the things about fighting she'd craved to learn and preparing for a life as a mercenary.

Quite a few adventures follow, all of which are familiar to Lackey readers and particularly to readers of the Tarma and Kethry books. But I think this is one of Lackey's better-written books. The pacing is reasonably good despite the length of the book, Kerowyn is a likable and interesting character, and I like the pragmatism and practicalities that Lackey brings to sword and sorcery mercenary groups. In style and subject matter, it's the closest to Oathbreakers, which was also my favorite of the Tarma and Kethry novels.

By the Sword is both the natural conclusion of the Tarma and Kethry era and arc, and vital foundational material for what I think of as the "core" Valdemar story: Elspeth's adventures during Selenay's reign, which start in the Mage Winds trilogy immediately following this. Kerowyn becomes a vital supporting character in the rest of the story, and Need is hugely important in events to come. But even if you're not as invested in the overall Valdemar story arc as I am, this is solid, if a bit predictable and unspectacular, sword and sorcery writing presented in a meaty and satisfying novel with a good coming-of-age story.

This is one of my favorites of the Valdemar series as measured by pure story-telling. There are other books that provide more interesting lore and world background, but there are few characters I like as well as Kerowyn, and I find the compromise she reaches with Need delightful. If you liked Oathbreakers, I'm pretty sure you'd like this as well. And, of course, recommended if you're reading the whole Valdemar series as a fairly key link in the plot and a significant bridge between the Heralds and Tarma and Kethry's world, a bridge that Elspeth is about to cross in the other direction.

Rating: 8 out of 10

30 May, 2016 10:30PM

Reproducible builds folks

Reproducible builds: week 57 in Stretch cycle

What happened in the Reproducible Builds effort between May 22nd and May 28th 2016:

Media coverage

Documentation update

  • The wiki page TimestampsProposal has been extended to cover more usage examples and to list more software supporting SOURCE_DATE_EPOCH. (Axel Beckert, Dhole and Ximin Luo)
  • h01ger started a reference card for tools and information about reproducible builds but hasn't progressed much yet. Help with it is very welcome; this is also a good opportunity to learn about this project ;-) The idea is simply to have one coherent place with pointers to all the stuff we have and provide, without repeating or replacing other documentation.

Toolchain fixes

  • Alexis Bienvenüe submitted a patch (#824050) against emacs24 for SOURCE_DATE_EPOCH support in autoloads files, but upstream already disabled timestamps by default some time before.
  • proj/4.9.2-3 uploaded by Bas Couwenberg (original patch by Alexis Bienvenüe) properly initializes memory with zeros to prevent the nad2bin tool from leaking random memory content into output artefacts.
  • Reiner Herrmann submitted a patch (#825569, upstream) against Ruby to sort object files in generated Makefiles, which are used to compile C sources that are part of Ruby projects.

Packages fixed

The following 18 packages have become reproducible due to changes in their build dependencies: canl-c configshell dbus-java dune-common frobby frown installation-guide jexcelapi libjsyntaxpane-java malaga octave-ocs paje.app pd-boids pfstools r-cran-rniftilib scscp-imcce snort vim-addon-manager

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #803547 against bbswitch (reopened) by Reiner Herrmann: sort members of tar archive
  • #806945 against bash (follow-up) by Reiner Herrmann: use system man2html instead of embedded copy
  • #825122 against kapptemplate by Scarlett Clark: set owner/group of members in tarball to root
  • #825138 against console-setup by Reiner Herrmann: fix umask issue; sort entries in shell script; sort fontsets/charmaps locale-independently
  • #825285 against kodi by Lukas Rechberger: replace build timestamps with version numbers
  • #825322 against choqok by Scarlett Clark: force UTF-8 locale so kconfig_compiler behaves correctly
  • #825544 against wavemon by Reiner Herrmann: sort list of object files
  • #825545 against dwm by Reiner Herrmann: sort list of header files
  • #825547 against tennix by Reiner Herrmann: sort list of data files being archived
  • #825584 against ffmpeg2theora by Reiner Herrmann: sort list of source files
  • #825588 against kball by Reiner Herrmann: sort list of source files
  • #825634 against miceamaze by Reiner Herrmann: sort list of object files
  • #825643 against dash by Reiner Herrmann: fix sorting of struct members in generated source file
  • #825655 against libselinux by Reiner Herrmann: sort list of source files
  • #825656 against libsepol by Reiner Herrmann: sort list of source files
  • #825674 against libsemanage by Reiner Herrmann: sort list of source files

Package reviews

123 reviews have been added, 57 have been updated and 135 have been removed this week.

21 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

strip-nondeterminism development

  • strip-nondeterminism development: treat *.htb as Zip files (by Sascha Steinbiss).
  • strip-nondeterminism 0.017-1 uploaded by h01ger.

tests.reproducible-builds.org

  • The kde pkg set was extended, though the change ain't visible yet, as there are currently non-installable packages in it (and so the set can't be computed). (h01ger)

Misc.

  • Mattia improved misc.git/reports (=the tools to help writing the weekly statistics for this blog) some more.

This week's edition was written by Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

30 May, 2016 09:57PM

John Goerzen

That was satisfying

It’s been a while due to all sorts of other stuff going on. Nice to see this clogging my inbox again:

screenshot

It really is satisfying to close bugs!

30 May, 2016 01:29PM by John Goerzen

Daniel Stender

My work for Debian in May

No double posting this time ;-)

I haven't had much spare time this month to spend on Debian, but I was able to work on the following packages:

  • golang-github-hpcloud-tail/1.0.0+git20160415.b294095-3: put versioned dependency & rebuild against golang-fsnotify/1.3.0-3 to fix FTBFS on ppc64el.

  • updates: packer/0.10.1-1, pybtex/0.20.1-1, afl/2.12b-1, afl/2.13b-1, pyutilib/5.3.5-1.

  • new packages: golang-github-azure-go-ntlmssp/0.0~git20160412.e0b63eb-1 (needed by Packer 0.10.1), and python-latexcodec/1.0.3-1 (needed by Pybtex 0.20).

  • prospector/0.11.7-7 fixed for reproducible builds: there were variations in the sorting order of dependencies in prospector.egg-info/requires.txt. I've prepared a patch to make the package reproducible again (that problem began with 0.11.7-5) before the proposed toolchain patch for setuptools (#804249) gets accepted.

  • python-latexcodec/1.0.3-3 also fixed for reproducible builds (#824454).

This series of blog postings also includes short introductions to new packages in the archive. This month there is:

Pyinfra

Pyinfra is a new project which is still under development. It has already been covered in an interesting German article1, and is now available as a package maintained within the Python Applications Team. It's currently a one-man production by Nick Barrett, and has been eagerly developed in the past weeks (we're currently at 0.1~dev24).

Pyinfra is a remote server configuration/provisioning/service deployment tool which belongs to the same software category as Puppet or Ansible2. It's for provisioning one or an array of remote servers with software packages and for configuring them. Pyinfra runs agentless like Ansible, meaning that nothing special (like a daemon) has to run on targeted servers to use it. It's written to be used for provisioning POSIX compatible Linux systems and offers alternatives when it comes to special features like package managers (e.g. it supports apt as well as yum). The documentation can be found in /usr/share/doc/pyinfra/html/.

Here's a little crash course on how to use Pyinfra: the pyinfra CLI tool is used on the command line as follows, where deploy scripts, single operations or facts (see below) can be applied to a single server or a multitude of remote servers:

$ pyinfra -i <inventory script/single host> <deploy script>
$ pyinfra -i <inventory script/single host> --run <operation>
$ pyinfra -i <inventory script/single host> --facts <fact>

Remote servers which are operated on must provide a working shell and must be reachable by SSH. For connecting, --port, --user, --password, --key/--key-password and --sudo flags are available, --sudo to gain superuser rights. Root access or sudo rights of course have to be set up already. By the way, localhost can be operated on the same way.

Single operations are organized in modules like "apt", "files", "init", "server" etc. With the --run option they can be used individually on servers as follows; e.g. server.user adds a new user on a single targeted system (-v adds verbosity to the pyinfra run):

$ pyinfra -i 192.0.2.10 --run server.user sam --user root --key ~/.ssh/sshkey --key-password 123456 -v

Multiple servers can be grouped in inventories, which hold the targeted hosts and data associated with them. E.g. an inventory file farm1.py would contain lists like this:

COMPUTE_SERVERS = ['192.0.2.10', '192.0.2.11']
DATABASE_SERVERS = ['192.0.2.20', '192.0.2.21']

Group designators must be all caps. A higher level of grouping is provided by the file names of inventory scripts; thus COMPUTE_SERVERS and DATABASE_SERVERS can be referenced at the same time by the group designator farm1. Plus, all servers are automatically added to the group all. Inventory scripts should be stored in the subfolder inventory/ in the project directory. Inventory files can then be used instead of specific IP addresses like this; the single operation then gets performed on all the machines given in farm1.py:

$ pyinfra -i inventory/farm1.py  --run server.user sam --user root --key ~/.ssh/sshkey --key-password=123456 -v

Deployment scripts can be used together with group data files in the subfolder group_data/ in the project directory. For example, a group_data/farm1.py applies to all servers given in inventory/farm1.py (by the way, all.py applies to all servers), and contains the arbitrary attribute user_name (attributes must be lowercase), next to authentication data for the whole inventory group:

user_name = 'sam'

ssh_user = 'root'
ssh_key = '~/.ssh/sshkey'
ssh_key_password = '123456'

The arbitrary attribute can be picked up by a deployment script using host.data, so user_name can be used again, e.g. for server.user(), like this:

from pyinfra import host
from pyinfra.modules import server

server.user(host.data.user_name)

This deploy, the ensemble of inventory file, group data file and deployment script (usually placed at the top level of the project folder), can then be run this way:

$ pyinfra -i inventory/farm1.py deploy.py

You have guessed it: since deployment scripts are Python scripts they are fully programmable (please note that Pyinfra is built for and runs on Python 3 on Debian), and that's the main advantage of this piece of software.

Pyinfra facts come in quite handy for that: functions which check different things on remote systems and return information as Python data. For example, deb_packages returns a dictionary of installed packages from a remote apt-based server:

$ pyinfra -i 192.0.2.10 --fact deb_packages --user root --key ~/.ssh/sshkey --key-password=123456
{
    "192.0.2.10": {
        "libdebconfclient0": "0.192",
        "python-debian": "0.1.27",
        "libavahi-client3": "0.6.31-5",
        "dbus": "1.8.20-0+deb8u1",
        "libustr-1.0-1": "1.0.4-3+b2",
        "sed": "4.2.2-4+b1",

Using facts, Pyinfra reveals its full potential. For example, a deployment script could go like this; the linux_distribution fact returns a dict containing information about the installed distribution:

from pyinfra import host
from pyinfra.modules import apt

if host.fact.linux_distribution['name'] == 'Debian':
   apt.packages(packages='gummi', present=True, update=True)
elif host.fact.linux_distribution['name'] == 'CentOS':
   pass

I'll spare you more sophisticated examples to keep this introduction simple. Beyond fancy deployment scripts, Pyinfra features its own API through which it can be programmed from the outside, and much more. But maybe that's enough to introduce Pyinfra; those are the usage basics.

Pyinfra is a brand new project and it remains to be seen whether the developer can keep up developing the tool the way he does these days. For a private project it would be insanely ambitious to attempt to become a contender for the established "big" free configuration management tools and frameworks, but whether or not Puppet has become too complex in the meantime3, I really don't think that's the point here. Pyinfra follows its own approach in being programmable the way it is. And it certainly does no harm to have it in the toolbox already, without trying to replace anything.

Brainstorm

After a first stint in experimental, the Brainstorm library from the Swiss AI research institute IDSIA4 is now available as python3-brainstorm in unstable. Brainstorm is a lean, easy-to-use library for setting up deep learning networks (multiple layered artificial neural networks) for machine learning applications such as image and speech recognition or natural language processing. Setting up a working training network for a classifier of handwritten digits like the MNIST dataset (the usual "hello world") takes just a couple of lines, as an example demonstrates. The package is maintained within the Debian Python Modules Team.

The Debian package ships a couple of examples in /usr/share/python3-brainstorm/examples (the data/ and examples/ folders of the upstream tarball are combined here). Among them there are5:

  • scripts for creating proper HDF5 training data of the MNIST database of handwritten digits and for training a simple neural network on it (create_mnist.py, mnist_pi.py),

  • examples for setting up data and training a convolutional neural network (CNN) on the CIFAR-10 dataset of pictures (create_cifar10.py, cifar10_cnn.py),

  • as well as example scripts for setting up training data and creating an LSTM (long short-term memory) recurrent neural network (RNN) on test data used in the Hutter Prize competition (create_hutter.py, hutter_lstm.py).

  • And there's also another example script for creating training data of the CIFAR-100 dataset (create_cifar100.py).

The current documentation in /usr/share/doc/python3-brainstorm/html/ isn't complete yet (several chapters are under construction), but there's a walkthrough of the CIFAR-10 example. The MNIST example has been extended by GitHub user pinae, and has been explained in the German C't magazine recently6.

What are the perspectives for further development? As Zhou Mo confirmed, there are a couple of deep learning frameworks around that have a rather poor outlook, since they have been abandoned after being completed as PhD projects. There's really no point in striving to have them all in Debian; the ITP of Minerva, for example, has been given up partly for this reason, as there haven't been any commits since 08/2015 (and because cuDNN isn't available and most likely won't be). Brainstorm, whose 0.5 release came out 05/2015, also was a PhD project at IDSIA. It's stated on GitHub that the project is "under active development", but the rather sparse project page on the other hand expresses the "hope the community will help us to further improve Brainstorm". Such a sentence often implies that the developers are not actively working on the project anymore. But there are recent commits, and it looks like upstream is active and can be reached when there are problems. So I don't think we're riding a dead horse here.

The downside for Brainstorm in Debian is that the libraries needed for GPU-accelerated processing apparently can't be fully provided. Pycuda is available, but scikit-cuda (an additional library which provides wrappers for CUDA features like CUBLAS, CUFFT and CUSOLVER) is not and won't be, because the CULA Dense Toolkit (scikit-cuda also contains wrappers for that) is not freely available as source. Because of that, a dependency on pycuda, not even as a Suggests (it's non-free), has been spared. Without GPU acceleration, Brainstorm computes the matrices with OpenBLAS using a Cython wrapper in the NumpyHandler, while the PyCudaHandler can't be used. OpenBLAS makes pretty good use of the available hardware (it distributes work over all available CPU cores), but it's not yet possible to run Brainstorm at full throttle using available floating point devices to reduce training times, which becomes crucial when projects get bigger.

The state of deep learning in Debian

Brainstorm is one of a number of deep learning frameworks that are already available or becoming available in Debian. Currently there are:

  • Caffe for image recognition and classification7 is just around the corner (#823140).

  • Theano is currently in experimental, and will be ready together with libgpuarray (OpenCL-based GPU-accelerated processing) and Keras (abstraction layer) for Stretch. It can already run on NVIDIA graphics cards via CUDA8 (limited to amd64 and ppc64el, though).

  • Lasagne, the higher-level abstraction layer for Theano, is RFP (#818641).

  • Google's TensorFlow, the free successor of DistBelief, is currently ITP (#804612). It's waiting for Google's build system Bazel to become available.

  • Torch is also ITP (#794634). It's blocked on a wishlist bug against dh-lua being closed.

  • Amazon's own machine learning workhorse DSSTNE ("destiny") has now also been put under a free license and will become available for Debian (contrib) in the near future (#824692). It's not yet suited for image recognition applications, though (it lacks CNN support).

  • Mxnet is RFP (#808235).

I've checked out Microsoft's CNTK, but although it has also been set free recently, I have my doubts whether it could be included. Apparently there are dependencies on non-free software and most likely other issues. So much for a little update on the state of deep learning in Debian; please excuse me if my radar missed something.


  1. Tim Schürmann: "Schlangenöl: Automatisiertes Service-Deployment mit Pyinfra". In: IT-Administrator 05/2016, pp. 90-95. 

  2. For a comparison of configuration management software like this, see Bößwetter/Johannsen/Steig: "Baukastensysteme: Konfigurationsmanagement mit Open-Source-Software". In: iX 04/2016, pp. 94-99 (please excuse the prevalence of German articles among the pointers; I just have them at hand). 

  3. On the points of criticism of Puppet, see Martin Loschwitz: "David gegen Goliath – Zwei Welten treffen aufeinander: Puppet und Ansible". In: Linux-Magazin 01/2016, pp. 50-54. 

  4. See the interview with IDSIA's deep learning guru Jürgen Schmidhuber in German C't 2014/09, p. 148 

  5. The example scripts need some more fine-tuning. The environment variable BRAINSTORM_DATA_DIR can be set to run the data creation scripts in place, but the trained networks currently try to write their output in place. So please copy the scripts into some workspace if you want to try them out. I'll patch the example scripts to run out-of-the-box soon. 

  6. Johannes Merkert: "Ziffernlerner. Ein künstliches neuronales Netz selber gebaut". In: C't 2016/06, pp. 142-147. Web: http://www.heise.de/ct/ausgabe/2016-6-Ein-kuenstliches-neuronales-Netz-selbst-gebaut-3118857.html (hehe, what kind of tag is "Gradientenabstieg"?) 

  7. See Ramon Wartala: "Tiefenschärfe: Deep learning mit NVIDIAs Jetson-TX1-Board und dem Caffe-Framework". In: iX 06/2016, pp. 100-103 

  8. https://lists.debian.org/debian-science/2016/03/msg00016.html 

30 May, 2016 11:55AM by Daniel Stender

hackergotchi for Steve Kemp

Steve Kemp

A mixed weekend

These past seven days have been a little mixed:

  • I updated documentation on my simple object store.
  • I created a simplified alerting system.
    • Heavily inspired by something we use at work.
    • My version is much, much simpler, but still useful enough to alert me of outages (via heartbeats) and unread email. (Both of which are sent via Pushover notifications.)
  • I bought a pair of cheap USB "game controllers"
    • And have spent several hours playing SNES games such as Bomberman 2 and Super Mario Brothers 3.
    • I'm using mednafen, as it supports cheats, fullscreen, sound, and is pretty easy to drive.

Finally I spent the tail end of the weekend being a little red, sore, and itchy. I figured this was a surprising outbreak of Dyshidrosis on my hands, and eczema on my body. Instead I received a diagnosis of Scarlet Fever. So now I feel somewhat Dickensian!

Apparently this infection is on the rise!

30 May, 2016 03:26AM

Russ Allbery

Review: Empires of EVE

Review: Empires of EVE, by Andrew Groen

Publisher: Andrew Groen
Copyright: 2015
Printing: 2016
ISBN: 0-9909724-0-2
Format: Hardcover
Pages: 171

The version of this book I read was the hardcover Kickstarter campaign reward, since I was a backer. I believe it's the same as the hardcover currently available for sale from the author's site for those who didn't back the project. There are also softcover and Kindle versions. I've lost track of whether they have less sidebar content, or just less high-quality artwork.

EVE Online ("EVE" is not an acronym, just the developer's way of writing the name, so you'll also see "Eve" both for the game and for this book) is a massively multiplayer on-line role-playing game (MMORPG) based on interstellar mining, manufacturing, and combat. Its Icelandic developer takes a different, more emergent approach than most MMORPG developers: rather than fill the world exclusively with pre-scripted adventures and enemies (although there is some of that for those who want it), vast regions of EVE's world are left open to the players to govern, exploit, or fight over as they see fit. Player versus player combat plays a large role in that aspect of the game, and many actions that would be prohibited or made impossible in other games (stealing from other players, betrayals, tricking other players into fatal situations) are permitted and sometimes core components of the game. EVE is best-known for its economy, which is almost entirely player-driven and requires extensive mining and manufacturing work by large teams of players to build the largest and most powerful ships in the game.

Empires of EVE is an unusual type of book, one that I'm not sure would have been possible ten years ago. Subtitled "a history of the great wars of EVE Online," it's a history of a virtual world, but not one sponsored by the developer or part of the marketing or lore of the game. I love seeing this (which is also why I backed the Kickstarter). Video games have developed beyond just games to play into games to watch other people play (successfully competing, for me and for many others, for the role previously filled by professional sports), and now into emergent events that are complex enough, and dramatic enough, to warrant their own third-party history. I was quite surprised and delighted by how broad the audience for this sort of writing is.

And this is not a shallow effort. Andrew Groen is a freelance writer who does not, himself, play EVE. He approaches the complex political and in-game fighting with the attitude of a reporter and historian, cites sources (to the standards I would expect for long-form journalism, if not quite at the level of academic history), discloses where history has been lost or one side of some fight could not be contacted, and puts substantial effort into explaining the political strengths and weaknesses of the shifting in-game alliances. It's a proper political history of an imaginary world, including objective (so far as I can tell) reporting of times when developers were accused of assisting one of the factions.

If, like me, you don't play EVE and are primarily interested in this book to get a feel for the game, there are a few caveats to be aware of. EVE is divided into regions with game-enforced security levels. The ones near the center of the game galaxy are heavily policed by the game to prevent most of the player versus player combat and let new players find their feet. Those regions also offer various built-in missions that don't require interacting with other players. As one moves out from that central area, the game-provided security (and I believe the game-provided interactions) drops off. Empires of EVE deals exclusively with nullsec space: the outer regions of the game where the richest resources are, and where there is no law or policing except what's done by the players themselves. The game here, and as described in the book, is relentlessly blood-thirsty, but this isn't representative of the entire game.

Second, most of this book is devoted to ship-to-ship combat and missions of conquest and reprisal. But combat is built on top of a vast "civilian" infrastructure of mining and manufacturing, and there are players who focus on those aspects of the game and rarely, if ever, fight. Groen talks about this in passing, since it can have significant influence on the politics of the game, but spends little time describing the day-to-day life in the game for those players. The focus here is on the resulting combat.

Finally, Groen starts his history at the EVE beta in 2003 but ends it in 2009. Maneuvering and wars have, of course, continued ever since, but he has to stop somewhere. As he mentioned during the Kickstarter, it takes some years for events to become history, and for people to be willing to talk about them. The developer has kept changing the game, so some of the mechanics here will be mildly stale and the current political alliances are quite likely far different (although many of the players discussed in this book are still playing).

With those caveats, though, this is a fascinating book, even if this isn't your sort of game. Personally, I have a deep-seated dislike of games like Diplomacy where alliances, betrayal, and political maneuvering are sanctioned and encouraged by the game. I've never played EVE and I can't imagine ever wanting to, particularly after reading this book. But it's still a fascinating war history and analysis of slightly skew human politics. The alliances and backstabbing are reminiscent of real human history, but the game setting adds some significant twists: players can just quit if they're not having fun, the largest threat to strong alliances is players just not bothering to show up because they're not having fun or because the risk is too high, and without the stability and momentum of real-world institutions, in-game corporations and alliances can collapse overnight when players lose faith in them. From a distance, it's quite entertaining to see how those factors reshape politics and propaganda. (If sadly somewhat reminiscent of the sort of personalities that killed Usenet. I shouldn't have been surprised that organized Internet trolls made an appearance in EVE, although the nature of EVE politics meant they fit right in rather than doing much to spoil other people's fun, at least in nullsec.)

I can't speak to the other formats, but the hardcover is also a gorgeous book. The artwork isn't quite my style, and the in-game screen shots are a bit muddy and confusing (not Groen's fault), but the maps are clear and invaluable, the propaganda posters are amazing, and the hardcover printing is clearly of a very high quality. I felt like I got my $50 worth in terms of presentation and quality printing, and have a book that isn't going to fall apart in a few years. The one quibble that I have is that Groen picked a fairly thin and light font for the main text, which my eyes had some trouble with on high-gloss paper. Something thicker and darker may not have looked as good on the page, but it would have been easier to read. The font choice for the sidebars was a lovely high-contrast white on black, but I would have appreciated something a bit larger. If you have vision issues, you may find the ebook more readable, if not as beautiful.

Empires of EVE has apparently been very successful within its niche, which makes me happy. I would love to read more books of this type. I know I'm not the only person who grew up with gaming but has been increasingly drawn into watching other people game or reading about other people game. I love watching or reading about people who have put the effort into becoming very good at something they love, and when that comes with emergent history, creative propaganda, and a lot of assholes to root against (although sadly not many people to root for), the history of EVE makes compelling reading, at least for me. Recommended if this is at all your sort of thing; the ebook version of the book is fairly inexpensive.

Rating: 8 out of 10

30 May, 2016 03:26AM

hackergotchi for Sean Whitton

Sean Whitton

Skype inside Firejail version 0.9.40-rc1

Since my PGP key is on its way into the Debian Maintainers keyring, I feel that I should be more careful about computer security. This week I find that I need to run Skype in order to make some calls to some landlines. With the new release candidate of Firejail, it’s really easy to minimise the threat from its non-free code.

Firstly, check that the Skype .deb you download from their website merely installs files and does not run any prerm or postinst scripts. You can run dpkg-deb --control skype-debian_4.3.0.37-1_i386.deb and confirm that there’s nothing executable in there. You should also list the contents with dpkg-deb --contents skype-debian_4.3.0.37-1_i386.deb, and confirm that it doesn’t install anything to places that will be executed by the system, such as to /etc/cron.d. For my own reference the safe .deb has sha256 hash a820e641d1ee3fece3fdf206f384eb65e764d7b1ceff3bc5dee818beb319993c, but you should perform these checks yourself.
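
For the hash check itself, something like this small Python sketch works (just an illustrative equivalent of running sha256sum on the file; the filename matches the commands above):

#!/usr/bin/python3
# Compare the downloaded .deb against a known-good sha256 hash.
import hashlib

EXPECTED = "a820e641d1ee3fece3fdf206f384eb65e764d7b1ceff3bc5dee818beb319993c"

with open("skype-debian_4.3.0.37-1_i386.deb", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "MISMATCH: " + digest)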

Then install Firejail and Xephyr. You can hook Firejail and Xephyr together manually, but Firejail version 0.9.40-rc1 can do it for you, which is very convenient, so we install that from the Debian Experimental archive:

# apt-get install xserver-xephyr firejail/experimental

Here’s an invocation to use the jail:

$ firejail --x11=xephyr --private --private-tmp openbox
$ DISPLAY=$(firemon --x11 | grep "DISPLAY" | sed 's/   DISPLAY //') \
  firejail --private --private-tmp skype

This takes advantage of Firejail’s existing jail profile for Skype. We get the following:

  • A private /home/you so that Skype cannot access any of your files (disadvantage is that Skype can’t remember your username and password; you can look at --private=directory to do something persistent).
  • A private /tmp to avoid it going near any sockets.
  • A private X11 server so that Skype cannot access the contents of any of your other windows (X11 inter-application security is virtually non-existent).
  • The Firejail profile for Skype restricts the hardware it can access to only what it needs i.e. network, camera, microphone etc.
  • The openbox window manager so you can close overlapping windows.

This isn’t perfect. An annoyance is that the Xephyr window sticks around when you close Skype. More seriously, computer security is always an attacker’s advantage game, so this is just an attempt at reducing (optimistically: minimising) the threat posed by non-free code.

Update 2016/vi/1: use openbox

30 May, 2016 01:31AM

hackergotchi for Norbert Preining

Norbert Preining

OpenPHT 1.5.2 for Debian/sid

I have updated the openpht repository with builds of OpenPHT 1.5.2 for Debian/sid for both the amd64 and i386 architectures. For those who have forgotten, OpenPHT is the open source fork of Plex Home Theater that is used on RasPlex; see my last post concerning OpenPHT for details.

plex-debian-new

The repository also contains packages (source and amd64/i386) for shairplay which is necessary for building and running OpenPHT.

sid and testing

For sid use the following lines:

deb http://www.preining.info/debian/ openpht-sid main
deb-src http://www.preining.info/debian/ openpht-sid main

You can also grab the binaries directly here for amd64 and i386, and you can get the source package with

dget http://www.preining.info/debian/pool/main/o/openpht/openpht_1.5.2.514-1.dsc

Note that if you only get the binary debs, you also need libshairplay0 from amd64 or i386.

jessie

Builds for the Debian stable release jessie have not been done yet.

The release file and changes file are signed with my official Debian key 0x860CDC13.

Now be ready for enjoying the next movie!

30 May, 2016 01:15AM by Norbert Preining

May 29, 2016

Iustin Pop

Mind versus body: time perception

Mind versus body: time perception

Since mid-April I've been playing a new game. It's really awesome, and I learned some surprising things.

The game—Zwift—is quite different from the games I'm usually playing. While it does have all or most of the elements of a game, more precisely an MMO, the main point of the game is physical exercise (in the real world). The in-game performance is the result of the (again, real-world) power output.

Playing the game is more or less like many other games: very nice graphics, varied terrain (or not), interaction, or better said competition, with other players, online leader boards, races, gear "upgrades" (only cosmetic AFAIK), etc. The game more or less progresses as usual, but the fact that the main driver is the body changes, to my surprise, the time component of the game.

For me, with a normal game—let's say one of Bioware's Dragon Age games, or one of CD Red's Witcher games—a short gaming session is 2-3 hours, a reasonable session 6-8 hours, and longer ones are for "marathon" gaming sessions. Playing a good game for one hour feels like you've been cheated—one barely starts and has to stop.

On Zwift, things are different. A short session is 20-30 minutes, but this already feels good. A good one is more than one hour, and for me, the longest rides I had were three hours. A three hour session, if done at or near Functional Threshold Power (see here for another article about it), leaves me spent. I just had such a long ride today (at around 85% FTP) and it took me an hour afterwards (and eating) to recover.

The interesting part is that, body exertion aside, the brain sees a 3 hour Zwift session as equivalent to an 8-10 hour gaming session. Both are tiring, and the perception of passed time is the same (long). Same with shorter sessions: if I do a 40-minute ride, it feels subjectively as rewarding as a 2-3 hour normal gaming session. I wonder what mechanism it is that influences this perception. Is it just effort level? But there's no real effort (as in increased heart rate) for computer games. Is it the fact that so much blood is needed for the muscles when cycling that the brain gets comparatively little, so it enters slow-speed mode (hey, who pressed the Turbo button)? In any case, using Zwift results in a much more efficient use of my time when I'm playing just to decompress/relax.

Another interesting difference is how much importance a good night's sleep has on body performance. With computer games, it makes a difference, but not a huge one, and it usually goes away a couple of hours into the game, at least subjectively. With cycling, a bad night results in persistently lower performance all around (for me at least), and one that you easily feel (e.g. in max 5-second average power).

And the last thing I learned, although this shouldn't be a surprise: my FTP is way lower than it's supposed to be (according to the internet). I guess the hundreds of hours I put into pure computer games didn't do anything for my fitness, to my "surprise". I'm curious to see, if I can keep this going, how things will look in ~6 months or so.

29 May, 2016 09:33PM

hackergotchi for Christoph Berg

Christoph Berg

vcswatch is now looking for tags

About a week ago, I extended vcswatch to also look at tags in git repositories.

Previously, it was solely paying attention to the version number in the top paragraph in debian/changelog, and would alert if that version didn't match the package version in Debian unstable or experimental. The idea is that "UNRELEASED" versions will keep nagging the maintainer (via DDPO) not to forget that some day this package needs an upload. This works for git, svn, bzr, hg, cvs, mtn, and darcs repositories (in decreasing order of actual usage numbers in Debian; I had actually tried to add arch support as well, but that VCS is so weird that it wasn't worth the trouble).

There are several shortcomings in that simple approach:

  • Some packages update debian/changelog only at release time, e.g. auto-generated from the git changelog using git-dch
  • Missing or misplaced release tags are not detected

The new mechanism fixes this for git repositories by also looking at the output of git describe --tags. If there are any commits since the last tag, and the vcswatch status according to debian/changelog would otherwise be "OK", a new status "COMMITS" is set. DDPO will report e.g. "1.4-1+2", to be read as "2 commits since the tag [debian/]1.4-1".
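
As a rough illustration of the idea (a hypothetical sketch, not vcswatch's actual implementation; the debian/1.4-1 tag layout and the helper name are assumptions), the status could be derived from git describe like this:

#!/usr/bin/python3
# With --long, git describe always prints <tag>-<count>-g<hash>.
import re
import subprocess

def tag_status(repo_path):
    out = subprocess.check_output(
        ["git", "describe", "--tags", "--long"],
        cwd=repo_path, universal_newlines=True).strip()
    m = re.match(r"^(?P<tag>.+)-(?P<count>\d+)-g[0-9a-f]+$", out)
    tag = m.group("tag").split("/")[-1]  # "debian/1.4-1" -> "1.4-1"
    count = int(m.group("count"))
    if count == 0:
        return "OK"  # the last commit is tagged, i.e. released
    return "COMMITS ({}+{})".format(tag, count)  # e.g. "1.4-1+2"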

Of the 16644 packages using git in Debian, currently 7327 are "OK", 2649 are in the new "COMMITS" state, and 4227 are "NEW". 723 are "OLD" and 79 are "UNREL" which indicates that the package in Debian is ahead of the git repository. 1639 are in an ERROR state.

So far the new mechanism works for git only, but other VCSes could be added as well.

29 May, 2016 05:49PM

May 28, 2016

Russ Allbery

Another small book haul

Book reading is happening, and more book review posting will be happening. I'm a bit behind in writing reviews, but the holiday weekend is a good opportunity to do a bit of catching up.

In the meantime, here are some new acquisitions:

Roxanne J. Coady & Joy Johannessen (ed.) — The Books That Changed My Life (nonfiction)
James S.A. Corey — Caliban's War (sff)
James S.A. Corey — Abaddon's Gate (sff)
Max Gladstone — Full Fathom Five (sff)
Max Gladstone — Last First Snow (sff)
N.K. Jemisin — The Fifth Season (sff)
Guy Gavriel Kay — Children of Earth and Sky (sff)
Naomi Novik — Uprooted (sff)
Ada Palmer — Too Like the Lightning (sff)
Graydon Saunders — Safely You Deliver (sff)
Neal Stephenson — Seveneves (sff)
Jeff VanderMeer — Annihilation (sff)

This is mostly catching up on books that were nominated for awards. I want to read the (legitimate) nominees for Hugo best novel this year if I can find the time, and VanderMeer won the Nebula last year. The rest of Gladstone's series to date was on sale, and I really liked the first book. And of course a new Guy Gavriel Kay is buy on sight.

I'm currently re-reading The Sarantine Mosaic, since I read that before I started writing reviews and Children of Earth and Sky is apparently set in historical contact with it. (It's possible all of Kay's historical fantasies are set in the same universe, but they're usually fairly disconnected.)

28 May, 2016 06:38PM

hackergotchi for Evgeni Golov

Evgeni Golov

how to accidentally break DNS for 15 domains or why you maybe could not send mail to me

TL;DR: DNS for golov.de and other (14) domains hosted on my infra was flaky from 15th to 17th of May, which may have resulted in undelivered mail.

Yeah, I know, I haven't blogged for quite some time. Even not after I switched the engine of my blog from WordPress to Nikola. Sorry!

But this post is not about apologizing or at least not for not blogging.

Last Tuesday, mika sent me a direct message on Twitter (around 13:00) that read „problem auf deiner Seite?“ or "problem on your side/page?". Given that side and page are the same word in German, I thought he meant my (this) website, so I quickly fired up a browser, checked that the site loads (I even checked both HTTP and HTTPS! :-)) and, as everything seemed to be fine and I was at a customer, I only briefly replied "?". A couple of messages later we found out that mika had tried to send a screenshot (from his phone) but that it got lost somewhere. A quick protocol change later (yay, Signal!) and I got the screenshot. It said "<evgeni+grml@golov.de>: Host or domain name not found. Name service error for name=golov.de type=AAAA: Host found, but no data record of requested type". Well, yeah, that looks like a useful error message. And here the journey begins.

For historical nonsense golov.de currently does not have any AAAA records, so it looked odd that Postfix tried that. Even odder was that dig MX golov.de and dig mail.golov.de worked just fine from my laptop.

Still, the message looked worrying and I decided to dig deeper. golov.de is served by three nameservers: ns.die-welt.net, ns2.die-welt.net and ns.inwx.de, and dig was showing proper replies from ns2.die-welt.net and ns.inwx.de but not from ns.die-welt.net, which is the master. That was weird, but gave me a direction to look in, and explained why my initial tests were OK. Another interesting data point was that die-welt.net was served just fine from all three nameservers.

Let's quickly SSH into that machine and look what's happening… Yeah, but I only have my work laptop with me, which does not have my root key (and I still did not manage to set up a Yubikey/Nitrokey/whatever). Thankfully my key was allowed to access the hypervisor, yay console!

Now let's really look. golov.de is served from the bind backend of my PowerDNS, while die-welt.net is served from the MySQL backend. That explains why one domain didn't work while the other did. The relevant zone file looked fine, but the zones.conf was empty. WTF?! That zones.conf is autogenerated by Froxlor, and I had upgraded it during the weekend to get Let's Encrypt support. Oh well, seems I hit a bug, damn. A few PHP hacks later I got my zones.conf generated properly again and all was good.

But what had really happened?

  • On Saturday (around 17:00) I upgraded to Froxlor 0.9.35.1 to get Let's Encrypt support and hit Froxlor bug 1615 without noticing as PowerDNS re-reads zones.conf only when told.

  • On Sunday PowerDNS was restarted because of upgraded packages, thus re-reading zones.conf and properly logging:

    May 15 08:10:59 shokki pdns[2210]: [bindbackend] Parsing 0 domain(s), will report when done
    
  • On Tuesday the issue hit a friend who cared and notified me

  • On Tuesday the issue was fixed (first by a quick restore from etckeeper, later by fixing the generating code):

    May 17 14:56:08 shokki pdns[24422]: [bindbackend] Parsing 15 domain(s), will report when done
    

And the lessons learned?

  • Monitor all your domains, on all your nameservers. (I didn't; a small monitoring sketch follows after this list.)
  • Have emergency access to all your servers. (I did, but it was complicated)
  • Use etckeeper; it's easier to use than backups in such cases.
  • When hitting bugs, look in the bugtracker before solving the issue yourself. (I didn't)
  • Have friends who care :-)
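
A minimal monitoring sketch along those lines, assuming the dnspython library and using illustrative domain and nameserver lists (this is not the monitoring I actually run):

#!/usr/bin/python3
# Query every domain on every authoritative nameserver directly.
# Needs dnspython >= 2.0; older versions use resolver.query() instead.
import dns.resolver

DOMAINS = ["golov.de", "die-welt.net"]
NAMESERVERS = ["ns.die-welt.net", "ns2.die-welt.net", "ns.inwx.de"]

for ns in NAMESERVERS:
    # Look up the nameserver's address with the system resolver.
    ns_addr = dns.resolver.resolve(ns, "A")[0].address
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ns_addr]
    for domain in DOMAINS:
        try:
            resolver.resolve(domain, "SOA")
            print("OK   {} @ {}".format(domain, ns))
        except Exception as exc:
            print("FAIL {} @ {}: {}".format(domain, ns, exc))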

28 May, 2016 04:15PM by evgeni

Stig Sandbeck Mathisen

Puppet 4 uploaded to Debian unstable

Puppet 4 has been uploaded to Debian unstable. This is a major upgrade from Puppet 3.

If you are using Puppet, chances are that it is handling important bits of your infrastructure, and you should upgrade with care.

Here are some points to consider.

Read Puppet’s upgrade checklist

First, there are a number of changes in Puppet itself.

There is an upgrade checklist for Puppet (the software) published by Puppet (the company).

Please read this before you upgrade, and not after.

Using exported resources?

In Puppet 4, using exported resources requires PuppetDB, which is not packaged in Debian.

Getting PuppetDB

Puppet provides good installation documentation for PuppetDB, and an apt software repository where you can get the packages you require.

Editor support packages

The packages vim-puppet and puppet-el provide support for editors Vim and Emacs. These are no longer built from the puppet source package.

The sources for these have moved to separate repositories, and will get individual source packages.

For vim-puppet, there are two alternatives for the new packaging: rodjek/vim-puppet and puppetlabs/puppet-syntax-vim.

For puppet-el, puppetlabs/puppet-syntax-emacs and lunaryorn/puppet-mode are available.

28 May, 2016 02:23PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.100.3.0

armadillo image

The first Armadillo release of the 7.* series is out: a new version 7.100.3. We uploaded RcppArmadillo 0.7.100.3.0 to CRAN and Debian. This followed the usual thorough reverse-dependency checking of the by now 230 packages using it.

This release now requires a recent enough compiler. As g++ is so common, we explicitly test for version 4.6 or newer. So if you happen to be on an older RHEL or CentOS release, you may need to get yourself a more modern compiler. R on Windows is now at 4.9.3 which is a decent (yet stable) choice; the 4.8 series of g++ will also do. For reference, the current LTS of Ubuntu is at 5.3.1, and we have g++ 6.1 available in Debian testing.

This new upstream release adds a few new helper functions (which are particularly useful in statistics, but were of course already available to us via Rcpp), more slicing of Cube data structures and a brand new sparse matrix decomposition module courtesy of Yixuan Qiu -- whom R users know as the author of the RSpectra package (which replaces his older rArpack package) and of course all the most excellent work he provided to RcppEigen.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

Changes in this release are as follows:

Changes in RcppArmadillo version 0.7.100.3.0 (2016-05-25)

  • Upgraded to Armadillo test release 7.100.3

    • added erf(), erfc(), lgamma()

    • added .head_slices() and .tail_slices() to subcube views

    • spsolve() now requires SuperLU 5.2

    • eigs_sym(), eigs_gen() and svds() now use a built-in reimplementation of ARPACK for real (non-complex) matrices (code contributed by Yixuan Qiu)

  • The configure code now checks against old g++ versions which are no longer sufficient to build the package.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 May, 2016 02:23PM

Petter Reinholdtsen

Tor - from its creator's mouth 11 years ago

A little more than 11 years ago, one of the creators of Tor, and the current President of the Tor project, Roger Dingledine, gave a talk for the members of the Norwegian Unix User group (NUUG). A video of the talk was recorded, and today, thanks to the great help from David Noble, I finally was able to publish the video of the talk on Frikanalen, the Norwegian open channel TV station where NUUG currently publishes its talks. You can watch the live stream using a web browser with WebM support, or check out the recording on the video on demand page for the talk "Tor: Anonymous communication for the US Department of Defence...and you.".

Here is the video included for those of you using browsers with HTML video and Ogg Theora support:

I guess the gist of the talk can be summarised quite simply: if you want to help the military in the USA (and everyone else), use Tor. :)

28 May, 2016 12:20PM

Scarlett Clark

Debian: Outreachy, Debian Reproducible builds Week 1 Progress Report

It has been an exciting first week of my internship. I was able to produce a few patches
and submit them upstream, as well as into Debian packaging. I am hopeful they will get accepted,
preferably upstream so all can benefit!

kapptemplate:
https://bugs.kde.org/show_bug.cgi?id=363448
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825122

choqok:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825322

However, after speaking with Lisandro (the choqok maintainer) I decided a better course of action
is to try to fix the actual source of the problem: kconfig_compiler from kde4libs is generating
non-UTF-8 cpp and header files under certain conditions, such as an environment that does not have
a locale set. Of course, I have some help with this from wonderful folks in KDE, which is good
because the kde4libs codebase is HUGE! So I hope to have two new bugs early next week for choqok.

I checked up on my existing kdevplatform bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815962

And noticed it has not received attention, so I created upstream bug:
https://bugs.kde.org/show_bug.cgi?id=363615

And updated the debian bug patch DEP 3 with upstream bug url.

I have been working on kdevelop-php without success yet. It looks like a build-id issue
(I still need to find the source) and an embedded kernel, which I think I found,
though I will reach out to my awesome mentor to get some help on this one.

I did not knock out quite as many builds as I wanted, but I picked some hard ones
that are new to me 🙂 So in the end it was a very successful first week, because I
have learned several new things that will help me with future reproducible builds.

Have a great weekend.

28 May, 2016 12:00AM by Scarlett Clark

May 27, 2016

Mike Gabriel

MATE 1.14 landing in Debian unstable...

I just did a bundle upload of all MATE 1.14 related packages to Debian unstable. Packages are currently building for the 23 architectures supported by Debian; build status can be viewed on the DDPO page of the Debian MATE Packaging Team [1]

Credits

Again a big thanks to the packaging team. Martin Wimpress again did a fabulous job in bumping all packages towards the 1.14 release series during the last few weeks. Over the past week, I reviewed his work and uploaded all binary packages to a staging repository.

Also a big thanks to Vangelis Mouhtsis, who recently added more hardening support to all those MATE packages that do some sort of C compilation at build time.

After testing all MATE 1.14 packages on a Debian unstable system, I decided to do a bundle upload today. Packages should be falling out of the build daemons within the next couple of hours/days (depending on the architecture being built for).

GTK2 -> GTK3

The greatest change for this release of MATE to Debian is the switch over from GTK2 to GTK3.

People using the MATE desktop environment on Debian systems are invited to test the new MATE 1.14 packages and give feedback via the Debian bug tracker, esp. on the user experience regarding the switch over to GTK3.

Thanks to all who help getting MATE 1.14 in Debian better every day!!!

Known issues when running in NXv3 sessions

The new GTK3 build of MATE works fine locally (against local X.org server). However, it causes some trouble (i.e. graphical glitches) when running in an NXv3 based remote desktop session. Those issues have to be addressed by me (while wearing my NXv3 upstream hat), I guess (sigh...).

light+love,
Mike

[1] https://qa.debian.org/developer.php?login=pkg-mate-team@lists.alioth.deb...

27 May, 2016 01:11PM by sunweaver

Patrick Matthäi

Package updates from May

Here is some news about my packaging work from May:

  • OTRS
    • I have updated it to version 5.0.10
    • Also I have updated the jessie backports version from 5.0.8 to 5.0.10
    • I still have to look into the new issue #825291 (database update with Postgres fails with a UTF-8 Perl error); maybe someone has an idea?
  • needrestart
    • Thanks to Thomas Liske (upstream author) for addressing almost all open bugs and wishes from the Debian BTS and Github. Version 2.8 fixes 6 Debian bugs
    • Already available in jessie-backports :)
  • geoip-database
    • As usual package updated and uploaded to jessie-backports and wheezy-backports-sloppy
  • geoip
    • Is someone here interested in fixing #811767 with GCC 6? I was not able to fix it
    • .. and if it compiles, the result segfaults :(
  • fglrx-driver
    • I have removed the fglrx-driver from the Debian sid/stretch repository
    • This means that fglrx in Debian is dead
    • You should use the amdgpu driver instead :)
  • icinga2
    • After some more new upstream releases I have updated the jessie-backports version to 2.4.10 and it works like a charm :)

27 May, 2016 09:18AM by the-me

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 0.1.9

rfoaas greed example

Time for a new release! We just updated rfoaas on CRAN, and it now corresponds to version 0.1.9 of the FOAAS API.

The rfoaas package provides an interface for R to the most excellent FOAAS service--a modern, scalable and RESTful web service for the frequent need to tell someone to f$#@ off.

Release 0.1.9 brings three new access point functions: greed(), me() and morning(). It also adds an S3 print method for the returned object. A demo of the first of these additions is shown in the image in this post.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 May, 2016 02:02AM

May 26, 2016

Iustin Pop

First run in 2016

Today I finally ran a bit outside, for the first time in 2016. Actually, for even longer—the first run since May 2015. I have been only biking in the last year, so this was a very pleasant change of pace (hah), even if just a short run (below 4K).

The funny thing is that since I've been biking consistently (and hard) in the last two months, my fitness level is reasonable, so I managed to beat my all-time personal records for 1 Km and 1 mile (I never sprint, so these are just 'best of' segments out of longer runs). It's probably because I only did ~3.8Km, but still, I was very surprised, since I planned and did an easy run. How could I beat my all-time PR, even better than the times back in 2012 when I was doing regular running?

Even the average pace over the entire run was better than my last training runs (~5Km) back in April/May 2015, by 15-45s.

I guess cross-training does work after all, at least when competing against myself ☺

26 May, 2016 09:06PM

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Do you want Qt5's QWebEngine in Debian? Do you have library packaging skills? If so, step up!

So far the only missing submodule in Debian's Qt5 stack is QtWebEngine. None of us current Qt maintainers has the time/will to do the necessary work to have it properly packaged.

So if you would like to have QtWebEngine in Debian and:

  • You have C++ libraries' packaging skills.
  • You have a powerful enough machine/enough patience to do the necessary builds (8+ GB RAM+swap required).
  • You are willing to deal with 3rd party embedded software.
  • You are willing to keep up with security fixes.
  • You are accessible through IRC and have the necessary communications skills to work together with the rest of the team.
Then you are the right person for this task. Do not hesitate to ping me on #debian-kde, irc.oftc.net.

26 May, 2016 02:51PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

hackergotchi for Michael Prokop

Michael Prokop

My talk at OSDC 2016: Continuous Integration in Data Centers – Further 3 Years Later

The Open Source Data Center Conference (OSDC) was a pleasure and a great event; Netways clearly knows how to run a conference.

This year at OSDC 2016 I gave a talk titled “Continuous Integration in Data Centers – Further 3 Years Later“. The slides from this talk are available online (PDF, 6.2MB). Thanks to the Netways folks, a recording is also available:

This embedded video doesn’t work for you? Try heading over to YouTube.

Note: my talk was kind of an update and extension for the (german) talk I gave at OSDC 2013. If you’re interested, the slides (PDF, 4.3MB) and the recording (YouTube) from my talk in 2013 are available online as well.

26 May, 2016 07:06AM by mika

hackergotchi for Norbert Preining

Norbert Preining

Shotwell vs. digiKam

How to manage your photos? – That is probably the biggest question for anyone doing anything with a photo camera. As camera resolutions grow, the amount of data we have to manage grows ever larger. In my case I am talking about more than 50000 photos and videos adding up to about 200GB of disk space, constantly growing. There are several photo management applications out there; I guess the most commonly used ones are Shotwell for the Gnome desktop, digiKam for the KDE world, and FotoXX. I have now used Shotwell and digiKam for quite some time, and collect here my experiences of the strengths and weaknesses of the two programs. FotoXX seems to be very powerful, too, but I haven’t tested it yet.
shotwell-digikam

There is no clear winner here, unfortunately. Both have their strengths and their weaknesses. And as a consequence I am using both in parallel.

Before I start, a clear declaration: I have been using Shotwell for many years, and have myself contributed considerable code to Shotwell, in particular the whole comment system (comments for photos and events), as well as improvements to the Piwigo upload features. I started using digiKam some months ago when I started to look into offloading parts of my photo library to external devices. Since then I have used both in parallel.

Let us start with what these programs say about themselves:

Shotwell is declared as a Photo Manager for Gnome 3, with the following features:

  • Import from disk or camera
  • Organize by time-based Events, Tags (keywords), Folders, and more
  • View your photos in full-window or fullscreen mode
  • Crop, rotate, color adjust, straighten, and enhance photos
  • Slideshow
  • Video and RAW photo support
  • Share to major Web services, including Facebook, Flickr, and YouTube

digiKam says about itself that it is an advanced digital photo management application for Linux, Windows, and Mac-OSX. It has a very long feature page with a short list at the beginning:

  • import pictures
  • organize your collection
  • view items
  • edit and enhance
  • create (slideshows, calendar, print, …)
  • share your creations (using social web services, email, your own web gallery, …)

Now that sounds like they are very similar, but upon using them it turns out that there are huge differences, which can easily be summed up in a short statement:

Shotwell is Gnome 3 – that means – get rid of functionality.

digiKam is KDE – that means – provide as much functionality as possible.

Now before you run after me with a knife because you do not agree with me on the above, either read on, or stop reading. I am not interested in flame wars over Gnome versus KDE philosophy. I have been using Gnome for many years, and tried to convince myself of G3 for more than a year – until I threw out all of it but selected programs – and their number is going down.

Let us look at those aspects I am using: organization, offline, sharing, editing.

Organization

In Shotwell, your photos are organized into events, independent from their location on disk. These events can have a title and a comment and collect a set of related photos. In my case I often have photos from two or more cameras (my camera and mobile, photos of friends), which I keep in separate directories within a main directory for the event. For example I have a folder 2016/05.21-22.Climbing.Tanigakadake with two sub-folders Norbert (for my photos) and Friend (for my friend's photos).

In Shotwell all the photos are in the same event, which is shown with the title 05.21-22 Tanigakawadake Climbing under the year 2016 and the month of May.
shotwell-events

So in short – Shotwell distinguishes between disk layout and album/event names.

In digiKam there is a strict 1:1 connection between disk layout and album names – albums are directories. One can adjust the viewer to show all photos of sub-albums in the main album, and by this one can achieve the same effect of merging all photos of my friend and myself. The good thing about this approach is that one can easily have sub-albums: imagine visiting three different islands of Hawaii during one trip. This is something easy to achieve in digiKam, but hard in Shotwell.
digikam-albums

Other organization methods

Both Shotwell and digiKam support tags, including hierarchical tags, and rating (0-5 stars). Shotwell in addition has a quick flag action that I used quite often for the initial selection of photos, as well as accepted and rejected flags. digiKam also has so-called “picks” (no pick, reject, pending, accepted) and “colors” (not used by me so far). Both programs have some face detection support, but this too I haven’t used.

So with respect to organization there is no clear winner. I like the Event idea of Shotwell, or better, the separation of events from the disk structure. But on the other hand, Shotwell does not allow for sub-albums, which is also a pain.

No clear winner – draw

Offline storage

That is simple. Shotwell: forget it, not reasonably possible. One can move parts to an external HD, then unplug it, and Shotwell will tell you that all the photos are missing. And when you plug the external HD in, it will redetect them. But this is not proper support, just a consequence of hash sum storage. Also, separation into several libraries (online and offline) is not supported.

On the other hand, digiKam supports multiple libraries, partly offline, without a hitch. I would love to have this feature in Shotwell, because I need to free disk space, urgently!!!

Clear winner: digiKam

Sharing

Again here my testing is very restricted – I am using my own Piwigo installation exclusively. Here Shotwell is excellent, providing support for various features: upload to an existing category, create a new category, resize, optionally remove tags, add comments to albums and photos, etc. (partly implemented by me 😉)
shotwell-piwigo

On the other hand digiKam has a very barebones interface to Piwigo – you can upload photos to an existing album and resize them, but that is already everything. One also needs to say that the list of supported services in digiKam is by far longer than the one in Shotwell, but the main services (the usual suspects) are supported in both.
digikam-piwigo

Clear winner: Shotwell (but I haven’t tested other upload services).

Editing

The editing capabilities of Shotwell are again very restricted: red-eye removal, resizing, levels, …
shotwell-editing

digiKam here is more like photo editing software with loads of tools and features. I haven’t even explored all the options – maybe I can get rid of GIMP, too?
digikam-editing

Clear winner: digiKam

Conclusions

While I am still working with both, I actually would love to move completely to digiKam. I simply cannot stand the Gnome 3 philosophy of reducing functionality to a minimum for dummy users. There is a market for that, sure, but I am not part of it. Unfortunately, the missing Event support, and even more the completely minimal support for Piwigo sharing in digiKam, is a big show stopper at the moment. Even after testing the upcoming digiKam 5.0 version for some time, I didn’t see any improvement with respect to sharing.

That leaves me with the only option of continuing to work with both programs, and hoping to get a new, bigger SSD before the current one runs out of space. Of course I could start hacking on the digiKam source – maybe I will do this when I have a bit more time – to add proper Piwigo support.

26 May, 2016 03:33AM by Norbert Preining

May 25, 2016

Petter Reinholdtsen

Isenkram with PackageKit support - new version 0.23 available in Debian unstable

The isenkram system is a user-focused solution in Debian for handling hardware-related packages. The idea is to have a database of mappings between hardware and packages, and pop up a dialog suggesting that the user install the packages needed to use a given hardware dongle. Some use cases are when you insert a Yubikey, it proposes to install the software needed to control it; when you insert a braille reader it proposes to install the packages needed to send text to the reader; and when you insert a ColorHug screen calibrator it suggests to install the driver for it. The system works well, and even has a few command line tools to install firmware packages and packages for the hardware already in the machine (as opposed to hotpluggable hardware).

The system was initially written using aptdaemon, because I found good documentation and example code on how to use it. But aptdaemon is going away and is generally being replaced by PackageKit, so Isenkram needed a rewrite. And today, thanks to the great patch from my colleague Sunil Mohan Adapa in the FreedomBox project, the rewrite finally took place. I've just uploaded a new version of Isenkram into Debian Unstable with the patch included, and the default for the background daemon is now to use PackageKit. To check it out, install the isenkram package, insert some hardware dongle and see if it is recognised.

If you want to know what kind of packages isenkram would propose for the machine it is running on, you can check out the isenkram-lookup program. This is what it looks like on a Thinkpad X230:

% isenkram-lookup 
bluez
cheese
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tleds
tp-smapi-dkms
tp-smapi-source
tpb
%

The hardware mappings come from several places. The preferred way is for packages to announce their hardware support using the cross distribution appstream system. See previous blog posts about isenkram to learn how to do that.

25 May, 2016 08:20AM

May 24, 2016

Carl Chenet

Tweet your database with db2twitter

Follow me also on Diaspora* or Twitter

You have a database (MySQL, PostgreSQL, see supported database types), a tweet pattern, and want to automatically tweet on a regular basis? No need for RSS, fancy tricks, or a 3rd party website to translate RSS to Twitter or whatever. Just use db2twitter.

A quick example of a tweet generated by db2twitter:

db2twitter

The new version 0.6 offers support for tweets with an image. How cool is that?


db2twitter is developed by and run for LinuxJobs.fr, the job board of the French-speaking Free Software and Open Source community.

banner-linuxjobs-small

db2twitter also has cool options like:

  • only tweet during user-specified times (e.g. 9AM-6PM)
  • use a user-specified SQL filter in order to get data from the database (e.g. only fetch rows where status == "edited")

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.
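
Conceptually, the core loop of such a tool is small. Here is a rough sketch of the idea using SQLAlchemy and Tweepy (my illustration only, not db2twitter's actual code; the table, query and credentials are all made up):

    import tweepy
    from sqlalchemy import create_engine

    # hypothetical database and Twitter credentials
    engine = create_engine("mysql://user:password@localhost/jobs")
    auth = tweepy.OAuthHandler("consumer_key", "consumer_secret")
    auth.set_access_token("access_token", "access_secret")
    api = tweepy.API(auth)

    # fetch rows matching a user-specified SQL filter,
    # then tweet each one using a simple pattern
    rows = engine.execute("SELECT title, url FROM jobs WHERE status = 'edited'")
    for title, url in rows:
        api.update_status("New job: {} {}".format(title, url))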

24 May, 2016 10:00PM by Carl Chenet

Mike Gabriel

Arctica Project: The Telekinesis Framework, coming soon...

The Arctica Project is a task force of people reinventing the realm of remote desktop computing on Linux. One core component for multimedia experience in remote desktop / application scenarios is the to-be-reloaded / upcoming Telekinesis Framework.

Telekinesis provides a framework for developing GUI applications that have a client and server side component. Those applications are visually merged and presented to the end user in such a way that the end user's “user experience” is the same as if the user was interacting with a strictly server side application. Telekinesis mediates the communication between those server side and client side application parts.

As a reference implementation you can imagine a server side media player GUI (TeKi-aware application) and a client side video overlay (corresponding TeKi-aware service). The media player GUI "remote-controls" the client side video overlay. The video overlay receives its video stream from the server. All these interactions are mediated through Telekinesis.

A proof of concept was developed for X2Go in 2012. For the Arctica Server, we are currently doing a (much cleaner!) rewrite of the original prototype [1]. See [2] for the first whitepaper describing how to integrate Telekinesis into existing remote desktop solutions. See [3] for a visual demonstration of the potentials of Telekinesis (still using X2Go underneath and the original Telekinesis prototype).

The heavy lifting around Telekinesis development and conceptual design is performed by my project partner Lee from the GZ Nianguan FOSS Team [4]. Thanks for continuously putting your time and energy into the co-founding of the Arctica Project. Thanks for always reminding me to do benchmarks!!!

light+love,
Mike

[1] http://code.x2go.org/gitweb?p=telekinesis.git;a=summary
[2] https://github.com/ArcticaProject/ArcticaDocs/blob/master/Telekinesis/Te...
[3] https://www.youtube.com/watch?v=57AuYOxXPRU
[4] https://github.com/gznget

24 May, 2016 08:21PM by sunweaver

Thorsten Alteholz

Debian and the Internet of Things

Everybody is talking about the Internet of Things. Unfortunately there is no sign of it in Debian yet. Besides some smaller packages like sispmctl, usbrelay or the 1-wire support in digitemp and owfs, there is not much software to control devices over a network.

With the recent upload of alljoyn-core-1504 this might change.

The Alljoyn Framework, where the Alljoyn Core is just one of several modules, lets devices and applications detect each other and communicate with one another over a D-Bus-like message bus. Development of the framework was started by Qualcomm some years ago and is now managed by the AllSeen Alliance, a nonprofit consortium. The software is licensed under the ISC license.

This first upload is just the first step of a long journey. Other modules that compose the framework and already have a released tarball are related to lighting products, gateways to overcome the boundaries of the local network, and much more. In the near future it is also planned to have modules that attach Z-Wave, ZigBee or Bluetooth devices to the Alljoyn bus.

So all in all, this looks like an exciting task and everybody is invited to help maintaining the software in Debian.

24 May, 2016 06:04PM by alteholz

hackergotchi for Michal Čihař

Michal Čihař

Gammu release day

There has been some silence on the Gammu release front and it's time to change that. Today Gammu, python-gammu and Wammu have all been released. As you might guess, all are bugfix releases.

List of changes for Gammu 1.37.3:

  • Improved support for Huawei E398.
  • Improved support for Huawei/Vodafone K4505.
  • Fixed possible crash if SMSD used in library.
  • Improved support for Huawei E180.

List of changes for python-gammu 2.6:

  • Fixed error when creating new contact.
  • Fixed possible testsuite errors.

List of changes for Wammu 0.41:

  • Fixed crash with unicode home directory.
  • Fixed possible crashes in error handler.
  • Improved error handling when scanning for Bluetooth devices.

All updates are also on their way to Debian sid and Gammu PPA.

Would you like to see more features in the Gammu family? You can support further Gammu development at Bountysource salt or by direct donation.


24 May, 2016 04:00PM

hackergotchi for Alberto García

Alberto García

I/O bursts with QEMU 2.6

QEMU 2.6 was released a few days ago. One new feature that I have been working on is the new way to configure I/O limits in disk drives to allow bursts and increase the responsiveness of the virtual machine. In this post I’ll try to explain how it works.

The basic settings

First I will summarize the basic settings that were already available in earlier versions of QEMU.

Two aspects of the disk I/O can be limited: the number of bytes per second and the number of operations per second (IOPS). For each one of them the user can set a global limit or separate limits for read and write operations. This gives us a total of six different parameters.

I/O limits can be set using the throttling.* parameters of -drive, or using the QMP block_set_io_throttle command. These are the names of the parameters for both cases:

-drive                  block_set_io_throttle
throttling.iops-total   iops
throttling.iops-read    iops_rd
throttling.iops-write   iops_wr
throttling.bps-total    bps
throttling.bps-read     bps_rd
throttling.bps-write    bps_wr

It is possible to set limits for both IOPS and bps at the same time, and for each case we can decide whether to have separate read and write limits or not, but if iops-total is set then neither iops-read nor iops-write can be set. The same applies to bps-total and bps-read/write.

The default value of these parameters is 0, and it means unlimited.

In its most basic usage, the user can add a drive to QEMU with a limit of, say, 100 IOPS with the following -drive line:

-drive file=hd0.qcow2,throttling.iops-total=100

We can do the same using QMP. In this case all these parameters are mandatory, so we must set to 0 the ones that we don’t want to limit:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }

I/O bursts

While the settings that we have just seen are enough to prevent the virtual machine from performing too much I/O, it can be useful to allow the user to exceed those limits occasionally. This way we can have a more responsive VM that is able to cope better with peaks of activity while keeping the average limits lower the rest of the time.

Starting from QEMU 2.6, it is possible to allow the user to do bursts of I/O for a configurable amount of time. A burst is an amount of I/O that can exceed the basic limit, and there are two parameters that control them: their length and the maximum amount of I/O they allow. These two can be configured separately for each one of the six basic parameters described in the previous section, but here we’ll use ‘iops-total’ as an example.

The I/O limit during bursts is set using ‘iops-total-max’, and the maximum length (in seconds) is set with ‘iops-total-max-length’. So if we want to configure a drive with a basic limit of 100 IOPS and allow bursts of 2000 IOPS for 60 seconds, we would do it like this (the line is split for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-total-max=2000,
          throttling.iops-total-max-length=60

Or with QMP:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0,
        "iops_max": 2000,
        "iops_max_length": 60,
     }
   }

With this, the user can perform I/O on hd0.qcow2 at a rate of 2000 IOPS for 1 minute before it’s throttled down to 100 IOPS.

The user will be able to do bursts again if there’s a sufficiently long period of time with unused I/O (see below for details).

The default value for ‘iops-total-max’ is 0 and it means that bursts are not allowed. ‘iops-total-max-length’ can only be set if ‘iops-total-max’ is set as well, and its default value is 1 second.

Controlling the size of I/O operations

When applying IOPS limits all I/O operations are treated equally regardless of their size. This means that the user can take advantage of this in order to circumvent the limits and submit one huge I/O request instead of several smaller ones.

QEMU provides a setting called throttling.iops-size to prevent this from happening. This setting specifies the size (in bytes) of an I/O request for accounting purposes. Larger requests will be counted proportionally to this size.

For example, if iops-size is set to 4096 then an 8KB request will be counted as two, and a 6KB request will be counted as one and a half. This only applies to requests larger than iops-size: smaller requests will be always counted as one, no matter their size.

The default value of iops-size is 0 and it means that the size of the requests is never taken into account when applying IOPS limits.
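
To make the accounting rule concrete, here is a small sketch in Python of the proportional counting just described (an illustration of the rule only, not QEMU's actual implementation):

    def accounted_iops(request_bytes, iops_size):
        # requests up to iops-size always count as one operation
        if iops_size == 0 or request_bytes <= iops_size:
            return 1.0
        # larger requests are counted proportionally to iops-size
        return request_bytes / float(iops_size)

    assert accounted_iops(8192, 4096) == 2.0   # an 8KB request counts as two
    assert accounted_iops(6144, 4096) == 1.5   # a 6KB request counts as one and a half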

Applying I/O limits to groups of disks

In all the examples so far we have seen how to apply limits to the I/O performed on individual drives, but QEMU allows grouping drives so they all share the same limits.

This feature is available since QEMU 2.4. Please refer to the post I wrote when it was published for more details.

The Leaky Bucket algorithm

I/O limits in QEMU are implemented using the leaky bucket algorithm (specifically the “Leaky bucket as a meter” variant).

This algorithm uses the analogy of a bucket that leaks water constantly. The water that gets into the bucket represents the I/O that has been performed, and no more I/O is allowed once the bucket is full.

To see the way this corresponds to the throttling parameters in QEMU, consider the following values:

  iops-total=100
  iops-total-max=2000
  iops-total-max-length=60
  • Water leaks from the bucket at a rate of 100 IOPS.
  • Water can be added to the bucket at a rate of 2000 IOPS.
  • The size of the bucket is 2000 x 60 = 120000.
  • If iops-total-max is unset then the bucket size is 100.

bucket

The bucket is initially empty, therefore water can be added until it’s full at a rate of 2000 IOPS (the burst rate). Once the bucket is full we can only add as much water as it leaks, therefore the I/O rate is reduced to 100 IOPS. If we add less water than it leaks then the bucket will start to empty, allowing for bursts again.

Note that since water is leaking from the bucket even during bursts, it will take a bit more than 60 seconds at 2000 IOPS to fill it up. After those 60 seconds the bucket will have leaked 60 x 100 = 6000, allowing for 3 more seconds of I/O at 2000 IOPS.

Also, due to the way the algorithm works, longer bursts can be done at a lower I/O rate, e.g. 1000 IOPS during 120 seconds.
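
Those numbers are easy to check with a tiny simulation of the "leaky bucket as a meter" variant (a sketch of the algorithm as described above, not QEMU's actual code):

    leak_rate = 100          # iops-total: the bucket leaks 100 units per second
    burst_rate = 2000        # iops-total-max: water is added at up to 2000 per second
    bucket_size = 2000 * 60  # iops-total-max times iops-total-max-length

    level, seconds = 0.0, 0
    # burst at full speed for as long as the bucket has room
    while level + burst_rate - leak_rate <= bucket_size:
        level += burst_rate - leak_rate   # net fill per second while bursting
        seconds += 1
    print(seconds)   # 63: the bucket leaks even during the burst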

Acknowledgments

As usual, my work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the QEMU development team.

igalia-outscale

Enjoy QEMU 2.6!

24 May, 2016 11:47AM by berto

May 23, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

PostBooks, PostgreSQL and pgDay.ch talk

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

Postbooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about Postbooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility (businesses can choose any programmer to modify the code), and SQL back-ends, multi-user support and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software; people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind Postbooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

23 May, 2016 05:35PM by Daniel.Pocock

Enrico Zini

I chipped in

I clicked on a random link and I found myself again in front of a wired.com popup that wanted to explain to me what I have to think about adblockers.

This time I was convinced, and I took my wallet out.

I finally donated $35 to AdBlock.

(And then somebody pointed me to uBlock Origin and I switched to that.)

23 May, 2016 12:45PM

Petter Reinholdtsen

Discharge rate estimate in new battery statistics collector for Debian

Yesterday I updated the battery-stats package in Debian with a few patches sent to me by skilled and enterprising users. There were some nice user-visible changes. First of all, both desktop menu entries now work. A design flaw in one of the scripts made the history graph fail to show up (its PNG was dumped in ~/.xsession-errors) if no controlling TTY was available. The script worked when called from the command line, but not when called from the desktop menu. I changed this to look for a DISPLAY variable or a TTY before deciding where to draw the graph, and now the graph window pops up as expected.

The next new feature is a discharge rate estimator in one of the graphs (the one showing the last few hours). New is also the use of colours, showing charging in blue and discharging in red. The percentages in this graph are relative to the last full charge, not battery design capacity.
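
A discharge rate estimate like this can be as simple as the least-squares slope over the recent samples. Here is a minimal sketch of the idea (my illustration, not the actual battery-stats code):

    # estimate the discharge rate in percent per hour
    # from a list of (hours, percent) samples
    def discharge_rate(samples):
        n = float(len(samples))
        sx = sum(t for t, p in samples)
        sy = sum(p for t, p in samples)
        sxx = sum(t * t for t, p in samples)
        sxy = sum(t * p for t, p in samples)
        # slope of the least-squares line fitted through the samples
        return (n * sxy - sx * sy) / (n * sxx - sx * sx)

    print(discharge_rate([(0.0, 90), (0.5, 84), (1.0, 78)]))   # -12.0 %/hour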

The other graph shows the entire history of the collected battery statistics, comparing it to the design capacity of the battery to visualise how the battery life time gets shorter over time. The red line in this graph is what the previous graph considers 100 percent:

In this graph you can see that I only charge the battery to 80 percent of last full capacity, and how the capacity of the battery is shrinking. :(

The last new feature is in the collector, which will now handle more hardware models. On some hardware, Linux power supply information is stored in /sys/class/power_supply/ACAD/, while the collector previously only looked in /sys/class/power_supply/AC/. Now both are checked to figure out whether power is connected to the machine.

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. Patches are very welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

23 May, 2016 07:35AM

May 22, 2016

Reproducible builds folks

Reproducible builds: week 56 in Stretch cycle

What happened in the Reproducible Builds effort between May 15th and May 21st 2016:

Media coverage

Blog posts from our GSoC and Outreachy contributors:

Documentation update

Ximin Luo clarified instructions on how to set SOURCE_DATE_EPOCH.
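
For reference, the convention those instructions describe is simple: a build tool that would normally embed the current time uses the SOURCE_DATE_EPOCH environment variable instead, when it is set. A minimal sketch in Python:

    import os
    import time

    # use SOURCE_DATE_EPOCH when set, fall back to the current time otherwise
    build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time)))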

Toolchain fixes

  • Joao Eriberto Mota Filho uploaded txt2man/1.5.6-4, which honours SOURCE_DATE_EPOCH to generate reproducible manpages (original patch by Reiner Herrmann).
  • Dmitry Shachnev uploaded sphinx/1.4.1-1 to experimental with improved support for SOURCE_DATE_EPOCH (original patch by Alexis Bienvenüe).
  • Emmanuel Bourg submitted a patch against debhelper to use a fixed username while building ant packages.

Other upstream fixes

  • Doxygen merged a patch by Ximin Luo, which uses UTC as timezone for embedded timestamps.
  • CMake applied a patch by Reiner Herrmann in their next branch, which sorts file lists obtained with file(GLOB).
  • GNU tar 1.29 with support for --clamp-mtime has been released upstream, closing #816072, which was the blocker for #759886 "dpkg-dev: please make mtimes of packaged files deterministic" which we now hope will be closed soon.

Packages fixed

The following 18 packages have become reproducible due to changes in their build dependencies: abiword angband apt-listbugs asn1c bacula-doc bittornado cdbackup fenix gap-autpgrp gerbv jboss-logging-tools invokebinder modplugtools objenesis pmw r-cran-rniftilib x-loader zsnes

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

  • bzr/2.7.0-6 by Jelmer Vernooij.
  • libsdl2/2.0.4+dfsg2-1 by Manuel A. Fernandez Montecelo.
  • pvm/3.4.5-13 by James Clarke.
  • refpolicy/2:2.20140421-11 by Laurent Bigonville.
  • subvertpy/0.9.3-4 by Jelmer Vernooij.

Patches submitted that have not made their way to the archive yet:

  • #824413 against binutils by Chris Lamb: filter build user and date from test log case-insensitively
  • #824452 against python-certbot by Chris Lamb: prevent PID from being embedded into documentation (forwarded upstream)
  • #824453 against gtk-gnutella by Chris Lamb: use SOURCE_DATE_EPOCH for deterministic timestamp (merged upstream)
  • #824454 against python-latexcodec by Chris Lamb: fix for parsing the changelog date
  • #824472 against torch3 by Alexis Bienvenüe: sort object files while linking
  • #824501 against cclive by Alexis Bienvenüe: use SOURCE_DATE_EPOCH as embedded build date
  • #824567 against tkdesk by Alexis Bienvenüe: sort order of files which are parsed by mkindex script
  • #824592 against twitter-bootstrap by Alexis Bienvenüe: use shell-independent printing
  • #824639 against openblas by Alexis Bienvenüe: sort object files while linking
  • #824653 against elkcode by Alexis Bienvenüe: sort list of files locale-independently
  • #824668 against gmt by Alexis Bienvenüe: use SOURCE_DATE_EPOCH for embedded timestamp (similar patch by Bas Couwenberg already applied and forwarded upstream)
  • #824808 against gdal by Alexis Bienvenüe: sort object files while linking
  • #824951 against libtomcrypt by Reiner Herrmann: use SOURCE_DATE_EPOCH for timestamp embedded into metadata

Reproducibility-related bugs filed:

  • #824420 against python-phply by ceridwen: parsetab.py file is not included when building with DEB_BUILD_OPTIONS="nocheck"
  • #824572 against dpkg-dev by Ximin Luo: request to export SOURCE_DATE_EPOCH in /usr/share/dpkg/*.mk.

Package reviews

51 reviews have been added, 19 have been updated and 15 have been removed this week.

22 FTBFS bugs have been reported by Chris Lamb, Santiago Vila, Niko Tyni and Daniel Schepler.

tests.reproducible-builds.org

Misc.

  • During the discussion on debian-devel about PIE, an archive rebuild was suggested by Bálint Réczey, and Holger Levsen suggested to coordinate this with a required archive rebuild for reproducible builds.
  • Ximin Luo improved misc.git/reports (=the tools to help writing the weekly statistics for this blog) quite a bit, h01ger contributed a little too.

This week's edition was written by Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

22 May, 2016 09:44PM

Antonio Terceiro

Adopting pristine-tar

As of yesterday, I am the new maintainer of pristine-tar. As is the case for most of Joey Hess' creations, it is an extremely useful tool, and used in a very large number of Debian packages which are maintained in git.

My first upload was mostly of a terrain-reconnaissance nature: I did some housekeeping tasks, such as making the build idempotent and making sure all binaries are built with security hardening flags, and wrote a few automated test cases to serve as a build-time and run-time regression test suite. No functional changes have been made.

As Joey explained when he orphaned it, there are a few technical challenges involved in making sure pristine-tar stays useful in the future. Although I did read some of the code, I am not particularly familiar with the internals yet, and will be more than happy to get co-maintainers. If you are interested, please get in touch. The source git repository is right there.

22 May, 2016 02:02PM

May 21, 2016

Petter Reinholdtsen

French edition of Lawrence Lessig's book Cultura Libre on Amazon and Barnes & Noble

A few weeks ago the French paperback edition of Lawrence Lessig's 2004 book Cultura Libre was published. Today I noticed that the book is now available from book stores. You can now buy it from Amazon ($19.99), Barnes & Noble ($?) and as always from Lulu.com ($19.99). The revenue is donated to the Creative Commons project. If you buy from Lulu.com, they currently get $10.59, while if you buy from one of the book stores most of the revenue goes to the book store and the Creative Commons project gets much less (not sure exactly how much less).

I was a bit surprised to discover that there is a Kindle edition sold by Amazon Digital Services LLC on Amazon. Not quite sure how that edition was created, but if you want to download an electronic edition (PDF, EPUB, Mobi) generated from the same files used to create the paperback edition, they are available from github.

21 May, 2016 08:50AM

May 20, 2016

Zlatan Todorić

4 months of work turned into GNOME, Debian testing based tablet

Huh, where do I start. I started working for a great CEO and a great company known as Purism. What is so great about it? First of all, the CEO (Todd Weaver) is incredibly passionate about Free software. Yes, you read that correctly: Free software. Not the Open Source definition, but the Free software definition. I want to repeat this like a mantra. At Purism we try to integrate high-end hardware with Free software. Not only that, we want our hardware to be as Free as possible. No, we want to make it entirely Free, but at the moment we don't achieve that. So instead of going the way of using older hardware (as Ministry of Freedom does, and kudos to them for making such an option available), we sacrifice this bit for the momentum we hope to gain - momentum brings growth, and growth brings us a much better position when we sit at the negotiation table with hardware producers. If negotiations fail even then, with growth we will have enough chances to heavily invest in things such as openRISC or freeing cellular modules. We want to provide, in future, an entirely Free hardware and software device that has an integrated security and privacy focus while being as easy to use and convenient as any other mainstream OS. And we choose to currently sacrifice a few things to stay in the loop.

Surely that can't be the only thing - and it isn't. Our current hardware runs entirely on Free software. You can install Debian main on it and all will work out of the box. I know because I did this, and I enjoy my Debian more than ever. We also have a margin share program where part of the profit is donated to Free software projects. We are also discussing a new business model where our community will get a lot of influence (stay tuned for this). Besides all this, our OS (called PureOS - yes, a bit of a misfortune that we took the name of a dormant distribution) was Trisquel based but is now Debian testing based. The current PureOS 2.0 ships with Cinnamon as the default DE, but we are already baking PureOS 3.0, which is going to come with GNOME Shell as the default.

Why is this important? Well, around 12 hours ago we launched a tablet campaign on Indiegogo, for a tablet that comes with GNOME Shell and PureOS as default. Not one, but two tablets actually (although we heavily focus on the 11" one). This is the product of my four months of dedicated work at Purism. I must give kudos to all Purism members who pushed their parts in preparation for this campaign. It was a hell of a ride.

Librem11

I have also approached (of course!) Debian about the creation of OEM installation ISOs for our Librem products. This way, with every Librem sold that ships with Debian preinstalled, Debian will get a donation. It is our way to show gratitude to Debian for all the work our community does (yes, I am still an extremely proud Debian dude and I will stay like that!). Oh yes, I am the chief technology person at Purism, and besides all the goals we have, I also plan (dream) about Purism being the company that has the highest number of Debian Developers. In those terms I am very proud to say that Matthias Klumpp became part of Purism. Hopefully we will soon extend the Debian population at Purism.

Of course, I think it is fairly well known that I am easy to approach, so if anyone has any questions (as I didn't want this post to be too long) feel free to contact me. Also - in the Free software spirit - we welcome any community engagement, suggestions and/or feedback.

20 May, 2016 01:47PM by Zlatan Todoric

Reproducible builds folks

Improving the process for testing build reproducibility

Hi! I'm Ceridwen. I'm going to be one of the Outreachy interns working on Reproducible Builds for the summer of 2016. My project is to create a tool, tentatively named reprotest, to make the process of verifying that a build is reproducible easier.

The current tools and the Reproducible Builds site have limits on what they can test, and they're not very user friendly. (For instance, I ended up needing to edit the rebuild.sh script to run it on my system.) Reprotest will automate some of the busywork involved and make it easier for maintainers to test reproducibility without detailed knowledge of the process involved. A session during the Athens meeting outlines some of the functionality and command-line and configuration file API goals for reprotest. I also intend to use some ideas, and command-line and config processing boilerplate, from autopkgtest. Reprotest, like autopkgtest, should be able to interface with more build environments, such as schroot and qemu. Both autopkgtest and diffoscope, the program that the Reproducible Builds project uses to check binaries for differences, are written in Python, and as Python is the scripting language I'm most familiar with, I will be writing reprotest in Python too.

One of my major goals is to get a usable prototype released in the first three to four weeks. At that point, I want to try to solicit feedback (and any contributions anyone wants to make!). One experience I've had in open source software is that connecting people with software they might want to use is often the hardest part of a project. I've reimplemented existing functionality myself because I simply didn't know that someone else had already written something equivalent, and seen many other people do the same. Once I have the skeleton fleshed out, I'm going to be trying to find and reach out to any other communities, outside the Debian Reproducible Builds project itself, who might find reprotest useful.

20 May, 2016 03:20AM

May 19, 2016

hackergotchi for Matthew Garrett

Matthew Garrett

Your project's RCS history affects ease of contribution (or: don't squash PRs)

Github recently introduced the option to squash commits on merge, and even before then several projects requested that contributors squash their commits after review but before merge. This is a terrible idea that makes it more difficult for people to contribute to projects.

I'm spending today working on reworking some code to integrate with a new feature that was just integrated into Kubernetes. The PR in question was absolutely fine, but just before it was merged the entire commit history was squashed down to a single commit at the request of the reviewer. This single commit contains type declarations, the functionality itself, the integration of that functionality into the scheduler, the client code and a large pile of autogenerated code.

I've got some familiarity with Kubernetes, but even then this commit is difficult for me to read. It doesn't tell a story. I can't see its growth. Looking at a single hunk of this diff doesn't tell me whether it's infrastructural or part of the integration. Given time I can (and have) figured it out, but it's an unnecessary waste of effort that could have gone towards something else. For someone who's less used to working on large projects, it'd be even worse. I'm paid to deal with this. For someone who isn't, the probability that they'll give up and do something else entirely is even greater.

I don't want to pick on Kubernetes here - the fact that this Github feature exists makes it clear that a lot of people feel that this kind of merge is a good idea. And there are certainly cases where squashing commits makes sense. Commits that add broken code and which are immediately followed by a series of "Make this work" commits also impair readability and distract from the narrative that your RCS history should present, and Github presents this feature as a way to get rid of them. But that ends up being a false dichotomy. A history that looks like "Commit", "Revert Commit", "Revert Revert Commit", "Fix broken revert", "Revert fix broken revert" is a bad history, as is a history that looks like "Add 20,000 line feature A", "Add 20,000 line feature B".

When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. And never, ever, put autogenerated code in the same commit as an actual functional change.

People can't contribute to your project unless they can understand your code. Writing clear, well commented code is a big part of that. But so is showing the evolution of your features in an understandable way. Make sure your RCS history shows that, otherwise people will go and find another project that doesn't make them feel frustrated.

(Edit to add: Sarah Sharp wrote on the same topic a couple of years ago)


19 May, 2016 11:52PM

Antoine Beaupré

My free software activities, May 2016

Debian Long Term Support (LTS)

This is my 6th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far, for which I had requested 20 hours of work.

Xen work

I spent the largest amount of time working on the Xen packages. We had to re-roll the patches because it turned out we originally just imported the package from Ubuntu as-is. This was a mistake because that package forked off the Debian packaging a while ago and included regressions in the packaging itself, not just security fixes.

So I went ahead and rerolled the whole patchset and tested it on Koumbit's test server. Brian May then completed the upload, which included about 40 new patches, mostly from Ubuntu.

Frontdesk duties

Next up was the frontdesk duties I had taken this week. This was mostly uneventful, although I had forgotten how to do some of the work and thus ended up doing extensive work on the contributor's documentation. This is especially important since new contributors joined the team! I also did a lot of Debian documentation work in my non-sponsored work below.

The triage work involved chasing around missing DLAs, triaging away OpenJDK-6 (for which, let me remind you, security support has ended in LTS), and raising the question of Mediawiki maintenance.

Other LTS work

I also did a bunch of smaller stuff. Of importance, I can note that I uploaded two advisories that were pending from April: NSS and phpMyAdmin. I also reviewed the patches for the ICU update, since I built the one for squeeze (but didn't have time to upload before squeeze hit end-of-life).

I have tried to contribute to the NTP security support but that was way too confusing to me, and I have left it to the package maintainer, who seemed to be on top of things, even if that means dealing with complete chaos and confusion in the world of NTP. I somehow thought that situation had improved with the recent investments in ntpsec and ntimed, but unfortunately Debian has not switched to the ntpsec codebase, so it seems that the NTP efforts have diverged into three different projects instead of coalescing into a single, better codebase.

Future LTS work

This is likely to be my last month of work on LTS until September. I will try to contribute a few hours in June, but July and August will be very busy for me outside of Debian, so it's unlikely that I will contribute much to the project during the summer. My backlog includes these packages, which might be of interest to other LTS contributors:

  • libxml2: no upstream fix, but needs fixing!
  • tiff{,3}: same mess
  • libgd2: maintainer contacted
  • samba regression: mailed bug #821811 to try to revive the effort
  • policykit-1: to be investigated
  • p7zip: same

Other free software work

Debian documentation

I wrote a detailed short guide to Debian package development, something I felt was missing from the existing corpus, which seems to be too focused on covering all alternatives. My guide is opinionated: I believe there is a right and a wrong way of doing things, or at least, there are best practices, especially when just patching packages. I ended up retroactively publishing that as a blog post - now I can simply tag an item with blog and it shows up in the blog.

(Of course, because of a mis-configuration on my side, I have suffered from long delays publishing to Debian planet, so all the post dates are off in the Planet RSS feed. This will hopefully be resolved around the time this post is published, but this allowed me to get more familiar with the Planet Venus software, as detailed in that other article.)

Apart from the guide, I have also done extensive research to collate information that allowed me to create workflow graphs of the various Debian repositories, which I have published in the Debian Release section of the Debian wiki. Here is the graph:

It helps me understand how packages flow between different suites and who uploads what where. This emerged after I realized I didn't really understand how "proposed updates" worked. Since we are looking at implementing a similar process for the security queue, I figured it was useful to show what changes would happen, graphically.

I have also published a graph that describes the relations between different software that make up the Debian archive. The idea behind this is also to provide an overview of what happens when you upload a package in the Debian archive, but it is more aimed at Debian developers trying to figure out why things are not working as expected.

The graphs were done with Graphviz, which allowed me to link to various components in the graph easily, which is neat. I also preferred Graphviz over Dia or other tools because it is easier to version and I don't have to bother (too much) about the layout and tweaking the looks. The downside is, of course, that when Graphviz makes the wrong decision, it's actually pretty hard to make it do the right thing, but there are various workarounds that I have found that made the graphs look pretty good.

The source is of course available in git, but I feel all this documentation (including the guide) should go into a more official document somewhere; I couldn't quite figure out where. Advice on this would of course be welcome.

Ikiwiki

I have made yet another plugin for Ikiwiki, called irker, which enables wikis to send notifications to IRC channels through the simple irker bot. I had trouble with irker in the past, since it was not quite reliable: it would disappear from channels and not return when we sent it a notification. Unfortunately, the alternative, the KGB bot, is much heavier: each repository needs a server-side, centralized configuration to operate properly.

Irker's design is simpler and more adapted to a simple plugin like this. Let's hope it will work reliably enough for my needs.
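For the curious, irker's interface really is minimal: a client writes a one-line JSON object to the irkerd listener (TCP port 6659 by default) and irkerd relays the message to the named IRC channel. A rough sketch of such a notification, assuming a local irkerd and GNU netcat (the channel and message are made up):

$ echo '{"to": "irc://irc.freenode.net/#commits", "privmsg": "wiki: page foo edited"}' | nc -q1 localhost 6659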

I have also suggested improvements to the footnote styles, since they looked like hell in my Debian guide. It turns out this was an issue with the multimarkdown plugin, which doesn't use proper semantic markup to identify footnotes. The proper fix is to enable footnotes in the default Discount plugin, which will require another, separate patch.

Finally, I have made some improvements (I hope!) to the layout of this theme. I made the top header much lighter and transparent to work around an issue where followed anchors would be hidden under the top header. I have also removed the top menu made out of the sidebar plugin because it was cluttering the display too much. Those links are all on the frontpage anyway, and I suspect people were not using them much.

The code is, as before, available in this git repository, although you may want to start from the new ikistrap theme, which is based on Bootstrap 4 and may eventually be merged into ikiwiki directly.

DNS diagnostics

Through this interesting overview of various *ping tools, I found out about the dnsdiag toolkit, which currently allows users to do DNS traces, tampering detection and ping over DNS. In the hope of packaging it for Debian, I have requested clarifications regarding a modification to the dnspython library the tool uses.
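dnsdiag ships as a handful of small commands (dnsping, dnstraceroute and dnseval). As a taste, here is a hypothetical dnsping run against a public resolver; the flag names are my assumption from the project README, so double-check them:

$ dnsping -c 3 -s 8.8.8.8 debian.org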

But I went even further and boldly opened a discussion about replacing DNSstuff, the venerable DNS diagnostic tool that has gone commercial. It is somewhat surprising that no publicly released software performs those sanity checks for DNS, given how old DNS is.

Incidentally, I have also requested smtpping to be packaged in Debian as well but httping is already packaged.

Link checking

In the process of writing this article, I suddenly remembered that I constantly make mistakes in the various links I post on my site. So I started looking for a link checker, another tool that should be well established but that, surprisingly, is not quite there yet.

I have found this neat software written in Python called LinkChecker. Unfortunately, it is basically broken in Debian, so I had to do a non-maintainer upload to fix that old bug. I managed to force myself to not take over maintainership of this orphaned package but I may end up doing just that if no one steps up the next time I find issues in the package.

One of the problems I had checking links on my blog is that I constantly refer to sites that are hostile to bots, like the Debian bug tracker and MoinMoin wikis. So I published a patch that adds a --no-robots flag to be able to crawl those sites effectively.
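With the patch applied, a crawl that ignores robots exclusion looks something like this (the --no-robots flag comes from my patch and is not yet in a released LinkChecker; the URL is a placeholder):

$ linkchecker --no-robots https://example.com/blog/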

I know there is the W3C tool, but it's written in Perl, and there's probably zero chance of me convincing those guys to bypass robots exclusion rules, so I am sticking with LinkChecker.

Other Debian packaging work

At my request, Drush has finally been removed from Debian. Hopefully someone else will pick up that work, but since it basically needs to be redone from scratch, there was no sense in keeping it in the next release of Debian. Similarly, Semanticscuttle was removed from Debian as well.

I have uploaded new versions of tuptime, sopel and smokeping. I have also filed a Request For Help for smokeping. I am happy to report there was a quick response, and people will be stepping up to help with the maintenance of that venerable monitoring software.

Background radiation

Finally, here's the generic background noise of me running around like a chicken with its head cut off:

Finally, I should mention that I will be less active in the coming months, as I will be heading outside as the summer finally came! I somewhat feel uncomfortable documenting publicly my summer here, as I am more protective of my privacy than I was before on this blog. But we'll see how it goes, maybe you'll hear non-technical articles here again soon!

19 May, 2016 10:49PM

Steve Kemp

Accidental data-store .. is go!

A couple of days ago I wrote:

The code is perl-based, because Perl is good, and available here on github:

..

TODO: Rewrite the thing in #golang to be cool.

I might not be cool, but I did indeed rewrite it in golang. It was quite simple, and a quick benchmark of uploading two million files, balanced across 4 nodes, worked perfectly.

https://github.com/skx/sos/

19 May, 2016 06:38PM

Valerie Young

Summer of Reproducible Builds

 

Hello friend, family, fellow Outreachy participants, and the Debian community!

This blog's primary purpose will be to track the progress of the Outreachy project in which I'm participating this summer 🙂  This post is to introduce myself and my project (working on the Debian reproducible builds project).

What is Outreachy? You might not know! Let me empower you: Outreachy is an organization connecting women and minorities to mentors in the free (as in freedom) software community, /and/ to funding for three months of working with those mentors and contributing to a free software project.  If you are a woman or minority human that likes free software, or if you know anyone in that situation, please tell them about Outreachy 🙂 Or put them in touch with me, I'd happily tell them more.

So who am I?

My name is Valerie Young. I live in the Boston Metropolitan Area (any other outreachy participants here?) and hella love free software. 

Some bullet pointed Val facts in rough reverse chronological order:
- I run Debian but only began contributing during the Outreachy application process
- If you went to DebConf2015, you might have seen me dye nine people's hair blue, blond or Debian swirl.
- If you stop through Boston I could be easily convinced to dye your hair.
- I worked on an electronic medical records web application for the last two years (lotsa Javascriptin' and Perlin' at athenahealth)
- Before that I taught a programming summer program at the University of Moratuwa in Sri Lanka.
- Before that I got degrees in physics and computer science at Boston University.
- At BU I helped start a hackerspace, where my interest in technology, free software, hacker culture, anarchy, and the internet all began.
- I grew up in the very fine San Francisco Bay Area.

What will I be working on?

Reproducible builds!

In the near future I'll write a “What is reproducible builds? Why is it so hot right now?” post.  For now, from a high (and not technical) level: reproducible builds is a broad effort to verify that the computer-executable binary programs you run on your computer come from the human-readable source code they claim to come from. It is not presently /impossible/ to do this verification, but it's not easy, and there are a lot of nuanced computer quirks that make it difficult for the most experienced programmer and straight-up impossible for a user with no technical expertise. And without this ability to verify -- the state we are in now -- any executable piece of software could be hiding secret code.

The first step towards the goal of verifiability is to make reproducibility an essential part of software development. Reproducible builds means this: when you compile a program from the source code, the result should always be identical, bit by bit. If the program is always identical, you can compare your version of the software to anyone else's trusted build with very little effort. If it is identical, you can trust it -- if it's not, you have reason to worry.
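In practice that comparison is just a checksum over the build artifacts: bit-for-bit identical builds produce identical digests. A minimal sketch (the file names are placeholders):

$ sha256sum my-build/hello_1.0_amd64.deb trusted-build/hello_1.0_amd64.deb
# identical output for both files means the builds match, bit for bit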

The Debian project is undergoing an effort to make the entire Debian operating system verifiably reproducible (hurray!). My Outreachy-funded summer contribution involves improving and updating tests.reproducible-builds.org – a site that presently surfaces the results of reproducibility testing of several free software projects (including Debian, Fedora, coreboot, OpenWrt, NetBSD, FreeBSD and ArchLinux). However, the design of tests.r-b.org is a bit confusing, making it difficult for a user to find out how to check the reproducibility of a given package in one of the aforementioned projects, or to understand the reasons for failure. Additionally, the backend test results for Debian are outgrowing the original SQLite database, and many projects do not log the results of package testing at all. I hope, by the end of the summer, we'll have a more beefed-out and pretty site as well as better organized backend data 🙂

This summer there will be 3 other Outreachy participants working on the Debian reproducible builds project! Check out their blogs/projects:
Scarlett
Satyam
Ceridwen

Thanks to our Debian mentors -- Lunar, Holger Levsen, and Mattia Rizzolo -- for taking us on 🙂 

 

19 May, 2016 06:02PM by spectranaut

Michal Čihař

wlc 0.3

wlc 0.3, a command-line utility for Weblate, has just been released. This is probably the first release that is worth using, so it's probably also worthy of a bigger announcement.

It is built on the API introduced in Weblate 2.6, which is still in development. Several wlc commands will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, and it is now running on both the demo and hosting servers).

How to use it? First you will probably want to store the credentials, so that your requests are authenticated (you can do unauthenticated requests as well, but obviously only read-only ones and on public objects), so let's create ~/.config/weblate:

[weblate]
url = https://hosted.weblate.org/api/

[keys]
https://hosted.weblate.org/api/ = APIKEY

Now you can do basic commands:

$ wlc show weblate/master/cs
...
last_author: Michal Čihař
last_change: 2016-05-13T15:59:25
revision: 62f038bb0bfe360494fb8dee30fd9d34133a8663
share_url: https://hosted.weblate.org/engage/weblate/cs/
total: 1361
total_words: 6144
translate_url: https://hosted.weblate.org/translate/weblate/master/cs/
translated: 1361
translated_percent: 100.0
translated_words: 6144
url: https://hosted.weblate.org/api/translations/weblate/master/cs/
web_url: https://hosted.weblate.org/projects/weblate/master/cs/

You can find more examples in wlc documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

19 May, 2016 04:00PM

Petter Reinholdtsen

I want the courts to be involved before the police can hijack a news site DNS domain (#domstolkontroll)

I just donated to the NUUG defence "fond" to fund the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone that agree with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout, and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (as in, forced NORID to change the IP address it points to to one controlled by the police), without any supervision from the courts. I did not know about the web site back then, and assumed the courts had been involved, so I was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the right holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfer for those that want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

19 May, 2016 12:00PM

May 18, 2016

Stig Sandbeck Mathisen

Puppet 4 uploaded to Debian experimental

I’ve uploaded puppet 4.4.2-1 to Debian experimental.

Please test with caution, and expect sharp corners. This is a new major version of Puppet in Debian, with many new features and potentially breaking changes, as well as a big rewrite of the .deb packaging. Bug reports for src:puppet are very welcome.

As previously described in #798636, the new package names are:

  • puppet (all the software)

  • puppet-agent (package containing just the init script and systemd unit for the puppet agent)

  • puppet-master (init script and systemd unit for starting a single master)

  • puppet-master-passenger (This package depends on apache2 and libapache2-mod-passenger, and configures a puppet master scaled for more than a handful of puppet agents)
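Since the packages live in experimental, apt has to be told explicitly to install from there; a minimal sketch, assuming experimental is already enabled in your sources.list:

$ sudo apt-get install -t experimental puppet-agent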

Lots of hugs to the authors, keepers and maintainers of autopkgtest, debci, piuparts and ruby-serverspec for their software. They helped me figure out when I had reached “good enough for experimental”.

Some notes:

  • To use exported resources with puppet 4, you need a puppetdb installation and a relevant puppetdb-terminus package on your puppet master. This is not available in Debian, but is available from Puppet’s repositories.

  • Syntax highlighting for Emacs and Vim is no longer built from the puppet package. Standalone packages will be made.

  • The packaged puppet modules need an overhaul of their dependencies to install alongside this version of puppet. Testing would probably also be great to see if they actually work.

I sincerely hope someone finds this useful. :)

18 May, 2016 10:00PM

Jonathan McDowell

First steps with the ATtiny45

1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible kludged together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready made USB relay board that is powered by an ATtiny45. First step was to figure out if there were suitable programming pins available, which turned out to be all brought out conveniently to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB and then proceeded to work out that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. Which led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…
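For reference, pulling the existing flash image off the chip is a single avrdude invocation using the standard -U memory syntax (the output file name is arbitrary):

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0 -U flash:r:original.hex:i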

18 May, 2016 09:25PM

Andy Simpkins

OpenTAC sprint, Cambridge

Last weekend saw a small group get together in Cambridge to hack on the OpenTAC.  OpenTAC is an open hardware, open software test platform, designed specifically to aid automated testing and continuous integration.

Aimed at small / mobile / embedded targets, OpenTAC v1 provides all of the support infrastructure to connect up to 8 DUTs (Devices Under Test) to your test or CI system.
Each of the 8 EUT ports provides:

  • A serial port (either RS232 levels on a DB9 socket, or 3V3 TTL on a Molex KK plug)
  • USB power (up to 2A with a software-defined fuse, and alarm limits)
  • USB data interconnect
  • Ethernet

All ports on the EUT interface are relay-isolated; this means that cables to your EUT can be ‘unplugged’ under software control (we are aware of several SoC development boards that latch up if a serial port is connected before power is applied).

Additionally, there are 8 GPIO lines that can be used as switch controls for any EUT (perhaps to put a specific EUT into a programming mode, reboot it or even start it).

 

Anyway, back to the hacking weekend…

 

Joining Steve McIntyre and myself were Mark Brown and Michael Grzeschik (sorry Michael, I couldn’t find a homepage).  Mark travelled down from Scotland whilst Michael flew in from Germany for the weekend.  Gents, we greatly appreciate you taking the time and expense to join us this weekend.  I should also thank my employer, Toby Churchill Ltd., for allowing us to use the office to host the event.

A lot of work got done, and I believe we have now fully tested and debugged the hardware.  We have also made great progress with the device tree and device drivers for the platform.  Mark got the EUT power system working as a proof of concept, and has taken an OpenTAC board back with him to turn this into suitable drivers and hopefully push them upstream.  Meanwhile, Michael spent his time working on the system portion of the device tree: OpenTAC’s internal power sequencing, thermal management subsystem, and USB hub control.  Steve got to grips with the USB serial converters (including how to read and program their internal non-volatile settings).  Finally, I was able to explain the hardware sequencing to everyone, and to modify boards to overcome some of my design mistakes (the biggest was by far the missing sense resistors for the EUT power management).

 

 

18 May, 2016 09:00PM by andy

Steve Kemp

Accidental data-store ..

A few months back I was looking over a lot of different object-storage systems, giving them mini-reviews, and trying them out in turn.

While many were overly complex, some were simple. Simplicity is always appealing, providing it works.

My review of camlistore was generally positive, because I like the design. Unfortunately it also highlighted a lack of documentation about how to use it to scale, replicate, and rebalance.

How hard could it be to write something similar, while paying attention to keeping it as simple as possible? Well, perhaps it was too easy.

Blob-Storage

First of all we write a blob-storage system. We allow three operations to be carried out:

  • Retrieve a chunk of data, given an ID.
  • Store the given chunk of data, with the specified ID.
  • Return a list of all known IDs.
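That contract is small enough to capture in a single interface. Here is a sketch in Go with hypothetical names (the actual code is Perl, so this is an illustration, not the real API):

package blob

// BlobStore is the whole contract a blob-server has to honour.
type BlobStore interface {
    // Get retrieves a chunk of data, given its ID.
    Get(id string) ([]byte, error)
    // Put stores the given chunk of data under the specified ID.
    Put(id string, data []byte) error
    // List returns the IDs of all known objects.
    List() ([]string, error)
}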

 

API Server

We write a second server that consumers actually use, though it is implemented in terms of the blob-storage server listed previously.

The public API is trivial:

  • Upload a new file, returning the ID which it was stored under.
  • Retrieve a previous upload, by ID.

 

Replication Support

The previous two services are sufficient to write an object-storage system, but they don't necessarily provide replication. You could add immediate replication: an upload of a file could involve writing that data to N blob-servers. But in a perfect world servers don't crash, so why not replicate in the background instead? You save time if you only save uploaded content to one blob-server.

Replication can be implemented purely in terms of the blob-servers:

  • For each blob server, get the list of objects stored on it.
  • Look for that object on each of the other servers. If it is found on N of them we're good.
  • If there are fewer copies than we like, then download the data, and upload to another server.
  • Repeat until each object is stored on a sufficient number of blob-servers.
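That loop fits in a few lines of Go, reusing the BlobStore interface sketched earlier (again an illustration, not the real implementation):

// Replicate walks every server's object list and uploads any object
// that is held by fewer than want servers.
func Replicate(servers []BlobStore, want int) error {
    // Snapshot which IDs each server currently holds.
    held := make([]map[string]bool, len(servers))
    for i, s := range servers {
        ids, err := s.List()
        if err != nil {
            return err
        }
        held[i] = make(map[string]bool)
        for _, id := range ids {
            held[i][id] = true
        }
    }
    // For each object, count the copies and top up if below want.
    for i, src := range servers {
        for id := range held[i] {
            copies := 0
            for j := range servers {
                if held[j][id] {
                    copies++
                }
            }
            if copies >= want {
                continue
            }
            data, err := src.Get(id)
            if err != nil {
                return err
            }
            for j, dst := range servers {
                if copies >= want {
                    break
                }
                if !held[j][id] {
                    if err := dst.Put(id, data); err != nil {
                        return err
                    }
                    held[j][id] = true
                    copies++
                }
            }
        }
    }
    return nil
}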

 

My code is reliable, the implementation is almost painfully simple, and the only difference in my design is that rather than having one API server which allows both "uploads" and "downloads", I split it into two. That means you can leave your "download" server open to the world, so that it can be useful, while your upload server is firewalled to only allow a few hosts to access it.

The code is perl-based, because Perl is good, and available here on github:

TODO: Rewrite the thing in #golang to be cool.

18 May, 2016 06:49PM

Bits from Debian

Imagination accelerates Debian development for 64-bit MIPS CPUs

Imagination Technologies recently donated several high-performance SDNA-7130 appliances to the Debian Project for the development and maintenance of the MIPS ports.

The SDNA-7130 (Software Defined Network Appliance) platforms are developed by Rhino Labs, a leading provider of high-performance data security, networking, and data infrastructure solutions.

With these new devices, the Debian project will have access to a wide range of 32- and 64-bit MIPS-based platforms.

Debian MIPS ports are also possible thanks to donations from the aql hosting service provider, the Eaton remote controlled ePDU, and many other individual members of the Debian community.

The Debian project would like to thank Imagination, Rhino Labs and aql for this coordinated donation.

More details about GNU/Linux for MIPS CPUs can be found in the related press release at Imagination and their community site about MIPS.

18 May, 2016 07:30AM by Laura Arjona Reina

May 17, 2016

Reproducible builds folks

Reproducible builds: week 55 in Stretch cycle

What happened in the Reproducible Builds effort between May 8th and May 14th 2016:

Documentation updates

Toolchain fixes

  • dpkg 1.18.7 has been uploaded to unstable, after which Mattia Rizzolo took care of rebasing our patched version.
  • gcc-5 and gcc-6 migrated to testing with the patch to honour SOURCE_DATE_EPOCH
  • Ximin Luo started an upstream discussion with the Ghostscript developers.
  • Norbert Preining has uploaded a new version of texlive-bin with these changes relevant to us:
    • imported upstream version 2016.20160512.41045, with support for suppressing timestamps (SOURCE_DATE_EPOCH) (Closes: #792202)
    • add support for SOURCE_DATE_EPOCH also to luatex
  • cdbs 0.4.131 has been uploaded to unstable by Jonas Smedegaard, fixing these issues relevant to us:
    • #794241: export SOURCE_DATE_EPOCH. Original patch by akira
    • #764478: call dh_strip_nondeterminism if available. Original patch by Holger Levsen
  • libxslt 1.1.28-3 has been uploaded to unstable by Mattia Rizzolo, fixing the following toolchain issues:
    • #823857: backport patch from upstream to provide stable IDs in the generated documents.
    • #791815: Honour SOURCE_DATE_EPOCH when embedding timestamps in docs. Patch by Eduard Sanou.

Packages fixed

The following 28 packages have become newly reproducible due to changes in their build dependencies: actor-framework ask asterisk-prompt-fr-armelle asterisk-prompt-fr-proformatique coccinelle cwebx d-itg device-tree-compiler flann fortunes-es idlastro jabref konclude latexdiff libint minlog modplugtools mummer mwrap mxallowd mysql-mmm ocaml-atd ocamlviz postbooks pycorrfit pyscanfcs python-pcs weka

The following 9 packages had older versions which were reproducible, and their latest versions are now reproducible again due to changes in their build dependencies: csync2 dune-common dune-localfunctions libcommons-jxpath-java libcommons-logging-java libstax-java libyanfs-java python-daemon yacas

The following packages have become newly reproducible after being fixed:

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

  • klibc/2.0.4-9 by Ben Hutchings.

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #787424 against emacs24 by Alexis Bienvenüe: order hashes when generating .el files
  • #823764 against sen by Daniel Shahaf: render the build timestamp in a consistent timezone
  • #823797 against openclonk by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH
  • #823961 against herbstluftwm by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824049 against emacs24 by Alexis Bienvenüe: make start value of gensym-counter reproducible
  • #824050 against emacs24 by Alexis Bienvenüe: make autoloads files reproducible
  • #824182 against codeblocks by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824263 against cmake by Reiner Herrmann: sort file lists from file(GLOB ...)

Package reviews

344 reviews have been added, 125 have been updated and 20 have been removed this week.

14 FTBFS bugs have been reported by Chris Lamb.

tests.reproducible-builds.org

Misc.

Dan Kegel sent a mail to report about his experiments with a reproducible dpkg PPA for Ubuntu. According to him, sudo add-apt-repository ppa:dank/dpkg && sudo apt-get update && sudo apt-get install dpkg should be enough to get reproducible builds on Ubuntu 16.04.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

17 May, 2016 11:09PM

Mehdi Dogguy

Newmaint — Call for help

The process leading to the acceptance of new Debian Maintainers is mainly administrative today and is handled by the Newmaint team. In order to simplify this process further, the team wants to integrate their workflow into nm.debian.org's interface so that prospective maintainers can send their applications online and the Newmaint team can review them from within the website.

We need your help to implement the missing pieces in nm.debian.org. It is written in Python, using Django. If you have some experience with that, you should definitely join the newmaint-site mailing list and ask for the details. Enrico or someone else on the list will do their best to share their vision and explain the work needed to get this properly implemented!

You don't need to be a Debian Developer to contribute to this project. Anyone can step up and help!

17 May, 2016 09:49PM by Mehdi (noreply@blogger.com)

Sean Whitton

seoulviasfo

I spent last night in San Francisco on my way from Tucson to Seoul. This morning as I headed to the airport, I caught the end of a shouted conversation between a down-and-out and a couple of middle school-aged girls, who ran away back to the Asian Art museum as the conversation ended. A security guard told the man that he needed him to go away. The wealth divide so visible here just isn’t something you really see around Tucson.

I’m working on a new module for Propellor that’s complicated enough that I need to think carefully about the Haskell in order to produce a flexible and maintainable module. I’ve only been doing an hour or so of work on it per day, but the past few days I’ve woken up each day with an idea for restructuring yesterday’s code. These ideas aren’t anything new to me: I think I’m just dredging up the understanding of Haskell I developed last year when I was studying it more actively. Hopefully this summer I can learn some new things about Haskell.

Riding on the “Bay Area Rapid Transit” (BART) feels like stepping back in time to the years of Microsoft’s ascendency, before we had a tech world dominated by Google and Facebook: the platform announcements are in a computerised voice that sounds like it was developed in the nineties. They’ll eventually replace the old trains—apparently some new ones are coming in 2017—so I feel privileged to have been able to ride the older ones. I feel the same about the Tube in London.

I really appreciate old but supremely reliable and effective public transport. It reminds me of the Debian toolchain: a bit creaky, but maintained over a sufficiently long period that it serves everyone a lot better than newer offerings, which tend to be produced with ulterior corporate motives.

17 May, 2016 07:54PM

Mark Brown

OpenTAC sprint

This weekend Toby Churchill kindly hosted a hacking weekend for OpenTAC – myself, Michael Grzeschik, Steve McIntyre and Andy Simpkins got together to bring up the remaining bits of the hardware on the current board revision and to get some of the low-level tooling, like production flashing for the FTDI serial ports on the board, up and running. It was a very productive weekend: we verified that everything was working, with only a few small mods needed for the board. Personally, the main thing I worked on was getting most of an initial driver for the EMC1701 written. That was the one component without Linux support, and it allowed us to verify that the power switching and measurement for the systems under test was working well.

There’s still at least one more board revision and quite a bit of software work to do (I’m hoping to get the EMC1701 driver upstream for v4.8), but it was great to finally see all the physical components of the system working well and to see it managing a system under test; this board revision should support all the software development that’s going to be needed for the final board.

Thanks to all who attended, Pengutronix for sponsoring Michael’s attendance and Toby Churchill for hosting!


17 May, 2016 03:11PM by broonie

Mike Gabriel

NXv3 Rebase: Build nxagent against X.org 7.0

As already hinted in my previous blog post, here comes a short howto that explains how to test-build nxagent (v3) against a modularized X.org 7.0 source tree.

WARNING: Please note that mixing NX code and X.org code partially turns the original X.org code base into GPL-2 code. We are aware of this situation and are working on moving all NXv3-related GPL-2 code into the nxagent DDX code (xserver-xorg/hw/nxagent) or--if possible--dropping it completely. The result shall be a range of patches against X.org (licensable under the same license as the respective X.org files) and a GPL-2 licensed DDX (i.e. nxagent).

How to build this project

For the Brave and Playful

$ git clone https://git.arctica-project.org/nx-X11-rebase/build.git .
$ bash populate.sh sources.lst
$ ./buildit.sh

You can find the built tree in the _install/ sub-directory.

Please note that cloning Git repositories over the https protocol can be considerably slow. If you want to speed things up, consider signing up with our GitLab server.

For Developers...

... who have registered with our GitLab server.

$ git clone git@git.arctica-project.org:nx-X11-rebase/build.git .
$ bash populate.sh sources-devs.lst
$ ./buildit.sh

You will find the built tree in the _install/ sub-directory.

The related git repositories are in the repos/ sub-directory. All repos modified for NX have been cloned from the Arctica Project's GitLab server via SSH. Thus, you as a developer can commit changes on those repos and push back your changes to the GitLab server.

Required tools for building

Debian/Ubuntu and alike

  • build-essential
  • automake
  • gawk
  • git
  • pkg-config
  • libtool
  • libz-dev
  • libjpeg-dev
  • libpng-dev

In a one-liner command:

$ sudo apt-get install build-essential automake gawk git pkg-config libtool libz-dev libjpeg-dev libpng-dev

Fedora

If someone tries this out in a clean Fedora chroot environment, please let us know which packages are needed as build dependencies.

openSUSE

If someone tries this out in a clean openSUSE chroot environment, please let us know which packages are needed as build dependencies.

Testing the built nxagent and nxproxy

The tests/ subdir contains some scripts which can be used to test the compile results.

  • run-nxagent runs an nxagent and starts an nxproxy connection to it (do this as a normal non-root user):
    $ tests/run-nxagent
    $ export DISPLAY=:9
    # launch e.g. MATE desktop environment on Debian, adapt session type and Xsession startup to your system / distribution
    $ STARTUP=mate-session /etc/X11/Xsession
    
  • run-nxproxy2nxproxy-test connects to nxproxys using the nx compression protocol:
    $ tests/run-nxproxy2nxproxy-test
    $ export DISPLAY=:8
    # launch e.g. xterm and launch other apps from within that xterm process
    $ xterm &
    
  • more to come...

Notes on required X.org changes (NX_MODIFICATIONS)

For this build workflow to work, we (i.e. mostly Ulrich Sibiller) had to work several NoMachine patches into the original X.org 7.0 code. Here is a list of the modified X11 components, with URLs pointing to the branches containing those changes:

xkbdata                            xorg/data/xkbdata                       rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xkbdata.git
libfontenc                         xorg/lib/libfontenc                     rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/libfontenc.git
libSM                              xorg/lib/libSM                          rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libSM.git
libX11                             xorg/lib/libX11                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libX11.git
libXau                             xorg/lib/libXau                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libXau.git
libXfont                           xorg/lib/libXfont                       rebasenx  1.3.1     https://git.arctica-project.org/nx-X11-rebase/libXfont.git
libXrender                         xorg/lib/libXrender                     rebasenx  0.9.0.2   https://git.arctica-project.org/nx-X11-rebase/libXrender.git
xtrans                             xorg/lib/libxtrans                      rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libxtrans.git
kbproto                            xorg/proto/kbproto                      rebasenx  1.0.2     https://git.arctica-project.org/nx-X11-rebase/kbproto.git
xproto                             xorg/proto/xproto                       rebasenx  7.0.4     https://git.arctica-project.org/nx-X11-rebase/xproto.git
xorg-server                        xorg/xserver                            rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xserver.git
mesa                               mesa/mesa                               rebasenx  6.4.1     https://git.arctica-project.org/nx-X11-rebase/mesa.git

Credits

Nearly all of this has been achieved by Ulrich Sibiller. Thanks a lot for giving your time and energy to this. As the rebasing of NXv3 is currently a funded project supported by the Qindel Group, we are negotiating ways of monetarily appreciating Ulrich's intensive work on it. Thanks a lot, once more!!!

Feedback

If any of you feel like trying out the test build as described above, please consider signing up with the Arctica Project's GitLab server and reporting your issues there directly (against the repository nx-X11-rebase/build). Alternatively, feel free to contact us on IRC (Freenode): #arctica, or subscribe to our developers' mailing list. Thank you.

light+love
Mike Gabriel

17 May, 2016 02:27PM by sunweaver

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, April 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 116.75 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 12.25 hours (out of 15 hours allocated + 5.50 extra hours remaining, he returned the remaining 8.25h to the pool).
  • Brian May did 10 hours.
  • Chris Lamb did nothing (instead of the 16 hours he was allocated, his hours have been redispatched to other contributors over May).
  • Guido Günther did 2 hours (out of 8 hours allocated + 3.25 remaining hours, leaving 9.25 extra hours for May).
  • Markus Koschany did 16 hours.
  • Santiago Ruano Rincón did 7.50 hours (out of 12h allocated + 3.50 remaining, thus keeping 8 extra hours for May).
  • Scott Kitterman posted a report for 6 hours made in March but did nothing in April. His 18 remaining hours have been returned to the pool. He decided to stop doing LTS work for now.
  • Thorsten Alteholz did 15.75 hours.

Many contributors did not use all their allocated hours. This is partly explained by the fact that in April Wheezy was still under the responsibility of the security team and they were not able to drive updates from start to finish.

In any case, this means that they have more hours available over May and since the LTS period started, they should hopefully be able to make a good dent in the backlog of security updates.

Evolution of the situation

The number of sponsored hours reached a new record of 132 hours per month, thanks to two new gold sponsors (Babiel GmbH and Plat’Home). Plat’Home’s sponsorship is aimed at helping us maintain Debian 7 Wheezy on armel and armhf (on top of the already supported amd64 and i386). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file lists 44 packages awaiting an update.

This is a bit more than the 15-20 open entries that we used to have at the end of the Debian 6 LTS period.

Thanks to our sponsors

New sponsors are in bold.


17 May, 2016 01:57PM by Raphaël Hertzog

May 16, 2016

Bits from Debian

New Debian Developers and Maintainers (March and April 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Sven Bartscher (kritzefitz)
  • Harlan Lieberman-Berg (hlieberman)

Congratulations!

16 May, 2016 10:10PM by Ana Guerrero Lopez