August 31, 2016

Michal Čihař

Weblate 2.8

Quite on schedule (just one day later), Weblate 2.8 is out today. This release brings Subversion support and an improved zen mode.

Full list of changes:

  • Documentation improvements.
  • Translations.
  • Updated bundled javascript libraries.
  • Added list_translators management command.
  • Django 1.8 is no longer supported.
  • Fixed compatibility with Django 1.10.
  • Added Subversion support.
  • Separated XML validity check from XML mismatched tags.
  • Fixed API to honor HIDE_REPO_CREDENTIALS settings.
  • Show source change in zen mode.
  • Alt+PageUp/PageDown/Home/End now works in zen mode as well.
  • Add tooltip showing exact time of changes.
  • Add option to select filters and search from translation page.
  • Added UI for translation removal.
  • Improved behavior when inserting placeables.
  • Fixed auto locking issues in zen mode.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the password demo, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either by commenting on them or by providing a bounty for them.

31 August, 2016 09:30AM

Joey Hess

late summer

With days beginning to shorten toward fall, my house is in initial power saving mode. Particularly, the internet gateway is powered off overnight. Still running electric lights until bedtime, and still using the inverter and other power without much conservation during the day.

Indeed, I had two laptops running cpu-melting keysafe benchmarks for much of today and one of them had to charge up from empty too. That's why the house power is a little low, at 11.0 volts now, despite over 30 amp-hours of power having been produced on this mostly clear day. (1 week average is 18.7 amp-hours)

September/October is the tricky time where it's easy to fall off a battery depletion cliff and be stuck digging out for a long time. So time to start dusting off the conservation habits after summer's excess.


I think this is the first time I've mentioned any details of living off grid with a bare minimum of PV capacity in over 4 years. Solar has a lot of older posts about it, and I'm going to note down the typical milestones and events over the next 8 months.

31 August, 2016 01:15AM

August 30, 2016

Mike Gabriel

credential-sheets: User Account Credential Sheets Tool

Preface

This little piece of work has been pending on my todo list for about two years now. For our local school project "IT-Zukunft Schule" I wrote the small tool credential-sheets. It is a Perl script that turns a series of CSV import files (in the format required for mass user import into GOsa², i.e. LDAP) into a series of A4 sheets with little cards on them, containing the initial user credential information. The upstream sources are on Github and I have just uploaded the tool to Debian.

Introduction

After a mass import of user accounts (e.g. into LDAP), most site administrators have to create information sheets (or snippets) containing those new credentials (like username, password, usage policy, etc.). With this tiny tool, providing these pieces of information to multiple users becomes really simple. Account data is taken from a CSV file and the sheets are output as PDF using easily configurable LaTeX template files.

Usage

Synopsis: credential-sheets [options] <CSV-file-1> [<CSV-file-2> [...]]

Options

The credential-sheets command accepts the following command-line options:
   --help Display help for all available command-line options and exit.

   --template=<tpl-name>
          Name of the template to use.

   --cols=<x>
          Render <x> columns per sheet.

   --rows=<y>
          Render <y> rows per sheet.

   --zip  Create a ZIP file at the end of processing.

   --zipfilename=<zip-file-name>
          Alternative ZIP file name (default: name of parent folder).

   --debug
          Don't remove temporary files.

CSV File Column Arrangement

The credential-sheets tool can handle any sort of column arrangement in the given CSV file(s). It expects the CSV file(s) to have column names in their first line. The given column names have to map to the VAR-<column-name> placeholders in the LaTeX templates used by credential-sheets. The templates shipped with the package (students, teachers) can handle these column names:
  • login -- The user account's login id (uid)
  • lastName -- The user's last name(s)
  • firstName -- The user's first name(s)
  • password -- The user's password
  • form -- The form name/ID (student template only)
  • subjects -- A list of subjects taught by a teacher (teacher template only)
If you create your own templates, you can be very flexible in using your own column names and template names. Only make sure that the column names provided in the first line of the CSV file(s) match the variables used in the customized LaTeX template. A minimal example follows below.
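
For illustration, assume a file students.csv with the following content (the file name and all data values here are made up; the column names are the documented ones):

login,lastName,firstName,password,form
jdoe,Doe,Jane,Kaixo7oo,7a
mmuster,Muster,Max,ohYi2aeK,7a

It could then be rendered into credential cards with:

credential-sheets --template=students --cols=2 --rows=5 students.csv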

Customizations

The credential-sheets templates shipped with the package are installed in /usr/share/credential-sheets/ for system-wide use. When customizing templates, simply place a modified copy of any of those files into ~/.credential-sheets/ or /etc/credential-sheets/. For further details, see below. The credential-sheets tool uses these configuration files:
  • header.tex (LaTeX file header)
  • <tpl-name>-template.tex (default installations provide students and teachers as <tpl-name>; this is extensible by defining your own template files, see below)
  • footer.tex (LaTeX file footer)
Search paths for configuration files (in listed order):
  • $HOME/.credential-sheets/
  • ./
  • /etc/credential-sheets/
  • /usr/local/share/credential-sheets/
  • /usr/share/credential-sheets/
You can easily customize the resulting PDF files generated with this tool by placing your own template, header and footer files in one of the locations listed above. An example follows below.
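
For instance, to adapt the students template for your site, you could copy the shipped files into your home directory and edit the copies there (the copy commands are merely an illustration; only the paths come from the list above):

mkdir -p ~/.credential-sheets
cp /usr/share/credential-sheets/students-template.tex ~/.credential-sheets/
cp /usr/share/credential-sheets/header.tex ~/.credential-sheets/

Since $HOME/.credential-sheets/ is searched first, the edited copies take precedence over the system-wide files.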

Dependencies

This project requires the following dependencies:
  • Text::CSV Perl module
  • Archive::Zip Perl module
  • texlive-latex-base
  • texlive-fonts-extra

Copyright and License

Copyright © 2012-2016, Mike Gabriel <mike.gabriel@das-netzwerkteam.de>. Licensed under GPL-2+ (see COPYING file).

30 August, 2016 08:05PM by sunweaver

Daniel Stender

My work for Debian in August

Here is, once again, a little list of my humble off-time contributions which I'm happy to add to the large amount of work we're completing all together each month. There is also one more "new in Debian" (meaning: "new in unstable") announcement. First, the uploads (a few of them are from July):

  • afl/2.21b-1
  • djvusmooth/0.2.17-1
  • python-bcrypt/3.1.0-1
  • python-latexcodec/1.0.3-4 (closed #830604)
  • pylint-celery/0.3-2 (closed #832826)
  • afl/2.28b-1 (closed #828178)
  • python-afl/0.5.4-1
  • vulture/0.10-1
  • afl/2.30b-1
  • prospector/0.12.2-1
  • pyinfra/0.1.1-1
  • python-afl/0.5.4-2 (fix of elinks_dump_varies_output_with_locale)
  • httpbin/0.5.0-1
  • python-afl/0.5.5-1 (closed #833675)
  • pyinfra/0.1.2-1
  • afl/2.33b-1 (experimental, build/run on llvm 3.8)
  • pylint-flask/0.3-2 (closed #835601)
  • python-djvulibre/0.8-1
  • pylint-flask/0.5-1
  • pytest-localserver/0.3.6-1
  • afl/2.33b-2
  • afl/2.33b-3

New packages:

  • keras/1.0.7-1 (initial packaging into experimental)
  • lasagne/0.1+git20160728.8b66737-1

Sponsored uploads:

  • squirrel3/3.1-4 (closed #831210)

Requested or suggested for packaging:

  • yapf: Python code formatter
  • spacy: industrial-strength natural language processing for Python
  • ralph: asset management and DCIM tool for data centers
  • pytest-cookies: Pytest plugin for testing Cookiecutter templates
  • blocks: another deep learning framework built on top of Theano
  • fuel: data provider for Blocks and Python DNN frameworks in general

New in Debian: Lasagne (deep learning framework)

Now that the mathematical expression compiler Theano is available in Debian, deep learning frameworks and toolkits which have been built on top of it can become available within Debian, too (like Blocks, mentioned before). Theano is a general computing engine which has been developed with a focus on machine learning and neural networks, and which features its own declarative tensor language. The toolkits built upon it vary in how much they abstract the bare features of Theano, whether they are "thick" or "thin", so to speak. The higher the abstraction, the more end-user convenience you gain, up to the level where the architectural components of neural networks are available for combination like bricks in a Lego box, while the more complicated things going on "under the hood" (like how the networks are actually implemented) are hidden. The downside is that thick abstraction layers usually make it difficult to implement novel features like custom layers or loss functions, so more experienced users and specialists might seek out the lower-abstraction toolkits, where you have to think more in terms of Theano.

I've got an initial package of Keras in experimental (1.0.7-1); it runs (only a Python 3 package is available so far) but needs some more work (e.g. building the documentation with mkdocs). Keras is a minimalistic, highly modular DNN library inspired by Torch1. It has a clean, rather easy API for experimenting and fast prototyping. It can also run on top of Google's TensorFlow, and we're going to have it ready for that, too.

Lasagne follows a different approach. Like Keras and Blocks, it is a Python library to create and train multi-layered artificial neural networks in/on Theano, for applications like image recognition or classification, speech recognition, image caption generation, or other purposes like style transfer from paintings to pictures2. It abstracts Theano as little as possible, and could be seen more as an extension or an add-on than an abstraction3. Therefore, knowledge of how things work in Theano is needed to make full use of this piece of software.

With the new Debian package (0.1+git20160728.8b66737-1)4, the whole required software stack (the corresponding Theano package, NumPy, SciPy, a BLAS implementation, and the nvidia-cuda-toolkit and NVIDIA kernel driver to carry out computations on the GPU5) can be installed most conveniently by a single apt-get install python{,3}-lasagne command6. If wanted, add the documentation package lasagne-doc for offline use (no running around remote airports looking for a WiFi spot), either for the Python 2 or the Python 3 branch, or both flavours altogether7. While others have to spend a whole weekend gathering, compiling and installing the needed libraries, you can grab yourself a fresh cup of coffee. These are the advantages of a fully integrated system (sublime message, as always: desktop users, switch to Linux!).
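
For example, assuming a sources.list that includes testing along with contrib and non-free (see footnote 6), the Python 3 flavour together with the offline documentation could be pulled in like this (the exact invocation is just an illustration):

apt-get install --install-suggests python3-lasagne lasagne-doc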

When the installation of the packages has completed, the MNIST example of Lasagne can be used for a quick check that the whole library stack works properly8:

$ THEANO_FLAGS=device=gpu,floatX=float32 python /usr/share/doc/python-lasagne/examples/mnist.py mlp 5
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN 5005)
Loading data...
Downloading train-images-idx3-ubyte.gz
Downloading train-labels-idx1-ubyte.gz
Downloading t10k-images-idx3-ubyte.gz
Downloading t10k-labels-idx1-ubyte.gz
Building model and compiling functions...
Starting training...
Epoch 1 of 5 took 2.488s
  training loss:        1.217167
  validation loss:      0.407390
  validation accuracy:      88.79 %
Epoch 2 of 5 took 2.460s
  training loss:        0.568058
  validation loss:      0.306875
  validation accuracy:      91.31 %

The example of how to train a neural network on the MNIST database of handwritten digits is refined (it also provides --help) and explained in detail in the Tutorial section of the documentation in /usr/share/doc/lasagne-doc/html/. Very good starting points are also the IPython notebooks that are available from the tutorials by Eben Olson9 and by Geoffrey French at PyData London 201610. There you'll find Theano basics, examples of employing convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for a range of different purposes, how to use pre-trained networks for image recognition, and more.


  1. For a quick comparison of Keras and Lasagne with other toolkits, see Alex Rubinsteyn's PyData NYC 2015 presentation on using LSTM (long short term memory) networks on varying length sequence data like Grimm's fairy tales (https://www.youtube.com/watch?v=E92jDCmJNek 27:30 sq.) 

  2. https://github.com/Lasagne/Recipes/tree/master/examples/styletransfer 

  3. Great introduction to Theano and Lasagne by Eben Olson on the PyData NYC 2015: https://www.youtube.com/watch?v=dtGhSE1PFh0 

  4. The package is currently "freelancing" in collab-maint; setting up a deep learning packaging team within Debian is at the stage of discussion. 

  5. Only available for amd64 and ppc64el. 

  6. At the present time you would need "testing" as a package source in /etc/apt/sources.list to install it from the archive (I have had that for years, but whether Debian testing can be recommended as a production system is going to be discussed elsewhere), but it's coming up for Debian 9. The cuda-toolkit and pycuda are in the non-free section of the archive, so non-free (mostly used in combination with contrib) must be added alongside main. Plus, the GPU stack is only a suggestion of the Theano packages (to keep Theano in main), so --install-suggests is needed to pull it in automatically with the same command, or it must be given explicitly. 

  7. For dealing with Theano in Debian, see this previous blog posting 

  8. As suggested in the guide From Zero to Lasagne on Ubuntu 14.04. cuDNN isn't available as an official Debian package yet, but it can be downloaded as a .deb package after registration at https://developer.nvidia.com/cudnn. It integrates well out of the box. 

  9. https://github.com/ebenolson/pydata2015 

  10. https://github.com/Britefury/deep-learning-tutorial-pydata2016, video: https://www.youtube.com/watch?v=DlNR1MrK4qE 

30 August, 2016 05:42PM by Daniel Stender

Christoph Egger

DANE and DNSSEC Monitoring

At this year's FrOSCon I repeated my presentation on DNSSEC. In the audience, there was the suggestion that proper monitoring plugins for a DANE and DNSSEC infrastructure were not easily available. As I already had some personal tools around and some spare time to burn, I've just started a repository with some useful tools. It's available on my website and has mirrors on Gitlab and Github. I intend to keep this repository up-to-date with my personal requirements (which also means adding an xmpp check soon) and am happy to take any contributions (either by mail or as "pull requests" on one of the two mirrors). It currently has smtp (both ssmtp and starttls) and https support, as well as support for checking for a valid DNSSEC configuration of a zone.

While working on it, it turned out that some things can be complicated. My language of choice was python3 (if only because the ssl library has improved a lot since 2.7), however ldns and unbound in Debian lack python3 support in their bindings. This seems fixable, as the source in Debian is buildable and usable with python3, so it just needs packaging adjustments. Funnily, the ldns module (which is only needed for check_dnssec) is currently buggy in Debian for both python2 and python3, and ldns' python3 support is somewhat lacking, so I spent several hours hunting SWIG problems.

30 August, 2016 05:11PM

Rhonda D'Vine

Thomas D

It's not often that an artist touches you deeply, but Thomas D managed to do so, to the point that I am (only half) jokingly saying that if there were a church of Thomas D I would absolutely join it. His lyrics always stood out for me in the context of the band through which I found out about him, and the way he lives his life is definitely outstanding. And additionally there are these special songs that give so much and share a lot. I feel sorry for the people who don't understand German and so can't fully appreciate him.

Here are three songs that I suggest you to listen to closely:

  • Fluss: This song gave me a lot of strength in a difficult time of my life. And it still works wonders to get my ass up off the floor again when I feel down.
  • Gebet an den Planeten: This song gives me shivers. Let the lyrics touch you. And take the time to think about them.
  • An alle Hinterbliebenen: This song might be a bit difficult to deal with. It's about loss and how to deal with suffering.

Like always, enjoy!

30 August, 2016 04:12PM by Rhonda

Joachim Breitner

Explicit vertical alignment in Haskell

Chris Done’s automatic Haskell formatter hindent has been released in a new version, and is getting quite a bit of deserved attention. He is polling Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics…

I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human.

But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable, and formatting can carry semantic information.

The Haskell syntax was (at least I get that impression) designed to allow authors to write nice-looking, easy-to-understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

maze :: Integer -> Integer -> Integer
maze x y
| abs x > 4  || abs y > 4  = 0
| abs x == 4 || abs y == 4 = 1
| x ==  2    && y <= 0     = 1
| x ==  3    && y <= 0     = 3
| x >= -2    && y == 0     = 4
| otherwise                = 2

with

maze :: Integer -> Integer -> Integer
maze x y
| abs x > 4 || abs y > 4 = 0
| abs x == 4 || abs y == 4 = 1
| x == 2 && y <= 0 = 1
| x == 3 && y <= 0 = 3
| x >= -2 && y == 0 = 4
| otherwise = 2

The former is a quick to grasp specification, the latter (the output of hindent at the moment) is a desert of numbers and operators.

I see two ways forward:

  • Tools like hindent get improved to the point that they are able to detect such patterns and indent them properly (which would be great, but very tricky, and probably never complete), or
  • We give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool.

What could such ways be?

  • For guards, it could simply detect that, within one function definition, there are multiple | in the same column, and keep them aligned.
  • More generally, one could take the approach of lhs2TeX (which, IMHO, with careful input, a proportional font and the great polytable LaTeX backend, produces the most pleasing code listings). There, two or more spaces indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between!

    With the latter approach, the code up there would be written

    maze :: Integer -> Integer -> Integer
    maze x y
    | abs x > 4   ||  abs y > 4   = 0
    | abs x == 4  ||  abs y == 4  = 1
    | x ==  2     &&  y <= 0      = 1
    | x ==  3     &&  y <= 0      = 3
    | x >= -2     &&  y == 0      = 4
    | otherwise                   = 2

    And now the intended alignment is explicit.

(This post is cross-posted on reddit.)

30 August, 2016 01:35PM by Joachim Breitner (mail@joachim-breitner.de)

Petter Reinholdtsen

First draft Norwegian Bokmål edition of The Debian Administrator's Handbook now public

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Get the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing using the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as in electronic form.

30 August, 2016 08:10AM

Dirk Eddelbuettel

RProtoBuf 0.4.5: now with protobuf v2 and v3!

A few short weeks after the 0.4.4 release of RProtoBuf, we are happy to announce a new version 0.4.5 which appeared on CRAN earlier today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings support for the recently released 'version 3' of the Protocol Buffers standard, used e.g. by the (very exciting) gRPC project (which was just released as version 1.0). RProtoBuf continues to support 'version 2' but now also cleanly supports 'version 3'.

Changes in RProtoBuf version 0.4.5 (2016-08-29)

  • Support for version 3 of the Protocol Buffers API

  • Added 'syntax = "proto2";' to all proto files (PR #17)

  • Updated Travis CI script to test against both versions 2 and 3 using custom-built .deb packages of version 3 (PR #16)

  • Improved build system with support for custom CXXFLAGS (Craig Radcliffe in PR #15)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 August, 2016 02:55AM

August 29, 2016

David Moreno

Webhook Setup with Facebook::Messenger::Bot

The documentation for the Facebook Messenger API points out how to set up your initial bot webhook. I just committed a quick patch that makes it very easy to set up a quick script to get it done using the unreleased and still-in-progress Perl module Facebook::Messenger::Bot:

use Facebook::Messenger::Bot;

use constant VERIFY_TOKEN => 'imsosecret';

my $bot = Facebook::Messenger::Bot->new(); # no config specified!
$bot->expect_verify_token( VERIFY_TOKEN );
$bot->spin();

This should get you sorted. What endpoint would that be, though? Well, that depends on how you’re giving Facebook access to your Plack .psgi application.

29 August, 2016 06:49PM

Michal Čihař

motranslator 1.1

Four months after the 1.0 release, motranslator 1.1 is out. If you happen to use it for untrusted data, this might as well be called a security release, though that is still not a good idea until we remove the usage of eval() to evaluate the plural formula.

Full list of changes:

  • Improved handling of corrupted mo files
  • Minor performance improvements
  • Stricter validation of plural expression

motranslator is a translation library used in the current phpMyAdmin master (the upcoming 4.7.0), with a focus on speed and memory usage. It uses Gettext MO files to load the translations. It also comes with a testsuite (100% coverage) and basic documentation.

The recommended way to install it is using Composer from the Packagist repository:

composer require phpmyadmin/motranslator

The Debian package will probably be available around the time phpMyAdmin 4.7.0 is out, but if you need it earlier, just let me know.

29 August, 2016 04:00PM

Zlatan Todorić

Support open source motion comic

There is an ongoing campaign for a motion comic. It will be done entirely with FLOSS tools (Blender, Krita, GNU/Linux) and, besides that, it really looks great (and no, it is not only for the kids!). Please support this effort if you can, because it also shows the power of Free Software tools. Everything will be released under the Creative Commons Attribution-ShareAlike license, together with all sources.

29 August, 2016 01:25PM by Zlatan Todoric

Michal Čihař

Improving phpMyAdmin Docker container

Since I created the phpMyAdmin container for Docker, I've always felt strange about using PHP's built-in web server there. It really made it a poor choice for any production setup and was probably causing a lot of the problems users saw with this container. During the weekend, I changed it to use a more complex setup with Supervisor, nginx and PHP-FPM.

As building this container was one of my first experiences with Docker (together with the Weblate container), it was not as straightforward as I'd hoped for, but in the end it seems to be working just fine. While touching the code, I've also improved testing of the Docker container to test all supported setups and to report better in case of test failures.

The nice side effect of this is that the PHP code is no longer executed as root in the container, so that should make it more sane for production use as well (honestly, I never liked the approach of running almost everything as root in Docker containers).

29 August, 2016 08:00AM

Gergely Nagy

Recruitment mistakes: part 3

It has been a while since I was last contacted by a recruiter, and the last few contacts were fairly decent conversations, where they made an effort to research me first, and even if they did not get everything right, they still listened, and we had a productive talk. But four days ago, I had another recruiter reach out to me, from a company I know oh so well: one I ranted about before: Google. Apparently, their recruiters still do carpet-bombing style outreach. My first thought was "what took them so long?" - it has been five years since my last contact with a Google recruiter. I almost started missing them. Almost. To think that Google is now powerful enough to read my mind is scary. Yet, I believe, this is not the case; rather, it's just another embarrassing mistake.

To make my case, let me quote the full e-mail I was sent, with the name of the sender redacted, and my comments - which I'm also sending to the recruiter:

Hi Gergely,

Hi!

How are you? Hope you're well.

Thank you, I'm fine, just back from vacation, and I was thrilled to read your e-mail. Although, I did find it surprising too, and considering past events, I thought it to be spam first.

My name is XXX and I am recruiting for the Google Engineering team. I have just found your details on Github...

I'm happy that you found me through GitHub, but I'm curious why you mailed me at my debian.org address then? That address is not on my GitHub profile, and even though I have some code signed with that address, that's not what I use normally. My GitHub profile also links to a page I created especially for recruiters, which I don't think you have read - but more on that below.

...and your experience with software development combined with your open source contributions is particularly relevant to Google

Which part of my recent contributions is relevant to Google? For the past few months, the vast majority (over 90%) of my open source work has been on keyboard firmware. If Google is developing a keyboard, then this may be relevant; otherwise, I find it doubtful.

Some of my past OSS contributions may be more relevant, but that's in the past, and it would take some digging to see those from GitHub. And if you did that kind of digging, you would have found the page on my site for recruiters, and would not have e-mailed me at my Debian address, either.

We are always interested in talking to top engineers with your mix of skills and I was wondering if you are at all open to exploring roles with Google in EMEA.

To make things short, my stance is the same as it was five years ago, when I wrote - and I quote - "So, here's a public request: do not email me ever again about job opportunities within Google. I do not wish to work for Google. Not now, not tomorrow, not ever."

This still stands. Please do not ever e-mail me about job opportunities within Google. I do not wish to work for Google, not now, not tomorrow, not five years from now, not ever. This will not change. Many things may change, this however, will not.

But even if I ignore this, let me ask you: which mix of skills are you exactly interested in? Keyboard firmware hacking, mixed with Emacs Lisp, some Clojure, and hacking on Hy (which, if you have explored my GitHub profile, you will surely know) from time to time? Or is it my Debian Developer hat that got your interest? If so, why didn't you say so? Not that it would change anything, but I'm curious.

Is it my participation in 24pullrequests? Or my participation in GSoC in years past?

Nevertheless, the list of conditions I mentioned above applies. Is Google able to fulfill all my requirements, and the preferences too? Last time I heard, working for Google required one to relocate, which, as I clearly said on that page, I'm not willing to do.

The teams I recruit for are responsible for Google's planet-scale systems. Just to give you more of an idea, some of the projects we work on that might be of interest to you would include:

  • MapReduce
  • App Engine
  • Mesa
  • Maglev

And how would these be interesting to me, considering my recent OSS work? (Hint: none of them interest me, not a bit. Ok perhaps MapReduce, a little.)

I think you could be a great fit here, where you can develop a great career and at the same time you will be part of the most mission critical team, designing and developing systems which run Google Search, Gmail, YouTube, Google+ and many more as you can imagine.

My career is already developing fine, thank you, I do not need Google to do that. I am already working on things I consider important and interesting; if I wanted to change, I would. But I certainly would not consider a company which I had asked numerous times NOT to contact me about opportunities. If you can't respect this wish, and forget about it in a mere five years, why would I trust you to keep any other promises you make now?

Thank you so much for your time and I look forward to hearing from you.

I wish you had spent as much time researching me - or even half that - as I have spent replying to you. I suppose this is not the reply you were expecting, or looking for, but this is the only one I'll ever give to anyone from Google.

And here, at the end, if you read this far, I'm asking you, and everyone at Google, to never contact me about job opportunities. I will not work for Google. Not now, not tomorrow, not five years from now, not ever. Please do not e-mail me again, and do not reply to this e-mail either. I'm interested in neither apologies nor promises that you won't contact me - just simply don't do it.

Thank you.

29 August, 2016 08:00AM by Gergely Nagy

Russell Coker

Monitoring of Monitoring

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced there has never been an excuse for having a single disk on its own with important data. Linux Software RAID didn’t support online rebuild when 10G was a large disk. But since the late 90’s it has worked well and there’s no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1 and both disks had errors on one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error and ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themself. Since I started using mon (which is very easy to set up) I’ve had it notify me of some problems with my laptop that I wouldn’t have otherwise noticed. I think that ideally for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but which the user wouldn’t immediately notice if they crashed (such as cron and syslogd).

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there’s a problem with the hard drive etc and also if the system fails to send an “I’m OK” message for a certain period of time.

I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening it’s significantly reducing the value derived from monitoring.

29 August, 2016 02:23AM by etbe

Joey Hess

hiking the Roan

Three moments from earlier this week..

Sprawled under a tree after three hours of hiking with a heavy, water-filled pack, I look past my feet at six ranges of mountains behind mountains behind flowers.

From my campsite, I can see the rest of the path of the Appalachian Trail across the Roan balds, to Big Hump mountain. It seems close enough to touch, but not this trip. Good to have a goal.

Near sunset, land and sky merge as the mist moves in.

29 August, 2016 02:10AM

August 28, 2016

Reproducible builds folks

Reproducible builds: week 70 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday August 21 and Saturday August 27 2016:

GSoC and Outreachy updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

10 package reviews have been added and 6 have been updated this week, adding to our knowledge about identified issues.

A large number of issue types have been updated:

Weekly QA work

29 FTBFS bugs have been reported by:

  • Chris Lamb (27)
  • Daniel Stender (1)
  • Santiago Vila (1)

diffoscope development

Holger also created another test job for diffoscope on jenkins.debian.net, so that now all commits to branches other than master are also being tested.

strip-nondeterminism development

strip-nondeterminism 0.023-1 was uploaded by Chris Lamb:

 * Support Android .apk files with the JAR normalizer.
 * handlers/png.pm: Drop unused Archive::Zip import
 * Remove hyphen from non-determinism and non-deterministic.
 * javaproperties.pm: Match more styles of .properties and loosen filename matching.
 * Improve tests:
   - Make fixture runner generic to all normalizer types.
   - Replace (single) pearregistry test with a fixture.
   - Set a canonical time for fixture tests.
   - Add gzip testcase fixture.
   - Replace t/javadoc.t with fixture
   - Replace t/ar.t with a fixture.
   - t/javaproperties: move pom.properties and version.properties tests to fixtures
   - t/fixtures.t: move to using subtests
   - t/fixtures.t: Explicitly test that we can find a normalizer
   - t/fixtures.t: Don't run normalizer if we didn't find one.

strip-nondeterminism 0.023-2 uploaded by Mattia Rizzolo to allow stderr in autopkgtest.

disorderfs development

tests.reproducible-builds.org

Debian:

  • Since we introduced build path variations for unstable and experimental last week, our IRC channel has been flooded with notifications about packages becoming unreproducible - and you might have noticed some of your packages having become unreproducible recently too. To make our IRC more bearable again, notifications for status changes on i386 and armhf have been disabled, so that now we only get notifications for status changes in unstable. (h01ger)
  • Link to jenkins documentation in every page (h01ger)
  • The "pre build" check, whether a node is up, now also detects if a node has a read-only filesystem, which sometimes happens on some broken armhf nodes. (h01ger)
  • To further improve monitoring of those armhf nodes, work to make them send mails (through an ISP which is blocking outgoing mails) has been started and should be finished next week. (h01ger)
  • As one of the armhf nodes (opi2a) is acting strange, a workaround has been added to make its deployment work despite that. (h01ger)
  • Collapse whitespace to avoid ugly "trailing underlines" in hyperlinks for diffoscope results and pkg sets (Chris Lamb)
  • Give details HTML elements "cursor: pointer" CSS property to highlight they are clickable. (Chris Lamb)
  • The db connection timeout has been raised to a minute when using SQLAlchemy too. (h01ger).

Somewhat related to reproducible builds, there has been a first Debian jenkins team maintenance meeting on the #debian-qa IRC channel, to discuss current issues with the setup and to start the work of migrating jenkins.debian.net to jenkins.debian.org. The next meeting will take place on September 28th 2016 at 19 UTC.

Misc.

This week's edition was written by Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

28 August, 2016 11:01PM

Martín Ferrari

On speaking at community conferences

Many people reading this have already suffered me talking to them about Prometheus. In personal conversation, or in the talks I gave at DebConf15 in Heidelberg, the Debian SunCamp in Lloret de Mar, BRMlab in Prague, and even at a talk on a different topic at the RABS in Cluj-Napoca.

Since their public announcement, I have been trying to support the project in the ways I could: by packaging it for Debian, and by spreading the word.

Last week the first ever Prometheus conference took place, so this time I did the opposite thing and I spoke about Debian to Prometheus people: I gave a 5 minutes lightning talk on Debian support for Prometheus.

What blew me away was the response I got: after this tiny non-talk I prepared in 30 minutes, many people stopped me later to thank me for packaging Prometheus, and for Debian in general. They told me they were using my packages, gave me feedback, and even some RFPs!

At the post-conference beers, I had quite a few interesting discussions about Debian, release processes, library versioning, and culture clashes with upstream. I was expecting some Debian-bashing (we are old-fashioned, slow, etc.), instead I had intelligent and enriching conversations.

To me, this reinforces once again my support of and commitment to community conferences, where nobody is a VIP and everybody has something to share. It also taught me the value of intersecting different groups, even when there seems to be little in common.

28 August, 2016 08:06PM

Dirk Eddelbuettel

rfoaas 1.1.0

FOAAS upstream came out with release 1.1.0 earlier last week. Tyler Hunt was kind enough to provide an almost immediate pull request adding support for the extended capabilities. Following yesterday's upload we now have version 1.1.0 of rfoaas on CRAN.

It brings seven more accessors: maybe(), blackadder(), horse(), deraadt(), problem(), cocksplat() and too().

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker. More background information is on the project page as well as on the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 August, 2016 02:08PM

Craig Sanders

fakecloud

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack.

I hacked up something similar several years ago when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty-much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day’s hacking with a new web framework.

https://github.com/craig-sanders/fakecloud

fakecloud is a post from: Errata

28 August, 2016 05:05AM by cas

Hideki Yamane

please help to maintain net-snmp package

I have stepped down from the net-snmp package maintainers, and would like someone to take over maintaining it.

28 August, 2016 03:30AM by Hideki Yamane (noreply@blogger.com)

August 27, 2016

Arturo Borrero González

Why conntrackd in Debian is better with systemd


There has been some discussion [0] around my decision to drop sysvinit support in the conntrackd package in Debian [1] (version 1:1.4.4-2).

The rationale I used for such a movement was sent to the debian-devel [2] mailing list, and here it is:

  • the default init system in debian is systemd
  • I don't have any sysvinit system to keep sysvinit files under any kind of maintenance
  • the sysvinit conntrackd script is really poor; to reliably use conntrackd as a system daemon you should use systemd
  • conntrackd & systemd are very well integrated (using libsystemd)
  • systemd is starting to drop support for some sysvinit mechanisms [3]
  • it's time to let sysvinit die

Lots of people have talked to me, both supporting and disagreeing with the change.

Before reading the rest of this blogpost, please note that I'm not interested in the 'systemd vs sysvinit' war, and starting now I will focus mainly on the subject of building a firewall cluster with netfilter technology and the reasons why I think sysvinit here is irrelevant.

I started working with firewall clusters by the time of Debian Squeeze. Back then, I used only sysvinit, because it was the Debian default init system and because I had not dived so much into the internals of the firewall cluster itself.

By then, the conntrackd debian package included support for sysvinit by means of two files (the two files that I dropped in 1:1.4.4-2):

  • /etc/init.d/conntrackd
  • /etc/default/conntrackd

Using both files, you could start/stop conntrackd, and nothing more (for example, a proper status check was not implemented).

The conntrackd daemon is used in HA firewall clusters to replicate connection states between nodes of the cluster, so flow states are known in all nodes and they can properly perform stateful firewalling.

When you build HA clusters, there are 2 basic states of the cluster you may check and adjust: failover and failback.

With conntrackd, the failover situation is straightforward: the other node already has the state information from the dying node.
The failback situation is different: depending on the configuration of both the cluster and conntrackd, the new node may ask the other node for a complete synchronisation when it becomes alive again (at boot time, for example). This is the case if you are building a multi-master firewall cluster (i.e. both nodes see traffic and filter at the same time).

Asking for a complete synchronisation is done by means of a new instance of conntrackd, which communicates over a UNIX socket with the main conntrackd daemon (the one which is actually communicating with the other node). A failback boot procedure may look like this:

  1. system boots
  2. firewall ruleset loading
  3. networking up
  4. conntrackd up (main sync daemon: using 'conntrackd -d'; UNIX socket is opened)
  5. request sync with other node (using `conntrackd -n' which uses the UNIX socket opened in step 4)

In both the sysvinit and systemd cases, step 5 may be implemented using another dummy service which performs the required operations and depends on the main service representing step 4.

The point here is that steps 4 and 5 suffer from a very bad race condition: the conntrackd daemon may open the UNIX socket (step 4) *after* we try to use it (step 5).

As you can probably imagine, this is the worst possible scenario: as one of the nodes of the cluster is missing important flow state information, the stateful firewall will drop packets and all the network monitoring alarms will start to ring... Ironically, just after a failback operation, when all of our cluster is supposed to be up and running.

To avoid the race conditions, some typical hacks could be implemented:

  • a wait loop for the socket
  • manual delay (sleep time) between step 4 and step 5
  • some kind of poll mechanism
  • conntrackd somehow notifies of UNIX socket being opened

After a bit of research, I found that the last point can be implemented very easily using libsystemd, so the daemon informs systemd of its internal state [4].

The solution is elegant, simple and offers more direct benefits:

  • systemd watchdog support
  • no guess games with the PID of the daemon
  • shutdown detection by systemd (so if the daemon was killed with 'conntrackd -k', systemd detects it)
  • we get the service 'status' check (missing in sysvinit)

So the clear winner for conntrackd is to go with systemd, using a service unit file of Type=notify; a sketch is shown below. Using sysvinit is prone to the described race condition. No serious firewall cluster with this architecture would use sysvinit.
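
As an illustration, here is a minimal sketch of such a setup (the unit names, paths and options are illustrative assumptions, not the units shipped in the Debian package): the main unit uses Type=notify, so systemd only considers it started once conntrackd has reported READY=1 over the notification socket, and a oneshot helper ordered after it performs the step-5 resynchronisation.

# /etc/systemd/system/conntrackd.service (sketch)
[Unit]
Description=conntrackd connection tracking state synchronisation
After=network.target

[Service]
# conntrackd signals readiness via sd_notify() once its UNIX socket is open
Type=notify
# run in the foreground (no '-d'); systemd supervises the process
ExecStart=/usr/sbin/conntrackd
# optional: use the watchdog support the daemon provides
WatchdogSec=30

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/conntrackd-resync.service (sketch)
[Unit]
Description=Request full conntrack state resync after failback
Requires=conntrackd.service
# only started once conntrackd.service has reported READY=1
After=conntrackd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/conntrackd -n

[Install]
WantedBy=multi-user.target

Because the helper unit is ordered After= a Type=notify service, the 'conntrackd -n' request can no longer race with the opening of the UNIX socket.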

It's obvious to me that systemd is a better technology than sysvinit. The sysvinit approach has been OK for a while, and for me it was fascinating when I started developing init scripts and learning how things worked.
But now we have systemd. It's the default in Debian. It's far better.

Conclusions: yes, I will probably reintroduce sysvinit files just to avoid any flamewar.


[0] https://lists.debian.org/debian-devel/2016/08/msg00448.html
[1] https://tracker.debian.org/pkg/conntrack-tools
[2] https://lists.debian.org/debian-devel/2016/08/msg00456.html
[3] https://sources.debian.net/src/systemd/231-4/debian/systemd.NEWS/
[4] http://git.netfilter.org/conntrack-tools/tree/src/systemd.c

27 August, 2016 06:12PM by Arturo Borrero Gonzalez (noreply@blogger.com)

Dirk Eddelbuettel

RcppCNPy 0.2.5

A maintenance release of the RcppCNPy package is now on CRAN.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

This release just updates a few package internals such as the vignette, the DESCRIPTION and the README.md file, as well as the Travis script used for continuous integration.

Changes in version 0.2.5 (2016-08-26)

  • Synchronized code with the cnpy repository

  • Updated vignette

  • Expanded DESCRIPTION

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 August, 2016 01:12AM

August 26, 2016

Clint Adams

Any way the wind blows

NOAA decommissioning weather.noaa.gov led to breakage which reveals just how much duplication of code and effort there is for fetching and parsing weather data.

26 August, 2016 04:21PM

Norbert Preining

ESSLLI 2016 Course: Algebraic Specification and Verification with CafeOBJ

During this year’s ESSLLI (European Summer School in Logic, Language and Information) I was teaching a course on Algebraic Specification and Verification with CafeOBJ. Now that the course is over I can relax a bit and enjoy the beauty of the surrounding North Tyrol.

For those who couldn’t attend the ESSLLI or the course, here are the course materials:

Thanks to the participants for the interesting questions and participation.

What follows is the original description of the course.

Abstract

Ensuring correctness of complex systems like computer programs or communication protocols is gaining ever increasing importance. For correctness it does not suffice to consider the finished system; correctness is already necessary at the level of specification, that is, before the actual implementation starts. To this aim, algebraic specification and verification languages have been conceived. They aim at a mathematically correct description of the properties and behavior of the systems under discussion, and in addition often allow one to prove (verify) correctness within the same system.

This course will give an introduction to algebraic specification, its history, logic roots, and actuality with respect to current developments. Paired with the theoretical background we give an introduction to CafeOBJ, a programming language that allows both specification and verification to be carried out together. The course should enable students to understand the theoretical background of algebraic specification, as well as enable them to read and write basic specifications in CafeOBJ.

Description

CafeOBJ is a specification language based on three-way extensions to many-sorted equational logic: the underlying logic is order-sorted, not just many-sorted; it admits unidirectional transitions, as well as equations; it also accommodates hidden sorts, on top of ordinary, visible sorts. A subset of CafeOBJ is executable, where the operational semantics is given by a conditional order-sorted term rewriting system.

The language system CafeOBJ has been under constant development at the institute of the lecturers since the late 1980s. It is closely related to other algebraic specification languages in the OBJ family, including Maude. The CafeOBJ language and the range of verification methods and tools it supports – including its support for inductive theorem proving, verification of behavioral specifications, deductive invariant proofs, and reachability analysis of concurrent systems – have played a key role in both extending algebraic specification techniques and bringing them into contact with many software engineering applications.

The following topics will be discussed:

  • algebraic foundations: many-sorted algebras, order-sorted algebras, behavioral specification
  • computational foundations: rewriting
  • programming with CafeOBJ: language elements, modules, simple programs
  • CloudSync: presentation of an example cloud synchronization protocol and its verification

To make the lectures not too `heavy', we will structure each lecture into two parts: a first part providing an introduction to some theoretical concept or framework, and a second part dealing with actual programming and implementation. Especially for the second part of each lecture, students are encouraged to use their laptops to try out code and experiment.

26 August, 2016 09:35AM by Norbert Preining

Michal Čihař

CI coverage from Windows, Linux and OSX

Once I got CI working on multiple platforms, the obvious next step was to be able to aggregate coverage reports across them. This should not be that hard, right? Well, I've spent a couple of hours on that during the last few days.

On Linux and OSX it was pretty much straightforward. Both GCC and Clang support coverage, so it's just a matter of configuring them properly and collecting the coverage reports. I used my own solution for that in the past and it was really far from working well (somehow I never managed to get coverage fully uploaded to Codecov). Fortunately there exists a CMake script called CMake-codecov which does all the needed work and works out of the box with GCC and Clang (even on OSX). Well, it works on Travis only once you update the compilers and install the llvm-cov tool.

The Windows part on AppVeyor was much harder for me. This can be largely attributed to my lack of experience with Windows, and especially with development on Windows, in the past ten years (probably even more). The first challenge was to find something that can generate code coverage there.

After a lot of googling I settled on OpenCppCoverage, which seems to be the only free solution I was able to find. The good thing is that it can generate coverage in the Cobertura format that Codecov understands. There are also bad things that I've learned. First of all, it's quite hard to integrate this with CTest. There is no support for wrapping test calls in custom commands, so I've misused the memory checks for that purpose: I've written a small Python script which pretends to be the valgrind interface and calls OpenCppCoverage in the background (a sketch of the idea is below).
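
The actual wrapper lives in the Gammu repository; as a rough sketch of the idea (the option handling, file naming and OpenCppCoverage arguments here are simplified assumptions, not the real script), it boils down to something like this:

#!/usr/bin/env python
# Minimal sketch: pretend to be the valgrind command CTest invokes for
# memory checks, but run the test under OpenCppCoverage instead.
import os
import subprocess
import sys


def main():
    args = sys.argv[1:]

    # CTest passes valgrind-style options (e.g. --log-file=...) before the
    # actual test command; skip everything that looks like an option.
    while args and args[0].startswith('--'):
        args.pop(0)

    if not args:
        sys.exit('no test command given')

    # one Cobertura report per test case, named after the test binary
    report = os.path.basename(args[0]) + '.coverage.xml'

    cmd = [
        'OpenCppCoverage.exe',
        '--export_type', 'cobertura:' + report,
        '--',
    ] + args

    # propagate the test's exit code back to CTest
    sys.exit(subprocess.call(cmd))


if __name__ == '__main__':
    main()

Each test run then leaves behind its own Cobertura XML file, which is what later needs to be merged or uploaded.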

Now I had around 800 coverage files (one for each test case) and needed to deal with them somehow. The Codecov command-line client doesn't deal with this out of the box, so the obvious choice was to merge them before upload. There even seems to be a script doing that, but unfortunately running it on our coverage data got nowhere near completion within an hour, so that's not really a good choice. The second thing I tried was merging the binary coverage in OpenCppCoverage and then exporting to the Cobertura format. Obviously Gammu is a special project, as all I got from this attempt was a crashing OpenCppCoverage (it did merge some of the coverages, but it failed in the end without indicating any error).

In the end I settled on uploading the files in chunks to Codecov. This seems to work quite okay, though it is a bit slow, mostly due to the way the Codecov bash uploader prepares the data to upload (but this will hopefully be fixed soon).

Anyway, the goal has been reached: both Windows and Linux code now shows up in the coverage reports.

26 August, 2016 04:00AM

Dirk Eddelbuettel

RcppArmadillo 0.7.400.2.0

armadillo image

Another Armadillo 7.* release -- now at 7.400. We skipped the 7.300.* series release as it came too soon after our most recent CRAN release. Releasing RcppArmadillo 0.7.400.2.0 now keeps us at the (roughly monthly) cadence which works as a good compromise between getting updates out at Conrad's sometimes frantic pace and keeping CRAN (and Debian) uploads to about once per month.

So we may continue the pattern of helping Conrad with thorough regression tests by building against all (by now 253 (!!)) CRAN dependencies, but keeping releases at the GitHub repo and only uploading to CRAN at most once a month.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to a Matlab.

The new upstream release adds more new helper functions. Detailed changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.400.2.0 (2016-08-24)

  • Upgraded to Armadillo release 7.400.2 (Feral Winter Deluxe)

    • added expmat_sym(), logmat_sympd(), sqrtmat_sympd()

    • added .replace()

Changes in RcppArmadillo version 0.7.300.1.0 (2016-07-30)

  • Upgraded to Armadillo release 7.300.1

    • added index_min() and index_max() standalone functions

    • expanded .subvec() to accept size() arguments

    • more robust handling of non-square matrices by lu()

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 August, 2016 01:03AM

hackergotchi for Matthew Garrett

Matthew Garrett

Priorities in security

I read this tweet a couple of weeks ago:

and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.


26 August, 2016 12:02AM

August 25, 2016

hackergotchi for Alessio Treglia

Alessio Treglia

The Breath of Time

 

For centuries man hunted, brought the animals to pasture, cultivated fields and sailed the seas without any kind of tool to measure time. Back then, time was not measured, but only estimated with vague approximation, and its pace was enough to dictate the steps of the day and the life of man. Subsequently, for many centuries, hourglasses accompanied civilization with the slow flow of their sand grains. About hourglasses, Ernst Jünger writes in “Das Sanduhrbuch – 1954” (no English translation): “This small mountain, formed by all the lost moments that fell on each other, could be understood as a comforting sign that time disappears but does not fade. It grows in depth”.

For the philosophers of ancient Greece, time was just a way to measure how things move in everyday life, and in any case there was a clear distinction between “quantitative” time (Kronos) and “qualitative” time (Kairòs). According to Parmenides, time is mere guise, because its existence…

<Read More…[by Fabio Marzocca]>

25 August, 2016 04:33PM by Fabio Marzocca

Joerg Jaspert

New gnupg-agent in Debian

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work with a simple but unhelpful

sign_and_send_pubkey: signing failed: agent refused operation

This seems to come from systemd starting the agent, rather than a script at the start of the X session. And so it ends up with either no tty or an unusable one. A simple

gpg-connect-agent updatestartuptty /bye

updates that and voila, ssh agent functionality is back in.

Note: This assumes you have “enable-ssh-support” in your ~/.gnupg/gpg-agent.conf

25 August, 2016 09:55AM

hackergotchi for Norbert Preining

Norbert Preining

Gaming: Deus Ex Go

During long flights and lazy afternoons relaxing from teaching, I tried out another game on my Android device, Deus Ex Go. It is a turn-based game in the style of the Deus Ex series (many years ago I was a beta tester for LGP's Deus Ex version). You have to pass through a series of levels, each one consisting of a hexagonal grid with an entry and exit point, and some nasty villains or machines trying to kill you.

Deus-ex-go

Without any explanation given, you are thrown into the game and it takes a few iterations until you understand what kind of attacks you are facing, but once you have figured that out, it is a more or less simple puzzle of working out how to get to the exit. I played through the roughly 50 levels of the story mode and I think it was only in the last five that I once or twice had to actually stop and think hard to find a solution.

I found the game quite amusing at the beginning, but it soon became repetitive. Since you can play through the whole story mode in probably one long afternoon, that is not so much of a problem. A bigger problem is the apparently incredible battery usage of this game. Playing for a while without checking soon leaves you with a nearly empty battery.

Graphically well done, with more or less interesting gameplay, it still does not stand up to Monument Valley.

25 August, 2016 09:34AM by Norbert Preining

hackergotchi for Francois Marier

Francois Marier

Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

25 August, 2016 05:00AM

August 24, 2016

hackergotchi for Don Armstrong

Don Armstrong

H3ABioNet Hackathon (Workflows)

I'm in Pretoria, South Africa at the H3ABioNet hackathon which is developing workflows for Illumina chip genotyping, imputation, 16S rRNA sequencing, and population structure/association testing. Currently, I'm working with the imputation stream and we're using Nextflow to deploy an IMPUTE-based imputation workflow with Docker and NCSA's openstack-based cloud (Nebula) underneath.

The OpenStack command line clients (nova and cinder) seem to be pretty usable for automating bringing up a fleet of VMs, and the cloud-init package which is present in the images makes configuring them pretty simple.
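To give an idea of what that automation can look like, here is a rough sketch (image, flavor, keypair and file names are made up, and the real H3ABioNet setup certainly differs):

# Boot a small fleet of identical worker VMs with the nova CLI, handing
# each one a cloud-init user-data file so it configures itself on first boot.
import subprocess

IMAGE = 'ubuntu-16.04'          # hypothetical image name
FLAVOR = 'm1.medium'            # hypothetical flavor
KEYPAIR = 'hackathon-key'       # hypothetical SSH keypair
USER_DATA = 'worker-init.yaml'  # cloud-init config: packages, mounts, etc.

for i in range(5):
    subprocess.check_call([
        'nova', 'boot',
        '--image', IMAGE,
        '--flavor', FLAVOR,
        '--key-name', KEYPAIR,
        '--user-data', USER_DATA,
        'imputation-worker-{0}'.format(i),
    ])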

Now if I just knew of a better shared object store which was supported by Nextflow in OpenStack besides mounting an NFS share, things would be better.

You can follow our progress in our git repo: [https://github.com/h3abionet/chipimputation]

24 August, 2016 02:40PM

Zlatan Todorić

Take that boredom

While I was bored at Defcon, I took the smallest VPS in the DO offering (512MB RAM, 20GB disk), configured nginx on it, bought the domain zlatan.tech and cp'ed my blog data to blog.zlatan.tech. I thought it would just be a boredom project and I would tear it apart in a day or two, but it is still there.

Not only that, the droplet came with Debian 8.5 but I just added unstable and experimental to it and upgraded. Just to experiment and see how long it would take me to break it. To make it even more adventurous (and also force me not to take it too seriously, at least at this point) I did something Lars would scream about - I did not enable backups!

While having fun with it I added a letsencrypt certificate to it (wow, that was quite easy).

Then I installed and configured Tor. I ended up adding an .onion domain for it! It is: pvgbzphm622hv4bo.onion

My main blog is still going to be zgrimshell.github.io (for now at least) where I push my Nikola (static site generator written in python) generated content as git commits. To my other two domains (on my server) I just rsync the content now. Simple and efficient.

I must admit I like my blog layout. It is simple, easy to read, efficient and fast; I don't bother with comments, and writing a blog in markdown (inside a terminal, as every well-behaved hacker citizen does) while compiling it with Nikola is a breeze (and yes, I did choose Nikola because of Nikola Tesla and Python). Also I must admit that nginx is a pretty nice webserver; no need to explain the beauty of git, and I can't recommend rsync enough.

If anyone is interested in doing the same I am happy to talk about it but these tools are really simple (as I enjoy simple things and by simple I mean small tools, no complicated configs and easy execution).

24 August, 2016 05:45AM by Zlatan Todoric

August 23, 2016

Reproducible builds folks

Reproducible Builds: week 69 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday August 14 and Saturday August 20 2016:

Fasten your seatbelts

Important note: we have enabled build path variation for unstable now, so your package(s) might become unreproducible, while previously they were said to be reproducible… Given a specific build path they probably still are reproducible, but read on for the details in the tests.reproducible-builds.org section below! As said many times: this is still research and we are working to make it reality.

Media coverage

Daniel Stender blogged about python packaging and explained some caveats regarding reproducible builds.

Toolchain developments

Thomas Schmitt uploaded xorriso which now obeys SOURCE_DATE_EPOCH. As stated in its man pages:

ENVIRONMENT
[...]
SOURCE_DATE_EPOCH  belongs to the specs of reproducible-builds.org.  It
is supposed to be either undefined or to contain a decimal number which
tells the seconds since january 1st 1970. If it contains a number, then
it is used as time value to set the  default  of  --modification-date=,
--gpt_disk_guid,  and  --set_all_file_dates.  Startup files and program
options can override the effect of SOURCE_DATE_EPOCH.

Packages reviewed and fixed, and bugs filed

The following packages have become reproducible after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • vulkan/1.0.21.0+dfsg1-1 by Timo Aaltonen.

The following 2 packages were not changed, but have become reproducible due to changes in their build-dependencies: tagsoup tclx8.4.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Bug tracker house keeping:

  • Chris Lamb pinged 164 bugs he filed more than 90 days ago which have a patch and had no maintainer reaction.

Reviews of unreproducible packages

55 package reviews have been added, 161 have been updated and 136 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (16)
  • Santiago Vila (2)

diffoscope development

Chris Lamb, Holger Levsen and Mattia Rizzolo worked on diffoscope this week.

Improvements were made to SquashFS and JSON comparison, the https://try.diffoscope.org/ web service, documentation, packaging, and general code quality.

diffoscope 57, 58, and 59 were uploaded to unstable by Chris Lamb. Versions 57 and 58 were both broken, so Holger set up a job on jenkins.debian.net to test diffoscope on each git commit. He also wrote a CONTRIBUTING document to help prevent this from happening in future.

From these efforts, we were also able to learn that diffoscope is now reproducible even when built across multiple architectures:

< h01ger> | https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope.html shows these packages were built on amd64:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on i386:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on armhf:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb

And those also match the binaries uploaded by Chris in his diffoscope 59 binary upload to ftp.debian.org, yay! Eating our own dogfood and enjoying it!

tests.reproducible-builds.org

Debian related:

  • show percentage of results in the last 24/48h (h01ger)
  • switch python database backend to SQLAlchemy (Valerie)
  • enable build path variation for unstable and experimental for all architectures (h01ger)

The last change will probably have an impact you can see: your package might become unreproducible in unstable and this will be shown on tracker.debian.org, while it will still be reproducible in testing.

We've done this because we think reproducible builds are possible with arbitrary build paths. But: we don't think those are a realistic goal for stretch, where we still recommend using `.buildinfo` to record the build path and then doing rebuilds using that path.

We are doing this because, besides doing theoretical groundwork, we also have a practical goal: enabling users to independently verify builds. And if they can only do this with a fixed path, so be it. For now :)

To be clear: for Stretch we recommend that reproducible builds are done in the same build path as the "original" build.

Finally, and just for our future references, when we enabled build path variation on Saturday, August 20th 2016, the numbers for unstable were:

suite all reproducible unreproducible ftbfs depwait not for this arch blacklisted
unstable/amd64 24693 21794 (88.2%) 1753 (7.1%) 972 (3.9%) 65 (0.2%) 95 (0.3%) 10 (0.0%)
unstable/i386 24693 21182 (85.7%) 2349 (9.5%) 972 (3.9%) 76 (0.3%) 103 (0.4%) 10 (0.0%)
unstable/armhf 24693 20889 (84.6%) 2050 (8.3%) 1126 (4.5%) 199 (0.8%) 296 (1.1%) 129 (0.5%)

Misc.

Ximin Luo updated our git setup scripts to make it easier for people to write proper descriptions for our repositories.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

23 August, 2016 12:56PM

August 22, 2016

hackergotchi for Sylvain Le Gall

Sylvain Le Gall

Release of OASIS 0.4.7

I am happy to announce the release of OASIS v0.4.7.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor)
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation)

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the result of a huge amount of work. The addition of OMake as a plugin is a big step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement of using "chdir" to handle "-C", which led to propagating ~ctxt everywhere and designing OASISFileSystem).

I would like to thank again the contributors for this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

22 August, 2016 11:51PM by gildor

Luciano Prestes Cavalcanti

AppRecommender - Last GSoC Report

My work for Google Summer of Code was to create a new strategy in AppRecommender. This strategy takes a reference package, or a list of reference packages, analyzes the packages that the user has already installed, and makes a recommendation using the reference packages as a base. For example: if the user runs "$ sudo apt install vim", AppRecommender uses "vim" as the reference package and should recommend packages related to both "vim" and the other packages that the user has installed. This work is done and has been added to the official AppRecommender repository.
 
During the GSoC program, I made more contributions to the AppRecommender project, helping to improve the recommendations, the installation and the configuration, and helping with the Debian packaging.
 
The following link contains my commits on AppRecommender:
 
During the period intended for students to get to know the project's community, I talked with the Debian community about my project to get feedback and ideas. When talking to the Debian community on the IRC channels, we came up with the idea of using the popularity-contest data to improve the recommendations. I talked with my mentors, who approved the idea, and then we increased the project scope to use the popularity-contest data to improve the AppRecommender recommendations.
 
The popularity-contest has several privacy policy terms, so we did some research and published, on Planet Debian, a post that explains why we need the popularity-contest data to improve the recommendations and how we use this data. This post also contains an explanation of the risks and the measures taken to minimize them.
 
Then two activities were added. One of them is to create a script to be added to popularity-contest. This script gets the popularity-contest data, which is the users' lists of packages, and generates clusters that group these packages by analyzing similar users. The other activity is to add collaborative data into AppRecommender, which will download the clusters data and use it to improve the recommendations.
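The clustering idea can be pictured with a short sketch (purely illustrative; the real create_popcon_clusters.py in the repository linked below works differently in its details): treat each popularity-contest submission as a binary vector of installed packages and group similar users with k-means.

# Group popularity-contest submissions by the similarity of their installed
# packages, so AppRecommender can later use "users like you" data.
from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Each submission is a set of installed package names (toy data).
submissions = [
    {'vim', 'git', 'python3'},
    {'vim', 'git', 'gcc'},
    {'gimp', 'inkscape', 'krita'},
]

# Turn the package sets into sparse binary vectors.
vectorizer = DictVectorizer()
matrix = vectorizer.fit_transform([{pkg: 1 for pkg in s} for s in submissions])

# Cluster similar users; the resulting cluster data is what gets shipped
# to AppRecommender to improve its recommendations.
clusters = KMeans(n_clusters=2, random_state=0).fit(matrix)
print(clusters.labels_)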
 
The popularity-contest cluster script was done and reviewed by my mentor, but has not been integrated into popularity-contest yet. The usage of clusters data in AppRecommender has already been implemented, but is still not added to the official repository because it is waiting for the cluster script's acceptance into popularity-contest. This work is not complete, but I will continue working with AppRecommender and the Debian community, and with my mentors' help, I will finish this work.
 
The following link contains my commits on the repository with the popularity-contest cluster script feature, as well as other scripts that I used to improve my work; the only script that will be sent to popularity-contest is create_popcon_clusters.py:
 
The following link contains my commits on the repository with the AppRecommender collaborative data feature:
 
Google Drive folder with the patch:

22 August, 2016 04:54PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Linux 25 jubilee symposium

I gave a talk about the early days of Linux at the jubilee symposium arranged by the University of Helsinki CS department. Below is an outline of what I meant to speak about, but the actual talk didn't follow it exactly. You can compare these to the video once it comes online.

  • Linus and I met at uni, the only 2 Swedish speaking new students that year, so we naturally migrated towards each other.
  • After a year away for military service, got back in touch the following summer.
  • C & Unix course fall of 1990; Minix.
  • Linus didn't think atime updates in real time were plausible, but I showed him; funnily enough, atime updates have been an issue in Linux until fairly recently, since they slow things down (without being particularly useful)
  • Jan 5, 1991 bought his first PC (i386 + i387 + 4 MiB RAM and a small hard disk); he had a Sinclair QL before that.
  • Played Prince of Persia for a couple of months.
  • Then wanted to learn i386 assembly and multitasking.
  • A/B threading demo.
  • Terminal emulation, Usenet access from home.
  • Hard disk driver, mistaking hard disk for a modem.
  • More ambition, announced Linux to the world for the first time
  • first ever Linux installation.
  • Upload to ftp.funet.fi, directory name by Ari Lemmke.
  • Originally not free software, licence changed early 1992.
  • First mailing list was created and introduced me to a flood of email (managed with VAX/VMS MAIL and later mush on Unix).
  • I talked a lot with Linus about design at this time, but never really participated in the kernel work (partly because disagreeing with Linus is a high-stress thing).
  • However, I did write the first sprintf for the kernel, since Linus hadn't learnt about varargs functions in C; he then ruined it and added the comment "Wirzenius wrote this portably..." (add google hit count for wirzenius+fucked).
  • During 1992 Linux grew fast, and distros happened, and a lot of packaging and porting of software; porting was easier because Linus was happy to add/change things in the kernel to accommodate software
  • A lot of new users during 1992 as well.
  • End of 1992 I and a few others founded the Linux Documentation Project to help all the new users, some of whom didn't come from a Unix background.
  • In fact, things progressed so fast in 1992 that Linus thought he'd release 1.0 very soon, resulting in a silly sequence of version numbers: 0.12, 0.95, 0.96, 0.96b, 0.96c, 0.96c++2.
  • X server ported to Linux; almost immediate prediction of the year of the Linux desktop never happening unless ALL the graphics cards were supported immediately.
  • Linus was of the opinion that you needed one process (not thread) per window in X; I taught him event driven programming.
  • Bug in network code, resulting in ban on uni network.
  • Pranks in the shared office room.
  • We released 1.0 in an event at the CS dept in March, 1994; this included some talks and a ritual compilation of the release version during the event.

22 August, 2016 03:03PM

Satyam Zode

Google Summer of Code 2016 : Final Report

Project Title : Improving diffoscope tool and reproducibility of Debian packages

Project details

This project aims to improve the diffoscope tool and fix Debian packages which are unreproducible in the Reproducible Builds testing framework. diffoscope recursively unpacks archives of many kinds and transforms various binary formats into a more human-readable form to compare them. As a part of this project I worked on the argument completion feature and the feature for ignoring .buildinfo files. This project is a part of the Reproducible Builds effort.

Mentor and Co-Mentor

  • Jérémy Bobbio (Lunar) : Mentor
  • Reiner Herrmann (deki) : Co-Mentor
  • Holger Levsen (h01ger) : Co-Mentor
  • Mattia Rizzolo (mapreri) : Co-Mentor

Project Discussion

  • Introduction to Reproducible Builds in Debian

    The first time I came to know about Reproducible Builds was during DebConf 2015. I started to get involved at the start of March 2016. At the beginning Lunar suggested that I watch the talks listed on the Reproducible Builds wiki. I read the documentation on the Reproducible Builds site and started to participate in IRC discussions on #debian-reproducible.

  • Application Review Period

    During the proposal discussion period we discussed the areas where work needed to be done. I wrote the proposal and got it reviewed by the community on the mailing list. Simultaneously, I worked on bug #818111 and submitted a patch for it. That not only helped me to understand the concept of Reproducible Builds but also helped me to set up the testing environment required to check the reproducibility of Debian packages.

  • Community Bonding Period

    During the community bonding period I studied the codebase of diffoscope and also spent a fair amount of time learning Python 3 metaprogramming and other OOP concepts. We also discussed more about hiding differences and the options for doing so. I couldn't finish my project research work during this period since I had exams in May 2016, which consumed almost half of the community bonding period and week 1 of the coding period.

Project Implementation

Challenges and Work Left

  • To understand the main purpose of diffoscope in the context of Reproducible Builds, I had to go through the complete Reproducible Builds project. It took a significant amount of time to understand what Reproducible Builds is and why it's important for Free Software to build reproducibly. diffoscope is the last tool in the Reproducible Builds toolchain. It was a big challenge for me to understand the whole process and the objective of diffoscope.
  • Work Left:

Future work

  • Based on the research work and implementation done during the coding period, make diffoscope better and enhance its ignoring capabilities.
  • Improve the parallel processing feature of diffoscope. This particular problem is hard to understand and implement.
  • Make diffoscope better by solving existing bugs.

Acknowledgement

I would like to express my deepest gratitude to Lunar for mentoring me throughout the Google Summer of Code program and for being cool. Lunar's deep knowledge of diffoscope and Python skills helped me a lot throughout the project and we had great discussions. I would also like to thank the Debian community and Google for giving me this opportunity. Special thanks to the Reproducible Builds folks for all the guidance!

22 August, 2016 02:02PM

hackergotchi for DebConf team

DebConf team

Proposing speakers for DebConf17 (Posted by DebConf17 team)

As you may already know, next DebConf will be held at Collège de Maisonneuve in Montreal from August 6 to August 12, 2017. We are already thinking about the conference schedule, and the content team is open to suggestions for invited speakers.

Priority will be given to speakers who are not regular DebConf attendees, who are more likely to bring diverse viewpoints to the conference.

Please keep in mind that some speakers may have very busy schedules and need to be booked far in advance. So, we would like to start inviting speakers in the middle of September 2016.

If you would like to suggest a speaker to invite, please follow the procedure described on the Inviting Speakers page of the DebConf wiki.


DebConf17 team

22 August, 2016 01:44PM by DebConf Organizers

Vincent Sanders

Down the rabbit hole

My descent began with a user reporting a bug and I fear I am still on my way down.

Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg
The bug was simple enough: a Windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps: I am afraid they are part of every icon file, and many websites still have favicons using that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues, firstly that the decoder crashed when presented with badly encoded data and secondly that it failed to deal with incorrect header data.

This is typical of bug reports from real users, the obvious issues have already been encountered by the developers and unit tests formed to prevent them, what remains is harder to produce. After a debugging session with Valgrind and electric fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough as was working round the bad header value and after adding a unit test for the issue I almost moved on.

Almost...

american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg
We already used the bitmap test suite of images to check the library decode which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system) but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signed issues I finally had a library I was pretty sure was solid with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utter frustration and very poor return on investment of time. Following the quick start guide looked straightforward enough so I thought I would spend a short amount of time and maybe I would learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete

I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null

The result was immediate and not a little worrying, within seconds there were crashes and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation of the demands of the tool, my laptop was clearly not cut out to do this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making. It produced a graph that showed just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when I aborted the run.

I dove in to analyse the crashes and it immediately became obvious the main issue was caused when the test tool attempted allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image which fits easily within the fuzzer's default heap limit of 50 megabytes.

The main source of "hangs" also came from large allocations, so once the test was fixed afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief, at which point my laptop had a hard shutdown due to a thermal event!

Once the laptop cooled down I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created and the build replicated and instrumented.

AFL master node display
To fully utilise this system the next test run would utilise AFL in parallel mode. In this mode there is a single "master" running all the deterministic checks and many "secondary" instances performing random tweaks.

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is annoying and something I would like to see a convenience utility for.

The warren was left overnight with 19 instances and by morning had generated crashes again. This time though the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique

All the crashing test cases are available, and a simple file command immediately showed that the crashing test files had one thing in common: the height of the image was -2147483648. This seemingly odd number is meaningful to a programmer: it is the largest negative number which can be stored in a 32-bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}

The bug is in the negation that makes the height a positive number: for this value it results in height ending up as 0 after the existing check for zero has already passed, which causes a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.

Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like if there are any more crashes to find, they are much harder to uncover.

Main lessons learned are:
  • AFL is an easy to use and immensely powerful and effective tool. State of the art has taken a massive step forward.
  • The test harness is part of the test! Make sure it does not behave poorly and cause issues itself.
  • Even a library with extensive test coverage and real world users can benefit from this technique. But it remains to be seen how quickly the rate of return will reduce after the initial fixes.
  • Use the right tool for the job! Ensure you heed the warnings in the manual, as AFL uses a lot of resources including CPU, disk and memory.
I will of course be debugging any new crashes that occur and perhaps turning my sights to all the project's other unit-tested libraries. I will also be investigating the generation of our own custom test corpus from AFL to replace the demo set; this will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrate as part of a CI system.

22 August, 2016 12:24PM by Vincent Sanders (noreply@blogger.com)

hackergotchi for Michal Čihař

Michal Čihař

Continuous integration on multiple platforms

Over the weekend I've played with continuous integration for Gammu to make it run on more platforms. I had to remember many things from the Windows world on the way and the solution is not yet complete, but the basic build is working; the only problematic part is external dependencies.

First of all we already have Linux builds on Travis CI. These cover compilation with both GCC and Clang compilers, hopefully covering most of the possible problems.

Recently I've added OS X builds on Travis CI, which was pretty much painless and worked out of the box.

The next major platform to support is Windows. Once I discovered AppVeyor I thought it might be the way to go. They have free plans for open-source projects (though with only one parallel build compared to the four provided by Travis CI).

As our build system is cross-platform and based on CMake, it should work pretty much out of the box, right? Well, almost: tweaking the basics took some time (unfortunately there is no CMake support on AppVeyor, so you have to script it a bit).

The most painful things on the way:

  • finding the correct way to invoke the build and testsuite
  • our code was broken on Windows, making the testsuite fail
  • how to work with PowerShell (no, I'm not going to like it)
  • how to download an executable and install it into PATH
  • test output integration with AppVeyor - done using XSLT transformation and uploading test results manually
  • 32-bit / 64-bit mess, CMake happily finds 32-bit libs during the 64-bit build and vice versa, which makes the build fail later when linking - fixed by testing whether code can actually be built against the given library
  • 64-bit code crashes in the dummy driver, causing testsuite failures (this has to be something Windows-specific as the code works fine on 64-bit Linux) - this seems to be caused by too large allocations on the stack; moving them to the heap will fix this

You can check our current appveyor.yml in case you're going to try something similar. Current build results are on AppVeyor.

As a nice side effect, we now have up to date Windows binaries for Gammu.

Filed under: Debian English Gammu | 0 comments

22 August, 2016 10:00AM

NOKUBI Takatsugu

The 9th typhoon looks like Debian swirl logo

According to my follower’s tweet:

The typhoon image and a horizontally flipped Debian logo look the same.

22 August, 2016 09:01AM by knok

Zlatan Todorić

When you wake up with a feeling

I woke up at 5am. Somehow I made myself go back to sleep again soon. Woke up at 6am. Such is the life of jet-lag. Or I am just getting too old for it.

But the truth wouldn't be complete with only that assertion. I woke inspired and tired at the same time. Tired because I am doing very time-consuming things. Also, at the same time, very emotional things. AND, at the exact same time, things that inspire me.

On paper, I am technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift, but also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I was also playing first line of technical support (forums, IRC and email). Actually I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolving (hopefully) Tor and Cloudflare issues for it, the annoying caching system for forums, stopping forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all mails from) people who were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the speediest person for all our users' needs, but I hope they understand and forgive me.

I was doing all that while I was researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) that our kernel builds were failing. Worked on pushing touchpad patches (not so good, but we are still working on them) upstream (and they ended up being upstreamed). While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mis-management. Metadata not working well. PureBrowser getting broken all the time. Tor Browser out of date. No real ISO updates. Wrong sources.list entries and so on.

And the hardest part was that I was doing all this with very limited scope and even more limited resources. So what kept me going, what is pushing me forward and what am I doing?

One philosophy - Free software. Let me not explain it as a technical matter. Let me explain it as a social movement. In an age where people are "bombed" by media, by perpetually lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of Inquisitions. Free software not only preserves your Freedom to use software sources, but it also preserves your Freedom to think, to think out of the box, and not to be punished for that. It preserves the Freedom to live - to choose what and when to do, without having a negative impact on your life or other people's lives. The Freedom to be transparent and to share. Because not only do ideas grow with sharing, but we, as human beings, grow as we share. The Freedom to say "NO".

NO. I somehow learnt, and personally think, that the Freedom to say NO is the most important Freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions. No, I will not negotiate my Freedom for your convenience (also, such Freedom is not real anyway, and it is a matter of time before you will be blown away by such an illusion). No, I will not accept your credit because it has STRINGS attached to it which you either don't present or blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have an account on the social media where the majority of people are. No, I will not have a pacemaker which is a black box with proprietary (buggy) software, harvesting my data without me being able to look at it.

Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think like that). I will try to preserve everyone's Freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not ignore facts and actions done by me and other people. Yes, I have the right to be imperfect and make mistakes as long as I acknowledge and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes means actually getting rid of other/bad features).

This all blends with my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of Freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional, and sometimes even my standard, ways of thinking. I have seen that the non-existent infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the time when I am able to say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why. Came to a very small and not that efficient team.

Now, we have employed a dedicated and hard-working person for operations (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infra. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it in a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infra. We have a GNOME Board of Directors person (Jeff) who is trying to light up our image in the world (working with James, to try to bring some light into our shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security, whose members I don't want to name now as we are preparing to announce that soon (and trust me, we have good people in here).

But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about Freedom and I trust them on their paths. Trust is always important, but in Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines because everyone else thinks it is important (and rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share with no strings attached.

Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone in Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my and my team's work aren't as effective as all the lovely words of people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work and showing an understanding of the underlying amount of work and issues.

While we are limited in resources, we have had the occasional outcry from the community wanting to help us. Now I want to help them to help me (you see the Freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on the entries so we can see what the community wants, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal addresses or [hr, pr, feedback]@puri.sm).

To finish this piece of writing (as it is 8am here and I still want to rest a bit because I will have meetings for 6 hours straight today) - I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard with all the limitations which come naturally today with our kind of work), we still create products, we still ship them, we still improve step by step, we still hire and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - to integrate and cooperate with most of the FLOSS ecosystem.

P.S. Yes, I also (finally!) became an official Debian Developer - I still haven't had time to sit down and properly think and cry (as every good man does) about it.

22 August, 2016 06:45AM by Zlatan Todoric

hackergotchi for Christian Perrier

Christian Perrier

[LIFE] Running activities - Ultra Trail du Mont-Blanc

Hello dear readers,

It's been ages since I last blogged. Being far less active in Debian than I've been in the past, I guess this is a logical consequence.

However, I'm still active as you may witness if you read the debian-boot mailing list : I still consider myself part of the D-I team and I'm maintaining a few sports-related packages.

Most of you know what has taken precedence over Debian development, namely trail and ultra-trail running. And, well, it hasn't decreased, far from that : I have run about 10 races already this year, 6 of them being above 50km, and I ran my favourite 100km mountain race in early July for the second year in a row.

So, the upcoming week, I'll be trying to reach what is usually considered as the Grail of ultra-trail runners : the Ultra-Trail du Mont-Blanc race in Chamonix.

The race is fairly simple : run all around the Mont-Blanc summits, for a 160km race with a bit less than 10,000 meters positive climb. The race itself takes place between 800 and 2700 meters (so no "high mountain") and I expect to complete it (if I succeed) in about 40 hours.

I'm very confident (maybe too much?) as I successfully completed a much more difficult race last year (only 144km, but over 11,000 meters positive climb and a much more difficult path...it took me over 50 hours to complete it).

You can follow me on the live tracking site. The race starts on Friday August 26th, 18:00 CET DST.

If everything goes well, I have great projects for next year, including a 100-mile race in Colorado in August (we'll be traveling in the USA for over 3 weeks, peaking with the solar eclipse of August 21st in Kansas City).

22 August, 2016 05:47AM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

go-wmata - golang bindings to the DC metro system

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area, and want to interface to the WMATA data.

As a proof of concept, I wrote a yo bot called @WMATA, where it returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is. I’d love help with this (or pointers on how to learn how to do this right).

22 August, 2016 02:16AM

August 21, 2016

Tomasz Buchert

A new version of pristine-tar

Some of you may have noticed that a new version of pristine-tar was released in Debian unstable. Well, actually, 3 versions were released recently, since a regression was found very quickly (#835586) and, sure enough, I found another one after uploading a fix for the first.

The most important new feature is the ability to use xdelta3 instead of xdelta for delta generation. This doesn’t concern 99.9% of users and is disabled by default for backwards compatibility, but it is going to become the default in the future. The new format supports bigger files (see #737499) and xdelta3 seems better maintained.

Enjoy the new version and please report any problems you find.

21 August, 2016 10:42PM

hackergotchi for Cyril Brulebois

Cyril Brulebois

Freelance Debian consultant: running DEBAMAX

Executive summary

Since October 2015, I've been running a FLOSS consulting company, specialized on Debian, called DEBAMAX.

DEBAMAX logo

Longer version

Everything started two years ago. Back then I blogged about one of the biggest changes in my life: trying to find the right balance between volunteer work as a Debian Developer, and entrepreneurship as a Freelance Debian consultant. Big change because it meant giving up the comfort of the salaried world, and figuring out whether working this way would be sufficient to earn a living…

I experimented for a while under a simplified status. It comes with a number of limitations but that’s a huge win compared to France’s heavy company-related administrativia. Here’s what it looked like, everything being done online:

  • 1 registration form to begin with: wait a few days, get an identifier from INSEE, mention it in your invoices, there you go!

  • 4 tax forms a year: taxes can be declared monthly or quarterly, I went for the latter.

A number of things became quite clear after a few months:

  • I love this new job! Sharing my Debian knowledge with customers, and using it to help them build/improve/stabilise their products and their internal services feels great!

  • Even if I wasn't aware of that initially, it seems like I've got a decent network already: Debian Developers, former coworkers, and friends thought about me for their Debian-related tasks. It was nice to hear about their needs, say yes, sign paperwork, and start working right away!

  • While I'm trying really hard not to get too optimistic (achieving a given turnover on the first year doesn't mean you're guaranteed to do so again the following year), it seemed to go well enough for me to consider switching from this simplified status to a full-blown company.

Thankfully I was eligible to be accompanied by the local Chamber of Commerce and Industry (CCI Rennes), which provides teaching sessions for new entrepreneurs, coaching, and meeting opportunities (accountants, lawyers, insurance companies, …). Summer in France is traditionally rather quiet (read: almost everybody is on vacation), so DEBAMAX officially started operating in October 2015. Besides various administrative and accounting duties, running this company doesn't change the way I've been working since July 2014, so everything is fine!

As before, I won't be writing much about it through my personal blog, except for an occasional update every other year; if you want to follow what's happening with DEBAMAX:

  • Website: debamax.com — in addition to the usual company, services, and references sections, it features a blog (with RSS) where some missions are going to be detailed (when it makes sense to share and when customers are fine with it). Spoiler alert: Tails is likely to be the first success story there. ;)
  • Twitter: @debamax — which is going to be retweeted for a while from my personal account, @CyrilBrulebois.

21 August, 2016 10:35PM

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/30-33

not much to report but I got at least some RC bugs fixed in the last weeks. again mostly perl stuff:

  • #759979 – src:simba: "simba: FTBFS: RoPkg::Rsync ...failed! (needed)"
    keep ExtUtils::AutoInstall from downloading stuff, upload to DELAYED/7
  • #817549 – src:libropkg-perl: "libropkg-perl: Removal of debhelper compat 4"
    use debhelper compatibility level 5, upload to DELAYED/7
  • #832599 – iodine: "Fails to start after upgrade"
    update service file and use deb-systemd-helper in postinst
  • #832832 – src:perlbrew: "perlbrew: FTBFS: Tests failures"
    add patch to deal with removed old perl version (pkg-perl)
  • #832833 – src:libtest-valgrind-perl: "libtest-valgrind-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #832853 – src:libmojomojo-perl: "libmojomojo-perl: FTBFS: Tests failures"
    close, the underlying problem is fixed (pkg-perl)
  • #832866 – src:libclass-c3-xs-perl: "libclass-c3-xs-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #834210 – libdancer-plugin-database-core-perl: "libdancer-plugin-database-perl: FTBFS: Failed 1/5 test programs. 6/45 subtests failed."
    upload new upstream release (pkg-perl)
  • #834793 – libgit-wrapper-perl: "libgit-wrapper-perl: FTBFS: t/basic.t whitespace changes"
    add patch from upstream bug (pkg-perl)

21 August, 2016 09:56PM

hackergotchi for David Moreno

David Moreno

WIP: Perl bindings for Facebook Messenger

A couple of weeks ago I started looking into wrapping the Facebook Messenger API into Perl. Since all the calls are extremely simple over a REST API, I thought it could be even easier and simpler to provide a small framework for hooking up bots using PSGI/Plack.

So I started putting some things together and with a very simple interface you could do a lot:

use strict;
use warnings;
use Facebook::Messenger::Bot;

my $bot = Facebook::Messenger::Bot->new({
    access_token   => '...',
    app_secret     => '...',
    verify_token   => '...'
});

$bot->register_hook_for('message', sub {
    my $bot = shift;
    my $message = shift;

    my $res = $bot->deliver({
        recipient => $message->sender,
        message => { text => "You said: " . $message->text() }
    });
    ...
});

$bot->spin();

You can hook a script like that as a .psgi file and plug it in to whatever you want.

Once you have some more decent user flow and whatnot, you can build something like:



…using a simple script like this one.

The work is not finished and not yet CPAN-ready, but I’m posting this in case someone wants to join me in this mini-project or has suggestions; the work in progress is here.

Thanks!

21 August, 2016 05:18PM

Cosmetic changes to my posts archive

I’ve been doing a lot of cosmetic/layout changes to the nearly 750 posts in my blog’s archive. I apologize if this has broken some feed readers or aggregators. It appears that Hexo still needs better syndication support.

21 August, 2016 04:55PM

Running find with two or more commands to -exec

I spent a couple of minutes today trying to understand how to make find(1) execute two commands on the same target.

Instead of this or any similar crappy variants:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" && rm -rf "{}" \;

Try something like this:

$ find . -type d -iname "*0" -mtime +60 -exec scp -r -P 1337 "{}" "meh.server.com:/mnt/1/backup/storage" \; -exec rm -rf "{}" \;

Which is:

$ find . -exec command {} \; -exec another-command {} \;

And you’re good to go.

21 August, 2016 04:11PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppEigen 0.3.2.9.0

A new upstream release 3.2.9 of Eigen is now reflected in a new RcppEigen release 0.3.2.9.0 which got onto CRAN late yesterday and is now going into Debian. Once again, Yixuan Qiu did the heavy lifting of merging upstream (and two local twists we need to keep around). Another change is by James "coatless" Balamuta who added a row exporter.

The NEWS file entry follows.

Changes in RcppEigen version 0.3.2.9.0 (2016-08-20)

  • Updated to version 3.2.9 of Eigen (PR #37 by Yixuan closing #36 from Bob Carpenter of the Stan team)

  • An exporter for RowVectorX was added (thanks to PR #32 by James Balamuta)

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 August, 2016 12:21PM

August 20, 2016

Daniel Stender

Collected notes from Python packaging

Here are some collected notes on particular problems from packaging Python stuff for Debian; more of these will follow in the future. Some of the issues discussed here might be rather simple and even benign for the experienced packager, but maybe this is helpful for people coming across the same issues for the first time and wondering what's going wrong. Some of the things discussed aren't easy, though. Here are the notes for this posting, in no particular order.

UnicodeDecodeError on open() in Python 3 running in non-UTF-8 environments

I came across this problem again recently while packaging httpbin 0.5.0. The build breaks in the following way, e.g. when trying to build with sbuild in a chroot; this is the first run of setup.py with the default Python 3 interpreter:

I: pybuild base:184: python3.5 setup.py clean 
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
E: pybuild pybuild:274: clean: plugin distutils failed with: exit code=1: python3.5 setup.py clean 

One comes across UnicodeDecodeErrors quite often on different occasions, mostly related to Python 2 packaging (like here). Here the problem is that setup.py tries to read the long_description from the opened UTF-8 encoded file README.rst:

long_description = open(
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()

This is a problem for Python 3.5 (and other Python 3 versions) when setup.py is executed by an interpreter run in a non-UTF-8 environment1:

$ LANG=C.UTF-8 python3.5 setup.py clean
running clean
$ LANG=C python3.5 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 5, in <module>
    os.path.join(os.path.dirname(__file__), 'README.rst')).read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2386: ordinal not in range(128)
$ LANG=C python2.7 setup.py clean
running clean

The reason for this error is that the default encoding of the file object returned by open(), e.g. in Python 3.5, is platform dependent, so read() fails on it when there's a mismatch:

>>> readme = open('README.rst')
>>> readme
<_io.TextIOWrapper name='README.rst' mode='r' encoding='ANSI_X3.4-1968'>
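
The encoding picked here comes from the locale: in Python 3, text-mode open() uses whatever locale.getpreferredencoding(False) returns as its default, which can be checked directly. On a glibc-based system this gives, for example:

$ LANG=C python3.5 -c "import locale; print(locale.getpreferredencoding(False))"
ANSI_X3.4-1968
$ LANG=C.UTF-8 python3.5 -c "import locale; print(locale.getpreferredencoding(False))"
UTF-8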

Non-UTF-8 build environments (because $LANG isn't set at all, or is set to C) are common or even the default in Debian packaging; for example, the primary environment of the continuous integration resp. test building for reproducible builds features exactly that (see here). The same is true for the base system of the sbuild environment:

$ schroot -d / -c unstable-amd64-sbuild -u root
(unstable-amd64-sbuild)root@varuna:/# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

A problem like this is mostly solved elegantly by installing a workaround in debian/rules. A quick and easy fix is to add export LC_ALL=C.UTF-8 there, which supersedes the locale settings of the build environment. $LC_ALL is commonly used to change existing locale settings; it overrides all other locale variables with the same value (see here). C.UTF-8 is a UTF-8 locale which is available by default in a base system; it can be used without installing the locales package (or worse, the huge locales-all package):

(unstable-amd64-sbuild)root@varuna:/# locale -a
C
C.UTF-8
POSIX

Ideally, a problem like this should be taken care of upstream. In Python 3, the default open() is io.open(), for which a specific encoding can be given, so that the UnicodeDecodeError disappears. Python 2's built-in open() doesn't support that, but io.open() is available there, too. A fix which works for both Python branches goes like this:

import io
long_description = io.open(
    os.path.join(os.path.dirname(__file__), 'README.rst'), encoding='utf-8').read()

non-deterministic order of requirements in egg-info/requires.txt

This problem appeared in prospector/0.11.7-5 in the reproducible builds test builds; that was the first package revision of Prospector running on Python 32. It was revealed that the sorting order of the [with_everything] dependencies resp. requirements in prospector-0.11.7.egg-info/requires.txt differed when the package was built on varying systems:

$ debbindiff b1/prospector_0.11.7-5_amd64.changes b2/prospector_0.11.7-5_amd64.changes 
{...}
├── prospector_0.11.7-5_all.deb
│   ├── file list
│   │ @@ -1,3 +1,3 @@
│   │  -rw-r--r--   0        0        0        4 2016-04-01 20:01:56.000000 debian-binary
│   │ --rw-r--r--   0        0        0     4343 2016-04-01 20:01:56.000000 control.tar.gz
│   │ +-rw-r--r--   0        0        0     4344 2016-04-01 20:01:56.000000 control.tar.gz
│   │  -rw-r--r--   0        0        0    74512 2016-04-01 20:01:56.000000 data.tar.xz
│   ├── control.tar.gz
│   │   ├── control.tar
│   │   │   ├── ./md5sums
│   │   │   │   ├── md5sums
│   │   │   │   │┄ Files in package differ
│   ├── data.tar.xz
│   │   ├── data.tar
│   │   │   ├── ./usr/share/prospector/prospector-0.11.7.egg-info/requires.txt
│   │   │   │ @@ -1,12 +1,12 @@
│   │   │   │  
│   │   │   │  [with_everything]
│   │   │   │ +pyroma>=1.6,<2.0
│   │   │   │  frosted>=1.4.1
│   │   │   │  vulture>=0.6
│   │   │   │ -pyroma>=1.6,<2.0

The reproducible builds folks recognized this to be a toolchain problem and set up the issue randonmness_in_python_setuptools_requires.txt to cover it. Plus, a wishlist bug against python-setuptools was filed to fix this. The patch provided by Chris Lamb adds sorting of the dependencies in requires.txt for Setuptools by adding sorted() (stdlib) to _write_requirements() in command/egg_info.py:

--- a/setuptools/command/egg_info.py
+++ b/setuptools/command/egg_info.py
@@ -406,7 +406,7 @@ def warn_depends_obsolete(cmd, basename, filename):
 def _write_requirements(stream, reqs):
     lines = yield_lines(reqs or ())
     append_cr = lambda line: line + '\n'
-    lines = map(append_cr, lines)
+    lines = map(append_cr, sorted(lines))
     stream.writelines(lines)

O.k., so nothing in the toolchain sorts these requirements alphanumerically to make the differences disappear, but what is the reason for these differences in the Prospector packages in the first place? The problem is somewhat subtle. In setup.py, [with_everything] is a dictionary entry of _OPTIONAL (which is used for extras_require) that is created by a list comprehension out of the other values in that dictionary. The code goes like this:

_OPTIONAL = {
    'with_frosted': ('frosted>=1.4.1',),
    'with_vulture': ('vulture>=0.6',),
    'with_pyroma': ('pyroma>=1.6,<2.0',),
    'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
}
_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]

The resulting _OPTIONAL dictionary, including the new key with_everything (which without further sorting is the source of the list of requirements in requires.txt), then looks e.g. like this (code snippet run through in my IPython):

In [3]: _OPTIONAL
Out[3]: 
{'with_everything': ['vulture>=0.6', 'pyroma>=1.6,<2.0', 'frosted>=1.4.1'],
 'with_vulture': ('vulture>=0.6',),
 'with_pyroma': ('pyroma>=1.6,<2.0',),
 'with_frosted': ('frosted>=1.4.1',),
 'with_pep257': ()}

That list comprehension iterates over the other dictionary entries to gather the value of with_everything, and, as Python programmers of course know, dictionaries are unordered: the order of the key-value pairs isn't fixed, but is determined by certain runtime conditions. That's the source of the non-determinism in this Debian package revision of Prospector.

This problem has been fixed by a patch, which just presorts the list of requirements before it gets added to _OPTIONAL:

@@ -76,8 +76,8 @@
     'with_pyroma': ('pyroma>=1.6,<2.0',),
     'with_pep257': (),  # note: this is no longer optional, so this option will be removed in a future release
 }
-_OPTIONAL['with_everything'] = [req for req_list in _OPTIONAL.values() for req in req_list]
-
+with_everything = [req for req_list in _OPTIONAL.values() for req in req_list]
+_OPTIONAL['with_everything'] = sorted(with_everything)

Unlike the list method sort(), the function sorted() does not change the iterable in place, but returns a new object. Either could be used here. As a side note, egg-info/requires.txt isn't even needed, but that's another issue.
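
To make the difference concrete, here is a minimal sketch reusing the _OPTIONAL dictionary shown above; sorted() yields the same deterministic list regardless of the dictionary's iteration order, while sort() works in place:

# Reusing the _OPTIONAL dictionary from setup.py shown above.
_OPTIONAL = {
    'with_frosted': ('frosted>=1.4.1',),
    'with_vulture': ('vulture>=0.6',),
    'with_pyroma': ('pyroma>=1.6,<2.0',),
    'with_pep257': (),
}
with_everything = [req for req_list in _OPTIONAL.values() for req in req_list]

# sorted() returns a new, deterministically ordered list and leaves the
# original untouched: always ['frosted>=1.4.1', 'pyroma>=1.6,<2.0', 'vulture>=0.6'],
# no matter in which order the dictionary was iterated.
print(sorted(with_everything))

# list.sort() sorts in place and returns None, so it could not be used
# inline in the dictionary assignment.
with_everything.sort()
print(with_everything)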


  1. As an alternative to prefixing LC_ALL, env -i could be used to get an empty environment. 

  2. 0.11.7-4 already did, but that package revision was in experimental due to the switch to Python 3 and therefore not tested by the reproducible builds continuous integration. 

20 August, 2016 11:17PM by Daniel Stender

hackergotchi for Francois Marier

Francois Marier

Replacing a failed RAID drive

Translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/.

Here is the procedure I followed to replace a failed RAID drive on a Debian machine.

Replacing the drive

After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to remove:

smartctl -a /dev/sdb

With that information in hand, I shut down the computer, removed the faulty drive and put a new blank drive in its place.

Initializing the new drive

After booting with the new blank drive, I copied the partition table using parted.

First, I examined the partition table on the healthy hard drive:

$ parted /dev/sda
unit s
print

and created a new partition table on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

Then I used the mkpart command for each of my 4 partitions and gave them all the same size as the matching partitions on /dev/sda.

Finally, I used the toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) commands on all the RAID partitions, before checking with the print command that the two partition tables are now identical.

Resyncing/recreating the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the sync status with:

watch -n 2 cat /proc/mdstat

To speed up the process, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (it is not possible to restore a RAID0 array, it has to be recreated entirely), I had to do two things:

  • replace the UUID for swap in /etc/fstab with the UUID given by the mkswap command (or by using the blkid command and taking the UUID for /dev/md1)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by the mdadm --detail --scan command

Making sure the machine boots with the replacement drive

To be certain that I can boot the machine with either of the two drives, I reinstalled the grub boot loader onto the new drive:

grub-install /dev/sdb

before rebooting with both drives connected. This confirms that my configuration works properly.

Then I booted without the /dev/sda drive to make sure everything would still work if that drive decided to die and leave me with only the new one (/dev/sdb).

This test obviously breaks the sync between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once all that was done, I rebooted again with both drives to confirm that everything works fine:

cat /proc/mdstat

and then ran a full SMART test on the new drive:

smartctl -t long /dev/sdb

20 August, 2016 10:00PM

Jose M. Calhariz

Availabilty of at at the Major Linux Distributions

In this blog post I will cover which versions of the at software are used by the leading Linux distributions, as reported by LWN.

Currently some distributions are lagging on the use of the latest at software.

20 August, 2016 06:11PM by Jose M. Calhariz

Russell Coker

Basics of Backups

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

With choosing backup devices consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then a SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after they happen. Also, in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.
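
For readers comfortable with a little scripting, here is a rough Python sketch of the daily/weekly/monthly retention idea described above; it only decides which dated backups to keep and is meant as an illustration, not a complete tool:

from datetime import date

def backups_to_keep(dates, daily=7, weekly=4, monthly=24):
    # Keep the newest `daily` backups, the newest backup of each of the
    # last `weekly` ISO weeks, and the newest backup of each of the last
    # `monthly` months; everything else can be recycled.
    dates = sorted(dates, reverse=True)
    keep = set(dates[:daily])
    seen_weeks, seen_months = set(), set()
    for d in dates:
        week = d.isocalendar()[:2]          # (ISO year, ISO week number)
        if week not in seen_weeks and len(seen_weeks) < weekly:
            seen_weeks.add(week)
            keep.add(d)
        month = (d.year, d.month)
        if month not in seen_months and len(seen_months) < monthly:
            seen_months.add(month)
            keep.add(d)
    return sorted(keep)

# Example: daily backups over three weeks.
print(backups_to_keep([date(2016, 8, day) for day in range(1, 21)]))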

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits and can fit hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from a disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.
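
If your backups are plain copies of files, a small script can do that comparison for you. Here is a minimal Python sketch that hashes the files in the original directory and in a backup copy (both given on the command line) and reports anything missing or mismatching:

import hashlib
import sys
from pathlib import Path

def file_digest(path):
    # SHA-256 of a file, read in 1 MiB chunks so large files are fine.
    h = hashlib.sha256()
    with path.open('rb') as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            h.update(chunk)
    return h.hexdigest()

def compare_backups(original, backup):
    original, backup = Path(original), Path(backup)
    for src in original.rglob('*'):
        if not src.is_file():
            continue
        relative = src.relative_to(original)
        copy = backup / relative
        if not copy.is_file():
            print('missing from backup:', relative)
        elif file_digest(src) != file_digest(copy):
            print('MISMATCH:', relative)

if __name__ == '__main__':
    compare_backups(sys.argv[1], sys.argv[2])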

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore, that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.
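
Python's standard zipfile module can do that CRC check without extracting anything; here is a minimal sketch that tests the backup archives given on the command line:

import sys
import zipfile

for archive in sys.argv[1:]:
    try:
        with zipfile.ZipFile(archive) as z:
            # testzip() re-reads every member and verifies its CRC; it
            # returns the name of the first corrupt member, or None.
            bad = z.testzip()
    except zipfile.BadZipFile:
        print(archive, 'is not a readable zip archive')
        continue
    if bad is None:
        print(archive, 'is OK')
    else:
        print(archive, 'has a corrupt member:', bad)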

Failing Systems – update 2016-08-22

When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media to your regular backups in case the data is corrupt so you don’t overwrite good backups with bad ones.

One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

When in doubt make a backup.

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

20 August, 2016 02:04AM by etbe

August 19, 2016

hackergotchi for Joey Hess

Joey Hess

keysafe alpha release

Keysafe securely backs up a gpg secret key or other short secret to the cloud. But not yet. Today's alpha release only supports storing the data locally, and I still need to finish tuning the argon2 hash difficulties with modern hardware. Other than that, I'm fairly happy with how it's turned out.

Keysafe is written in Haskell, and many of the data types in it keep track of the estimated CPU time needed to create, decrypt, and brute-force them. Running that through an AWS spot pricing cost model lets keysafe estimate how much an attacker would need to spend to crack your password.

(Screenshot: keysafe's estimated cracking cost for the password "makesad spindle stick")

If you'd like to be an early adopter, install it like this:

sudo apt-get install haskell-stack libreadline-dev libargon2-0-dev zenity
stack install keysafe

Run ~/.local/bin/keysafe --backup --store-local to back up a gpg key to ~/.keysafe/objects/local/

I still need to tune the argon2 hash difficulty, and I need benchmark data to do so. If you have a top of the line laptop or server class machine that's less than a year old, send me a benchmark:

~/.local/bin/keysafe --benchmark | mail keysafe@joeyh.name -s benchmark

Bonus announcement: http://hackage.haskell.org/package/zxcvbn-c/ is my quick Haskell interface to the C version of the zxcvbn password strength estimation library.

PS: Past 50% of my goal on Patreon!

19 August, 2016 11:48PM

Simon Désaulniers

[GSOC] Final report




The Google Summer of Code is now over. It has been a great experience and I’m very glad I made it. I’ve had the pleasure to contribute to a project showing very good promise for the future of communication: Ring. The words “privacy” and “freedom” are more and more present in people's minds when it comes to technology. All sorts of projects aiming at these goals are coming to life every day, like decentralized web networks (ZeroNet, for example), blockchain based applications, etc.

Debian

I’ve had the great opportunity to go to the Debian Conference 2016. I’ve been introduced to the Debian community and Debian developers (“DD” for short :p). I was lucky to meet great people like the executive director of the FSF, John Sullivan. You can have a look at my Debian conference report here.

If you want to read my debian reports, you can do so by browsing the “Google Summer Of Code” category on this blog.

What I have done

Ring has been in the official Debian repositories since June 30th. This is good news for the GNU/Linux community. I’m proud to say that I’ve been able to contribute to Debian by working on OpenDHT and developing new functionality to reduce network traffic. The goal behind this was to finally optimize the data persistence traffic consumption on the DHT.

Github repository: https://github.com/savoirfairelinux/opendht

Queries

Issues:

  • #43: DHT queries

Pull requests:

  • #79: [DHT] Queries: remote values filtering
  • #93: dht: return consistent query from local storage
  • #106: [dht] rework get timings after queries in master

Value pagination

Issues:

  • #71: [DHT] value pagination

Pull requests:

  • #110: dht: Value pagination using queries
  • #113: dht: value pagination fix

Indexation (feat. Nicolas Reynaud)

Pull requests:

  • #77: pht: fix invalid comparison, inexact match lookup
  • #78: [PHT] Key consistency

General maintenance of OpenDHT

Issues:

  • #72: Packaging issue for Python bindings with CMake: $DESTDIR not honored
  • #75: Different libraries built with Autotools and CMake
  • #87: OpenDHT does not build on armel
  • #92: [DhtScanner] doesn’t compile on LLVM 7.0.2
  • #99: 0.6.2 filenames in 0.6.3

Pull requests:

  • #73: dht: consider IPv4 or IPv6 disconnected on operation done
  • #74: [packaging] support python installation with make DESTDIR=$DIR
  • #84: [dhtnode] user experience
  • #94: dht: make main store a vector>
  • #94: autotools: versionning consistent with CMake
  • #103: dht: fix sendListen loop bug
  • #106: dht: more accurate name for requested nodes count
  • #108: dht: unify bootstrapSearch and refill method using node cache

View by commits

You can have a look at my work by commits just by clicking this link: https://github.com/savoirfairelinux/opendht/commits/master?author=sim590

What’s left to be done

Data persistence

The only thing left before my work is complete is to rigorously test the data persistence behavior to demonstrate the network traffic reduction. To do so we use our benchmark Python module. We are able to analyse traffic and produce plots like this one:


Plot: 32 nodes, 1600 values with normal condition test.

This particular plot was drawn before the enhancements. We are confident that the results will improve with the work produced during the GSOC.

TCP

In the middle of the GSOC, we realized that switching from UDP to TCP would require too much effort in too short a lapse of time. Also, it is not yet clear whether we should really do that.

19 August, 2016 04:04PM