September 26, 2016

Kees Cook

security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project, nor must they be met only by people explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Broadwell), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
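
For example, a sketch of poking one of those triggers from userspace (assuming a kernel built with CONFIG_LKDTM and debugfs mounted at /sys/kernel/debug; only try this in a disposable VM, since a working protection will oops the kernel):

/* Sketch: fire an lkdtm trigger through debugfs. With the protection
 * active, the write should never return, as the kernel oopses while
 * attempting the userspace access. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static const char trigger[] = "ACCESS_USERSPACE";
    int fd = open("/sys/kernel/debug/provoke-crash/DIRECT", O_WRONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, trigger, strlen(trigger)) < 0)
        perror("write");
    close(fd);
    return 0;
}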

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.
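
As a rough sketch of the new API (not Chrome OS’s actual code; it assumes the process already holds CAP_NET_RAW in its permitted set and uses libcap for the inheritable-set setup):

/* Sketch: raise CAP_NET_RAW into the ambient set so it survives
 * execve() of an unprivileged helper (Linux 4.3+, link with -lcap). */
#include <sys/capability.h>
#include <sys/prctl.h>
#include <unistd.h>

static int keep_net_raw_across_exec(void)
{
    cap_value_t cap = CAP_NET_RAW;
    cap_t caps = cap_get_proc();

    /* A capability must be in the inheritable set before it can be
     * raised into the ambient set. */
    if (cap_set_flag(caps, CAP_INHERITABLE, 1, &cap, CAP_SET) ||
        cap_set_proc(caps)) {
        cap_free(caps);
        return -1;
    }
    cap_free(caps);

    /* The ambient set is preserved across execve() of normal
     * (non-setuid, non-file-caps) binaries. */
    return prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, CAP_NET_RAW, 0, 0);
}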

For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.
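
For readers unfamiliar with the interface these architectures just gained, a minimal sketch of installing a (deliberately permissive) filter looks like this; real filters inspect struct seccomp_data and reject unwanted syscalls:

/* Sketch: install a trivial allow-everything seccomp filter. A real
 * filter would examine the syscall number and architecture in
 * struct seccomp_data and return SECCOMP_RET_KILL or
 * SECCOMP_RET_ERRNO for anything unwanted. */
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
    struct sock_filter filter[] = {
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Prevent gaining privileges through setuid executables, a
     * precondition for unprivileged use of seccomp filters. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
        perror("PR_SET_NO_NEW_PRIVS");
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
        perror("PR_SET_SECCOMP");
    return 0;
}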

That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

26 September, 2016 10:54PM by kees

Reproducible builds folks

Reproducible Builds: week 74 in Stretch cycle

Here is what happened in the Reproducible Builds effort between Sunday September 18 and Saturday September 24 2016:

Outreachy

We intend to participate in Outreachy Round 13 and look forward to new enthusiastic applicants wanting to contribute to reproducible builds. We're offering four different areas to work on:

  • Improving test and debugging tools.
  • Improving reproducibility of Debian packages.
  • Improving Debian infrastructure.
  • Helping collaboration across distributions.

Reproducible Builds World summit #2

We are planning a similar event to our Athens 2015 summit and expect to reveal more information soon. If you haven't been contacted yet but would like to attend, please contact holger.

Toolchain development and fixes

Mattia uploaded dpkg/1.18.10.0~reproducible1 to our experimental repository and covered the details of the upload in a mailing list post.

The most important change is the incorporation of improvements made by Guillem Jover (the dpkg maintainer) to the .buildinfo generator, also in the hope that it will speed up the merge upstream.

One of the other relevant changes is that .buildinfo files generated from binary-only builds will no longer include the hash of the .dsc file in Checksums-Sha256, even though the specification documents it.

Even if it was considered important to include a checksum of the source package in .buildinfo, storing it that way breaks other assumptions (e.g. that Checksums-Sha256 contains only files that are part of a single upload, whereas the .dsc might not be part of that upload), so we look forward to another solution for storing the source checksum in .buildinfo.

Bugs filed

Reviews of unreproducible packages

250 package reviews have been added, 4 have been updated and 4 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

3 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (11)
  • Santiago Vila (2)

Documentation updates

h01ger created a new Jenkins job so that every commit pushed to the master branch for the website will update reproducible-builds.org.

diffoscope development

strip-nondeterminism development

reprotest development

tests.reproducible-builds.org

  • The full rebuild of all packages in unstable (for all tested archs) with the new build path variation has been completed. As a result, we are down to ~75% reproducible packages in unstable now; in comparison, for testing (where we don't vary the build path) we are still at ~90%. IRC notifications for unstable have been enabled again. (Holger)
  • Make the notes job robust against bad data (see #833695 and #833738). (Holger)
  • Set up profitbricks-build7 running stretch, as testing reproducible builds of F-Droid needs a newer version of vagrant in order to support running vagrant VMs with kvm on kvm. (Holger)
  • The misbehaving 'opi2a' armhf node has been replaced with a Jetson-TK1 board kindly donated by NVidia, using a quad-core NVIDIA Tegra K1 (Cortex-A15). (vagrant and Holger)

Misc.

This week's edition was written by Chris Lamb, Holger Levsen and Mattia Rizzolo and reviewed by a bunch of Reproducible Builds folks on IRC.

26 September, 2016 09:25PM

Rhonda D'Vine

LP

I guess you know by now that I simply love music. It is powerful, it can move you, change your mood in a lot of directions, make you wanna move your body to it (even unknowingly), and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you, thus I send three, of which the last one lets me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share them with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!

26 September, 2016 10:00AM by Rhonda

September 25, 2016

Clint Adams

Collect the towers

Why is openbmap's North American coverage so sad? Is there a reason that RadioBeacon doesn't also submit to OpenCellID? Is there a free software Android app that submits data to OpenCellID?

25 September, 2016 11:57PM

Vincent Sanders

I'll huff, and I'll puff, and I'll blow your house in

Sometimes it really helps to have a different view on a problem, and after my recent writings on my Public Suffix List (PSL) library I was fortunate to receive a suggestion from my friend Enrico Zini.

I had asked for suggestions on reducing the size of the library further, and Enrico simply suggested Huffman coding. This was a technique I had learned about long ago in connection with data compression, and the intervening years had made all the details fuzzy, which explains why it had not immediately sprung to mind.

A small subset of the Public Suffix List as stored within libnspsl

Huffman coding, named for David A. Huffman, is an algorithm that enables a very efficient representation of data. In a normal array of characters every character takes the same eight bits to represent, which is the best we can do when any of the 256 possible values is equally likely. If the data is not evenly distributed this is not the case: if the data were English text, for example, the value is fifteen times more likely to be that for e than for k.

Every step of the Huffman encoding tree build for the example string table

So if we have some data with a non-uniform distribution of probabilities, we need a way for the data to be encoded with fewer bits for the common values and more bits for the rarer values. To be efficient we would need some way of having variable length representations without storing the length separately. The term for this data representation is a prefix code and there are several ways to generate them.

Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even if they were not created using his algorithm. One can dream of becoming immortalised like this: to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm and proved it to be optimal to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.

The algorithm itself is relatively straightforward. First a frequency analysis is performed, a fancy way of saying we count how many of each character there are in the input data. Next a binary tree is created by using a priority queue initialised with the nodes sorted by frequency.

The resulting huffman tree and the binary representation of the input symbols
The counts of the two least frequent items are summed together and a node placed in the tree with the two original entries as child nodes. This step is repeated until a single node exists with a count value equal to the length of the input.

To encode data one simply walks the tree, outputting a 0 for a left branch or a 1 for a right branch until reaching the original value. This generates a mapping of values to bit sequences; the input is then simply converted value by value to the bit output. To decode, the data is used bit by bit to walk the tree until arriving at a value.
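
A minimal sketch of that decode walk, assuming a hypothetical pointer-based node type (the real library instead packs the tree into the static tables described later):

/* Sketch: decode one symbol by walking a Huffman tree bit by bit,
 * MSB first. Internal nodes always have two children in a Huffman
 * tree, so a missing left child marks a leaf. */
#include <stddef.h>

struct hnode {
    struct hnode *child[2]; /* child[0] for a 0 bit, child[1] for a 1 bit */
    int value;              /* decoded value, valid only at a leaf */
};

static int huff_decode(const struct hnode *root,
                       const unsigned char *bits, size_t *bitpos)
{
    const struct hnode *n = root;

    while (n->child[0] != NULL) {
        size_t p = (*bitpos)++;
        n = n->child[(bits[p / 8] >> (7 - p % 8)) & 1];
    }
    return n->value;
}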

If we perform this algorithm on the example string table *!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata we can reduce the 488 bits (61 * 8 bit characters) to 282 bits, a reduction of around 40%. Obviously in a real application the Huffman tree would need to be stored, which would probably exceed this saving, but for larger data sets it is probable this technique would yield excellent results on this kind of data.

Once I had proved this to myself I implemented the encoder within the existing conversion program. Although my perl encoder is not very efficient it can process the entire PSL string table (around six thousand labels using 40Kb or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.

The resulting bits were packed into 32-bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently), resulting in 18Kb of output, or 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59Kb and is actually smaller than the gzipped source data.

$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K Sep 25 23:58 test_nspsl
$ ls -al public_suffix_list.dat.gz
-rw-r--r-- 1 vince vince 62K Sep 1 08:52 public_suffix_list.dat.gz

To be clear, the statically linked program can determine whether a domain is in the PSL with no additional heap allocations, and includes the entire PSL ordered tree, the domain label string table and the Huffman decode table to read it.

An unexpected side effect is that because the decode loop is small it sits in the processor cache. This appears to cause the performance of the string comparison function huffcasecmp() (which is not locale dependent because we know the data will be limited ASCII) to be close to that of strcasecmp(); indeed on ARM32 systems there is a very modest improvement in performance.

I think this is as much work as I am willing to put into this library, but I am pleased to have achieved a result which is on par with the best of breed (libpsl still has a data representation 20Kb smaller than libnspsl but requires additional libraries for additional functionality) and I got to (re)learn an important algorithm too.

25 September, 2016 11:23PM by Vincent Sanders (noreply@blogger.com)

Julian Andres Klode

Introducing TrieHash, an order-preserving minimal perfect hash function generator for C(++)

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time in parsing various files, especially Packages files. APT currently uses a function called AlphaHash which hashes the last 8 bytes of a word in a case-insensitive manner to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: When we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: A perfect hash function matches a set of words to constant values without collisions. You can thus just use the resulting value to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive, and it did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The function achieves competitive performance with other hash functions, it even usually outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.

By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details see the documentation.

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not really do a straightforward translation; there are some optimisations to make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one uppercase and one lowercase (with one break in it). I did not know before that lowercase and uppercase characters differ by only one bit; thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. It turns out it was not faster on a good compiler, but it’s still more readable anyway.

Finally, we build one trie per word length, and switch by the word length first. Like the 32 trick, this gives a huge improvement in performance.
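
Putting those pieces together, the generated lookup ends up shaped roughly like this sketch (function and enum names here are illustrative, reusing the Tag example from above):

/* Sketch of the generator's output shape: an outer switch on length
 * dispatches to one trie per word length. Names are illustrative. */
enum PerfectKey { Unknown = -1, Tag = 0 };

static enum PerfectKey PerfectHash3(const char *string)
{
    switch (string[0] | 32) {
    case 't':
        switch (string[1] | 32) {
        case 'a':
            switch (string[2] | 32) {
            case 'g':
                return Tag;
            }
        }
    }
    return Unknown;
}

static enum PerfectKey PerfectHash(const char *string, unsigned length)
{
    switch (length) {
    case 3:
        return PerfectHash3(string); /* trie over the 3-byte words */
    /* ... one case per word length present in the dictionary ... */
    }
    return Unknown;
}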

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an or with 32
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one instruction more: On some architectures an unneeded zero-extend-byte instruction is inserted – this causes a 20% performance loss.

Performance evaluation

I ran the hash against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run-time:

host      arch     Trie  TrieCase  GPerfCase  GPerf   DJB
plummer   ppc64el   540       601       1914   2000  1345
eller     mipsel   4728      5255      12018   7837  4087
asachi    arm64    1000      1603       4333   2401  1625
asachi    armhf    1230      1350       5593   5002  1784
barriere  amd64     689       950       3218   1982  1776
x230      amd64     465       504       1200    837   693

Suffice to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as another reference.

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

Benchmark is available from https://github.com/julian-klode/hashbench

Usage

See the script for POD documentation.


25 September, 2016 06:44PM by Julian Andres Klode

Steinar H. Gunderson

Nageru @ Fyrrom

When Samfundet wanted to make their own Boiler Room spinoff (called “Fyrrom”—more or less a direct translation), it was a great opportunity to try out the new multitrack code in Nageru. After all, what can go wrong with a pretty much untested and unfinished git branch, right?

So we cobbled together a bunch of random equipment from here and there:

Video equipment

Hooked it up to Nageru:

Nageru screenshot

and together with some great work from the people actually pulling together the event, this was the result. Lots of fun.

And yes, some bugs were discovered—of course, field testing without followup patches is meaningless (that would either mean you're not actually taking your test experience into account, or that your testing gave no actionable feedback and thus was useless), so they will be fixed in due time for the 1.4.0 release.

Edit: Fixed a screenshot link.

25 September, 2016 02:01PM

Sven Hoexter

in causa wosign

Since I kind of recommended the free WoSign CA in the past, I would like to point out the issues that have piled up. Mozilla has a writeup due to a removal discussion for NSS: https://wiki.mozilla.org/CA:WoSign_Issues (Yes, I'm late with this post; it's been about a month or two by now ...)

Since WoSign, or the person behind it, also silently bought StartCom, two of the three free CAs (StartSSL and WoSign) are now in one hand, with a questionable track record. That leaves everyone looking for a low-budget option with Let's Encrypt.

25 September, 2016 10:00AM

Russ Allbery

podlators 4.08

A new release of the distribution that provides Pod::Man and Pod::Text for Perl documentation formatting.

The impetus for this release is fixing a rendering bug in Pod::Man that spewed stray bits of half-escaped *roff into the man page for the text "TRUE (1)". This turned out to be due to two interlocking bugs in the dark magic regexes that try to fix up formatting to make man pages look a bit better: incorrect double-markup in both small caps and as a man page reference, and incorrect interpretation of the string "\s0(1)". Both are fixed in this release.

podlators 4.00 changed Pod::Man to make piping POD through pod2man on standard input without providing the --name option an error, since there was no good choice for the man page title. This turned out to be too disruptive: the old behavior of tolerating this had been around for too long, and I got several bug reports. Since I think backward compatibility is extremely important for these tools, I've backed down from this change, and now Pod::Man and pod2man just silently use the man page name "STDIN" (which still fixes the original problem, since the output is now reproducible).

It is, of course, still a good idea to provide the --name option when dealing with standard input, since "STDIN" isn't a very good man page title.

This release also adds new --lquote and --rquote options to pod2man to set the quote marks independently, and removes a test that relied on a POD construct that is going to become an error in Pod::Simple.

You can get the latest release from the podlators distribution page.

25 September, 2016 02:28AM

September 24, 2016

Dirk Eddelbuettel

tint 0.0.1: Tint Is Not Tufte

A new experimental package is now on the ghrr drat. It is named tint, which stands for Tint Is Not Tufte. It provides an alternative for Tufte-style HTML presentation. I wrote a bit more on the package page and in the README in the repo -- so go read those.

Here is just a little teaser of what it looks like:

and the full underlying document is available too.

For questions or comments use the issue tracker off the GitHub repo. The package may be short-lived as its functionality may end up inside the tufte package.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 September, 2016 11:13PM

Iain R. Learmonth

Azure from Debian

Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the azure-sdk-for-python and mentioned in #debian-python that I wanted to look into this. ardumont from Software Heritage noticed, and was planning to package azure-storage-python. We joined forces and started a packaging team for Azure-related software.

I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me that this fit my use case better than the SDK alone, as it had the high level commands I was looking for.

Between me and ardumont, in the space of just under a week, we have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.

azure-cli, as we have packaged, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it I’ve been very impressed with it.

In order to set it up initially, you have to configure a couple of defaults using az configure. After that, you need to run az login, which again is an entirely painless process as long as you have a web browser handy in order to perform the login.

After those two steps, you’re only two commands away from deploying a Debian virtual machine:

az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh

This will create a resource group, then create a VM within that resource group, with a user matching your current username automatically created and your SSH public key (~/.ssh/id_rsa.pub) automatically installed. Once it returns the IP address, you can SSH in straight away.

Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap and I’ll be exploring this in the lead up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK later in the year (still being organised).

24 September, 2016 10:03PM

Ritesh Raj Sarraf

Laptop Mode Tools 1.70

I'm pleased to announce the release of Laptop Mode Tools, version 1.70. This release adds support for AHCI Runtime PM, introduced in Linux 4.6. It also includes many important bug fixes, mostly related to invocation and determination of power states.

Changelog:

1.70 - Sat Sep 24 16:51:02 IST 2016
    * Deal harder with broken battery states
    * On machines with 2+ batteries, determine states from all batteries
    * Limit status message logging frequency. Some machines tend to send
      ACPI events too often. Thanks Maciej S. Szmigiero
    * Try harder to determine power states. As reports have shown, the
      power_supply subsystem has had incorrect state reporting on many machines,
      for both, BAT and AC.
    * Relax conditional events where Laptop Mode Tools should be executed. This
      affected for use cases of Laptop being docked and undocked
      Thanks Daniel Koch.
    * CPU Hotplug settings extended
    * Cleanup states for improved Laptop Mode Tools invocation
      Thanks: Tomas Janousek
    * Align Intel P State default to what the actual driver (intel_pstate.c)
      uses
      Thanks: George Caswell and Matthew Gabeler-Lee
    * Add support for AHCI Runtime PM in module intel-sata-powermgmt
    * Many systemd and initscript fixes
    * Relax default USB device list. This avoids the long standing issues with
      USB devices (mice, keyboard) that mis-behaved during autosuspend

Source tarball and Fedora/SUSE RPM packages available at:
https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

24 September, 2016 01:55PM by Ritesh Raj Sarraf

James McCoy

Neovim enters Stretch

Last we heard from our fearless hero, Neovim, it was just entering the NEW queue. Well, a few days later it landed in experimental, and now, 8 months to the day since then, it is in Stretch.

Enjoy the fish!

24 September, 2016 03:01AM

September 23, 2016

Arturo Borrero González

Blog moved from Blogger to Jekyll at GitHub Pages


This blog has finally moved away from Blogger to Jekyll, also changing the hosting and the domain. No new content will be published here.

New coordinates:

This Blogger blog will remain as an archive, since I don't plan to migrate the content from here to the new blog.

So, see you there!


23 September, 2016 07:25AM by Arturo Borrero Gonzalez (noreply@blogger.com)

September 22, 2016

Jonathan Dowland

WadC 2.1

WadC

Today I released version 2.1 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

This comes about a year after version 2.0. The most significant change is an adjustment to the line splitting algorithm to fix a long-standing issue when you try to draw a new linedef over the top of an existing one, but in the opposite direction. Now that this bug is fixed, it's much easier to overdraw vertical or horizontal lines without needing an awareness of the direction of the original lines.

The other big changes are in the GUI, which has been cleaned up a fair bit: it now has undo/redo support, the initial window size is twice as large, and it supports internationalisation, with a partial French translation included.

This version is dedicated to the memory of Professor Seymour Papert (1928-2016), co-inventor of the LOGO programming language.

For more information see the release notes and the reference.

22 September, 2016 08:22PM

Joey Hess

keysafe beta release

After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.

22 September, 2016 08:13PM

Gustavo Noronha Silva

WebKitGTK+ 2.14 and the Web Engines Hackfest

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors, and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown to screen. It was not ready though, there were questions as to whether that was the way to go, and alternative proposals for how best to implement it were floating around.

At last year’s hackfest we had discussions about what the best path for that would be, during which collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit to figuring out how to implement it in a way that was both efficient and platform agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. On our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl() though, so we will look into that.

Another very interesting igalian addition to this release is support for the MemoryPressureHandler even on systems with no cgroups set up. The memory pressure handler is a WebKit feature which flushes caches and frees resources that are not being used when the operating system notifies it memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, it will benefit even without the custom cgroups setups as well, by having the UIProcess monitor memory usage and notify each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

22 September, 2016 05:03PM by kov

Zlatan Todorić

Open Source Motion Comic Almost Fully Funded - Pledge now!

The Pepper and Carrot motion comic is almost funded. The pledge from Ethic Cinema put it on a good road (it had seemed it would fail). Ethic Cinema is a non-profit organization that wants to make open source art (as they call it, Libre Art). Purism's creative director, François Téchené, is a member and co-founder of Ethic Cinema. Let's push the final bits so we can get this free-as-in-freedom artwork.

Notice that Pepper and Carrot is a webcomic (also available as a book) of free-as-in-freedom artwork done by David Revoy, who also supports this campaign. The Krita community also shows its support on their landing page.

Let's do this!

22 September, 2016 11:54AM by Zlatan Todoric

Junichi Uekawa

Tried creating a GCE control panel for myself.

Tried creating a GCE control panel for myself. The GCP GCE control panel takes about 20 seconds to load for me; the CPU is busy loading the page. It does so many things and is very complex. I noticed that the API isn't that slow, so I used OAuth to let me do what I usually want: list the hosts, start/stop an instance, and list the IPs. That takes 500ms instead of 20 seconds. I've put the service on App Engine. The hardest part was figuring out how this OAuth2 dance was supposed to work; all the Python documentation I had seen was somewhat outdated, and I had to rewrite it to a workable state. The document was outdated but the sample code was fixed. I had to read up on vendoring and pip and other stuff in order to get all the dependencies installed. I guess my Python App Engine skills are too rusty now.

22 September, 2016 03:04AM by Junichi Uekawa

September 21, 2016

C.J. Adams-Collier

virt-manager: cannot find suitable emulator for x86_64

Looks like I was missing qemu-kvm.

$ sudo apt-get install qemu-kvm qemu-system

21 September, 2016 06:42PM by C.J. Adams-Collier

Matthew Garrett

Microsoft aren't forcing Lenovo to block free operating systems

There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule.)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.

21 September, 2016 05:09PM

September 20, 2016

Vincent Sanders

If I see an ending, I can work backward.

Now while I am sure Arthur Miller was referring to writing a play when he said those words they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology it was a solution to an immediate problem which it has never been possible to subsequently improve.

Chocolate chip cookies are much tastier than HTTP cookies

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers this has led to a selection of unwanted side effects. The specific issue that I am talking about here is the supercookie, where the super prefix in this context has similar connotations as when applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.

As an example the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid so next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed it will also send it along for any page on another.example.foo.invalid
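
The matching rule boils down to a case-insensitive suffix check anchored on a label boundary; a minimal sketch (not NetSurf's actual code) might look like this:

/* Sketch: should a cookie with this domain be sent to this host?
 * A case-insensitive suffix match, anchored at a dot boundary. */
#include <stdbool.h>
#include <string.h>
#include <strings.h>

static bool cookie_domain_match(const char *host, const char *domain)
{
    size_t hlen = strlen(host);
    size_t dlen = strlen(domain);

    if (dlen > hlen)
        return false;
    if (strcasecmp(host + (hlen - dlen), domain) != 0)
        return false;
    /* Exact match, or the suffix starts at a label boundary so that
     * example.foo.invalid matches www.example.foo.invalid but not
     * badexample.foo.invalid. */
    return hlen == dlen || host[hlen - dlen - 1] == '.';
}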

A supercookie is simply one where instead of being limited to one sub-domain (example.foo.invalid) the cookie is set for a top level domain (foo.invalid), so when visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) your web browser gives out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil but not for countries where the domain registrar had rules about second levels like the uk domain (uk domains must have a second level like .co.uk).

NetSurf cookie manager showing a supercookie

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax and is, at time of writing, around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.

A few years ago, with ICANN allowing the great expansion of top level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At this point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at the limited existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices I came up with my own implementation, which used a perl script to construct a tree of domains from the PSL in a static array with the label strings in a separate table. At the time my implementation added 70Kb of read only data, which I thought reasonable and allowed for direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code but perl is much more readily available, is a language already used by our tooling and we could always simply ship the generated file. As long as the generated file was updated at release time as we already do for our fallback SSL certificate root set this would be acceptable.

Wireshark session showing NetSurf sending a co.uk supercookie to bbc.co.uk
I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror there might be a deeper issue.

It quickly became evident the PSL generation was broken and had been for a long time; even worse, somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation a bit more broadly and came to the conclusion there were a number of interconnected causes, centered around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries which could be used separately to the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make API more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using this library given it managed a data representation within a mere 32Kb.

Unfortunately the library integrates a great deal of IDN and punycode handling which was not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode encoded domain names internally only translating to unicode representations for display so duplicating this functionality using other libraries requires a great deal of resource above the raw data representation.

I put the library together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written and now the trivial test program of the library was weighing in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of the domain label strings was moved into the unused padding space within each tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me that, while building the domain label string table, if the label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.

The second cause was a little harder to address. The structure representing nodes in the tree that I started with looked reasonable at first.

struct pnode {
    uint16_t label_index;      /* index into string table of label */
    uint16_t label_length;     /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. By allowing two types of node, one for labels and a second for the child node information, the node size would be halved in most cases, requiring only a modest change to the tree traversal code.

The only issue with this would be finding a way to indicate that a node has child information. It was realised that domain labels can have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. The space was split into two uint8_t parts, one for the length and one for a flag to indicate that a child data node follows.

union pnode {
    struct {
        uint16_t index;       /* index into string table of label */
        uint8_t length;       /* length of label */
        uint8_t has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index;  /* index of first child node */
        uint16_t node_count;  /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try to pack the label length and has_children flag into a single byte, but such packing will not reduce the length of a node below 32 bits because it is unioned with the child structure.

A possibility of using the spare uint8_t derived by bitfield packing to store an additional label node in three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.

With the changes incorporated the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefits of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.
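
To make the traversal concrete, a lookup might search one node's children with something like the following sketch; the helper is hypothetical and a plain string table stands in for the real Huffman-coded labels:

/* Sketch: search the children described by a child info node for a
 * label, using the pnodes[] layout shown above. A plain string table
 * stands in for the real Huffman-coded one. */
#include <stdint.h>
#include <string.h>
#include <strings.h>

extern const union pnode pnodes[];
extern const char label_table[];

static int find_child(const union pnode *child_info,
                      const char *label, size_t len)
{
    uint16_t index = child_info->child.node_index;
    uint16_t count = child_info->child.node_count;

    while (count-- > 0) {
        const union pnode *n = &pnodes[index];

        if (n->label.length == len &&
            strncasecmp(label_table + n->label.index, label, len) == 0)
            return index; /* matching label node found */
        /* Step over this label node, plus its child info node if the
         * has_children flag says one follows. */
        index += n->label.has_children ? 2 : 1;
    }
    return -1; /* no such child */
}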

This episode just reminded me of the dangers of code that can fail silently. It exposed our users to a security problem that we thought had been addressed almost six years ago and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

20 September, 2016 09:12PM by Vincent Sanders (noreply@blogger.com)

Gunnar Wolf

Proposing a GR to repeal the 2005 vote for declassification of the debian-private mailing list

For the non-Debian people among my readers: The following post presents bits of the decision-taking process in the Debian project. You might find it interesting, or terribly dull and boring :-) Proceed at your own risk.

My reason for posting this entry is to get more people to read the accompanying options for my proposed General Resolution (GR), and have as full a ballot as possible.

Almost three weeks ago, I sent a mail to the debian-vote mailing list. I'm quoting it here in full:

Some weeks ago, Nicolas Dandrimont proposed a GR for declassifying
debian-private[1]. In the course of the following discussion, he
accepted[2] Don Armstrong's amendment[3], which intended to clarify the
meaning and implementation regarding the work of our delegates and the
powers of the DPL, and recognizing the historical value that could lie
within said list.

[1] https://www.debian.org/vote/2016/vote_002
[2] https://lists.debian.org/debian-vote/2016/07/msg00108.html
[3] https://lists.debian.org/debian-vote/2016/07/msg00078.html

In the process of the discussion, several people objected to the
amended wording, particularly to the fact that "sufficient time and
opportunity" might not be sufficiently bound and defined.

I am, as some of its initial seconders, a strong believer in Nicolas'
original proposal; repealing a GR that was never implemented in the
slightest way basically means the Debian project should stop lying,
both to itself and to the whole free software community within which
it exists, about something that would be nice but is effectively not
implementable.

While Don's proposal is a good contribution, given that in the
aforementioned GR "Further Discussion" won 134 votes against 118, I
hereby propose the following General Resolution:

=== BEGIN GR TEXT ===

Title: Acknowledge that the debian-private list will remain private.

1. The 2005 General Resolution titled "Declassification of debian-private
   list archives" is repealed.
2. In keeping with paragraph 3 of the Debian Social Contract, Debian
   Developers are strongly encouraged to use the debian-private mailing
   list only for discussions that should not be disclosed.

=== END GR TEXT ===

Thanks for your consideration,
--
Gunnar Wolf
(with thanks to Nicolas for writing the entirety of the GR text ;-) )

Yesterday, I spoke with the Debian project secretary, who confirmed my proposal has reached enough seconds (that is, we have reached five people wanting the vote to happen), so I could now formally do a call for votes. Thing is, there are two other proposals I feel are interesting and should be part of the same ballot, both of which address part of the reasons why the GR initially proposed by Nicolas didn't succeed:

So, once more (and finally!), why am I posting this?

  • To invite Iain to formally propose his text as an option to mine
  • To invite more DDs to second the available options
  • To publicize the ongoing discussion

I plan to do the formal call for votes by Friday the 23rd.
[update] Kurt informed me that the discussion period started yesterday, when I received the 5th second. The minimum discussion period is two weeks, so I will be doing a call for votes at or after 2016-10-03.

20 September, 2016 04:03PM by gwolf

hackergotchi for Michal Čihař

Michal Čihař

wlc 0.6

wlc 0.6, a command line utility for Weblate, has just been released. There have been some minor fixes, but the most important news is that Windows and OS X are now supported platforms as well.

Full list of changes:

  • Fixed error when invoked without command.
  • Tested on Windows and OS X (in addition to Linux).

wlc is built on the API introduced in Weblate 2.6, which is still in development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version is 2.7 (it is now running on both the demo and hosted servers). You can find usage examples in the wlc documentation.

Filed under: Debian English SUSE Weblate | 0 comments

20 September, 2016 04:00PM

Reproducible builds folks

Reproducible Builds: week 73 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 11 and Saturday September 17 2016:

Toolchain developments

Ximin Luo started a new series of tools called (for now) debrepatch, to make it easier to automate checks that our old patches to Debian packages still apply to newer versions of those packages, and still make these reproducible.

Ximin Luo updated one of our few remaining patches for dpkg in #787980 to make it cleaner and more minimal.

The following tools were fixed to produce reproducible output:

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

The following 3 packages were not changed, but have become reproducible due to changes in their build-dependencies: jaxrs-api python-lua zope-mysqlda.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

462 package reviews have been added, 524 have been updated and 166 have been removed this week, adding to our knowledge about identified issues.

25 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (10)
  • Filip Pytloun (1)
  • Santiago Vila (1)

diffoscope development

A new version of diffoscope 60 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Various packaging and testing improvements.
  • HW42:
    • minor wording fixes
  • Reiner Herrmann:
    • minor wording fixes

It also included changes from previous weeks; see either the changes or commits linked above, or previous blog posts 72, 71, 70.

strip-nondeterminism development

New versions of strip-nondeterminism 0.027-1 and 0.028-1 were uploaded to unstable by Chris Lamb. They included contributions from:

  • Chris Lamb:
    • Testing improvements, including better handling of timezones.

disorderfs development

A new version of disorderfs 0.5.1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Andrew Ayer and Chris Lamb:
    • Support relative paths for ROOTDIR; it no longer needs to be an absolute path.
  • Chris Lamb:
    • Print the behaviour (shuffle/reverse/sort) on startup to stdout.

It also included changes from previous weeks; see either the changes or commits linked above, or previous blog post 70.

Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

20 September, 2016 12:58PM

September 19, 2016

Mike Gabriel

Rocrail changed License to some dodgy non-free non-License

The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.

Some weeks ago, I remembered Rocrail and thought... hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it had been hidden from the web some time in 2015 and that the license had obviously been changed to some non-free license (I could not figure out which license, though).

This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a non-option for me back in 2015.

I should have moved on from here on...

Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) whether they saw any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past: the developers chose the GPL, later stepped back from that decision, and from then on have been hiding the source code from the web entirely.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.

The replacement happened with these Git commits:

commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200

    update copyrights

commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200

    update licence

commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200

    update

Getting in touch again, still being really interested and wanting to help...

As I consider such a non-license really dangerous when distributing any sort of software, be it Free or non-free, I posted the below text on the Rocrail forum:

Hi Rob,

I just stumbled over this post [3] [link reference adapted for this blog post], which probably is the one you have referred to above.

It seems that Rocrail contains features that require a key or such for permanent activation. Basically, this is allowed and possible even with the GPL-3+ (although Free Software activists will not appreciate that). As the GPL states that people can share the source code, programmers can easily deactivate license key checks (and such) in the code and re-distribute that patchset as they like.

Furthermore, the current COPYING file is really non-protective at all. It does not really protect you as copyright holder of the code. Meaning, if people crash their trains with your software, you could actually be legally prosecuted for that. In theory. Or in the U.S. ( ;-) ). The main reason for having a long license text is to protect you as the author in case your software causes trouble to other people. You do not have any warranty disclaimer in your COPYING file or elsewhere. Really not a good idea.

In that referenced post above, someone also writes about the nuisance of license discussions in this forum. I have seen various cases where people produced software and did not really care for licensing. Some ended with a letter from a lawyer, some with some BIG company using their code under their copyright holdership and their own commercial licensing scheme. This is not paranoia, this is what happens in the Free Software world from time to time.

A model that might be much more appropriate (and more protective to you as the author), maybe, is a dual release scheme for the code. A possible approach could be to split Rocrail into two editions: Community Edition and Professional/Commercial Edition. The Community Edition must be licensed in a way that allows re-using the code in a closed-source, non-free version of Rocrail (e.g. MIT/Expat License or Apache-2.0 License). Thus, the code base belonging to the Community Edition would be licensed, say..., as Apache-2.0, and for the extra features in the Commercial Edition, you may use any non-free license you want (but please not that COPYING file you have now, it really does not protect your copyright holdership).

The reason for releasing (a reduced set of features of a) software as Free Software is to extend the user base. The honey jar effect, as practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.). If people could install Rocrail from the Debian / Ubuntu archives directly, I am sure that the user base of Rocrail will increase. There may also be developers popping up showing an interest in Rocrail (e.g. like me). However, I know many FLOSS developers (e.g. like me) that won't waste their free time on working for a non-free piece of software (without being paid).

If you follow (or want to follow) a business model with Rocrail, then keep some interesting features in the Commercial Edition and don't ship that source code. People with deep interest may opt for that.

Furthermore, another option could be dual licensing the code. As the copyright holder of Rocrail you are free to juggle with licenses and apply any license to a release you want. For example, this can be interesting for a free-again Rocrail being shipped via Apple's iStore.

Last but not least, as you ship the complete source code with all previous changes as a Git project to those who request GitBlit access, it is possible to obtain all earlier versions of Rocrail. In the mail I received with my GitBlit credentials, there was some text that prohibits publishing the code. Fine. But: (in theory) it is not forbidden to share the code with a friend, for local usage. This friend finds the COPYING file, frowns and rewinds back to 2015 where the license was still GPL-3+. GPL-3+ code can be shared with anyone and also published, so this friend could upload the 2015-version of Rocrail to Github or such and start to work on a free fork. You also may not want this.

Thanks for working on this piece of software! It is highly interesting, and I am still sad that it does not come with a free license anymore. I won't continue this discussion and move on, unless you are interested in any of the above information and ask for more expertise. Ping me here or directly via mail, if needed. If the expertise leads to parts of Rocrail becoming Free Software again, the expertise is offered free of charge ;-).

light+love
Mike

Wow, the first time I got moderated somewhere... What an experience!

This experience was really new to me. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered I had been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...

The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):

Mike,

I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.

(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post to suppress my words, without replying or offering any sort of apology, really is a no-go. And as for the reason given for wiping the rest of the text... any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.

Now the political part of this blog post...

Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Especially if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I came to this morning.

[Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another–this time highly idiotic–wind of change, where free speech may end up as a precious good.]

To other (Debian) Package Maintainers and Railroad Enthusiasts...

With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

Now really moving on...

Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.

I have finally removed the Rocrail source code from my computer again without building and testing the application. Nor have I shared the source code or the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in this completely...

References

light+love,
Mike

19 September, 2016 09:51AM by sunweaver

September 18, 2016

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/37

we're not running out of (perl-related) RC bugs. here's my list for this week:

  • #811672 – qt4-perl: "FTBFS with GCC 6: cannot convert x to y"
    add patch from upstream bug tracker, upload to DELAYED/5
  • #815433 – libdata-messagepack-stream-perl: "libdata-messagepack-stream-perl: FTBFS with new msgpack-c library"
    upload new upstream version (pkg-perl)
  • #834249 – src:openbabel: "openbabel: FTBFS in testing"
    propose a patch (build with -std=gnu++98), later upload to DELAYED/2
  • #834960 – src:libdaemon-generic-perl: "libdaemon-generic-perl: FTBFS too much often (failing tests)"
    add patch from ntyni (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    upload with patch from dkg (pkg-perl)
  • #835412 – src:libzmq-ffi-perl: "libzmq-ffi-perl: FTBFS too much often, makes sbuild to hang"
    add patch from upstream git (pkg-perl)
  • #835731 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    cherry-pick patch from upstream git (pkg-perl)
  • #837055 – src:fftw: "fftw: FTBFS due to bfnnconv.pl failing to execute m-ascii.pl (. removed from @INC in perl)"
    add patch to call require with "./", upload to DELAYED/2, rescheduled to 0-day on maintainer's request
  • #837221 – src:metacity-themes: "metacity-themes: FTBFS: Can't locate debian/themedata.pm in @INC"
    call helper scripts with "perl -I." in debian/rules, QA upload
  • #837242 – src:jwchat: "jwchat: FTBFS: Can't locate scripts/JWCI18N.pm in @INC"
    add patch to call require with "./", upload to DELAYED/2
  • #837264 – src:libsys-info-base-perl: "libsys-info-base-perl: FTBFS: Couldn't do SPEC: No such file or directory at builder/lib/Build.pm line 42."
    upload with patch from ntyni (pkg-perl)
  • #837284 – src:libimage-info-perl: "libimage-info-perl: FTBFS: Can't locate inc/Module/Install.pm in @INC"
    call perl with -I. in debian/rules, upload to DELAYED/2

18 September, 2016 09:22PM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

DNSync

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was that I’d have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY signals, and sync the records to AWS Route 53.
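
For a flavour of what such a sync involves, here is a minimal Python sketch of the same idea (DNSync itself is written in Go). The hosted zone ID and domain below are hypothetical placeholders, and plain polling stands in for the inotify watch:

import time
import boto3

ZONE_ID = "Z0000000EXAMPLE"  # hypothetical Route 53 hosted zone ID
DOMAIN = "lan.example.com."  # hypothetical domain for LAN names
LEASES = "/var/lib/misc/dnsmasq.leases"

def parse_leases(path):
    # dnsmasq.leases lines look like: "<expiry> <mac> <ip> <hostname> <client-id>"
    records = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[3] != "*":
                records[fields[3] + "." + DOMAIN] = fields[2]
    return records

def sync(records):
    # UPSERT one A record per lease into Route 53.
    r53 = boto3.client("route53")
    changes = [{"Action": "UPSERT",
                "ResourceRecordSet": {"Name": name, "Type": "A", "TTL": 300,
                                      "ResourceRecords": [{"Value": ip}]}}
               for name, ip in records.items()]
    if changes:
        r53.change_resource_record_sets(HostedZoneId=ZONE_ID,
                                        ChangeBatch={"Changes": changes})

last = None
while True:
    current = parse_leases(LEASES)
    if current != last:  # cheap stand-in for the inotify IN_MODIFY signal
        sync(current)
        last = current
    time.sleep(10)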

I pushed it up to my GitHub as DNSync.

PRs welcome!

18 September, 2016 09:00PM

hackergotchi for Eriberto Mota

Eriberto Mota

Statistics to Choose a Debian Package to Help

In the last week I played a bit with UDD (Ultimate Debian Database). After some experiments I wrote a script to generate a daily report about source packages in Debian. This report is useful for choosing a package that needs help.

The daily report has six sections:

  • Sources in Debian Sid (including orphan)
  • Sources in Debian Sid (only Orphan, RFA or RFH)
  • Top 200 sources in Debian Sid with outdated Standards-Version
  • Top 200 sources in Debian Sid with NMUs
  • Top 200 sources in Debian Sid with BUGs
  • Top 200 sources in Debian Sid with RC BUGs

The first section has several important pieces of data about all source packages in Debian, ordered by last upload to Sid. It is very useful for spotting packages without revisions for a long time. Other interesting data about each package include the Standards-Version, packaging format and number of NMUs, among others. Believe it or not, there are packages whose last upload to Sid was in 2003! (seven packages)

With the report, you can choose an ideal package for QA uploads, NMUs, or adoption.

Well, if you like to review packages, this report is for you: https://people.debian.org/~eriberto/eriberto_stats.html. Enjoy!
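
For those curious what a query against UDD looks like, here is a hedged Python sketch using the public read-only UDD mirror; the table and column names are my assumption of the sort of thing such a report queries, not Eriberto's actual script:

import psycopg2

# The public read-only UDD mirror publishes these credentials.
conn = psycopg2.connect(host="udd-mirror.debian.net", dbname="udd",
                        user="udd-mirror", password="udd-mirror")
cur = conn.cursor()

# Assumed schema: upload_history(source, version, date, ...).
# List the source packages whose last upload is oldest.
cur.execute("""
    SELECT source, max(date) AS last_upload
    FROM upload_history
    GROUP BY source
    ORDER BY last_upload ASC
    LIMIT 20
""")
for source, last_upload in cur.fetchall():
    print(source, last_upload)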


18 September, 2016 03:30AM by Eriberto

hackergotchi for Norbert Preining

Norbert Preining

Fixing packages for broken Gtk3

As mentioned on sunweaver's blog (Debian's GTK-3+ v3.21 breaks Debian MATE 1.14), Gtk3 is breaking apps all around. But not only MATE: probably many other apps are broken too. In particular, Nemo (the file manager of the Cinnamon desktop) has redraw issues (bug 836908) and regular crashes (bug 835043).

gtk-breakage

I have prepared packages for mate-terminal and nemo built from the most recent git sources. The new mate-terminal now does not crash anymore on profile changes (bug 835188), and the nemo redraw issues are gone. Unfortunately, the other crashes of nemo are still there. The apt-gettable repository with sources and amd64 binaries is here:

deb http://www.preining.info/debian/ gtk3fixes main
deb-src http://www.preining.info/debian/ gtk3fixes main

and are signed with my usual GPG key.

Last but not least, I quote from sunweaver’s blog:

Questions

  1. Isn’t GTK-3+ a shared library? This one was rhetorical… Yes, it is.
  2. One that breaks other application with every point release? Well, unfortunately, as experience over the past years has shown: Yes, this has happened several times, so far — and it happened again.
  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

(end of quote)

<rant>
My personal answer to this is: Gtk is strongly related to Gnome, Gnome is strongly related to SystemD, all this is pushed onto Debian users in the usual way of “we don’t care for breaking non-XXX apps” (for XXX in Gnome, SystemD). It is very sad to see this recklessness taking more and more space all over Debian.
</rant>

I finish with another quote from sunweaver’s blog:

already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series

18 September, 2016 02:31AM by Norbert Preining

September 17, 2016

Jonas Meurer

apache rewritemap querystring

Apache2: Rewrite REQUEST_URI based on a bulk list of GET parameters in QUERY_STRING

Recently I searched for a solution to rewrite a REQUEST_URI based on GET parameters in QUERY_STRING. To make it even more complicated, I had a list of ~2000 parameters that had to be rewritten like the following:

if %{QUERY_STRING} starts with one of <parameters>:
    rewrite %{REQUEST_URI} from /new/ to /old/

Honestly, it took me several hours to find a solution that was satisfying and scales well. Hopefully, this post will save time for others with the need for a similar solution.

Research and first attempt: RewriteCond %{QUERY_STRING} ...

After reading through some documentation, particularly Manipulating the Query String, the following ideas came to my mind at first:

RewriteCond %{REQUEST_URI} ^/new/
RewriteCond %{QUERY_STRING} ^(param1)(.*)$ [OR]
RewriteCond %{QUERY_STRING} ^(param2)(.*)$ [OR]
...
RewriteCond %{QUERY_STRING} ^(paramN)(.*)$
RewriteRule /new/ /old/?%1%2 [R,L]

or instead of an own RewriteCond for each parameter:

RewriteCond %{QUERY_STRING} ^(param1|param2|...|paramN)(.*)$

There has to be something smarter ...

But with ~2000 parameters to look up, neither of the solutions seemed particularly smart. Both scale really badly, and it's probably rather heavy for Apache to check ~2000 conditions for every ^/new/ request.

Instead I was searching for a solution to look up a string in a compiled list of strings. RewriteMap seemed like it might be what I was searching for. I read the Apache2 RewriteMap documentation here and here and finally found a solution that worked as expected, with one limitation. But read on ...

The solution: RewriteMap and RewriteCond ${mapfile:%{QUERY_STRING}} ...

Finally, the solution was to use a RewriteMap with all parameters that shall be rewritten, and to check the parameters given in requests against this map within a RewriteCond. If a parameter matches, the simple RewriteRule applies.

For the impatient, here's the rewrite magic from my VirtualHost configuration:

RewriteEngine On
RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"
RewriteCond %{REQUEST_URI} ^/new/
RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND
RewriteRule ^/new/ /old/ [R,L]

A more detailed description of the solution

First, I created a RewriteMap at /tmp/rewrite-params.txt with all parameters to be rewritten. A RewriteMap requires two fields per line, one with the origin and the other with the replacement part. Since I use the RewriteMap merely for checking the condition, not for real string replacement, the second field doesn't matter to me. I ended up putting my parameters in both fields, but you could choose any value for the second field:

/tmp/rewrite-params.txt:

param1 param1
param2 param2
...
paramN paramN

Then I created a DBM hash map file from that plain text map file, as DBM maps are indexed, while TXT maps are not. In other words: with big maps, DBM is a huge performance boost:

httxt2dbm -i /tmp/rewrite-params.txt -o /tmp/rewrite-params.map
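
Writing a ~2000-line map file by hand is not realistic, so generating it can be scripted. A small hypothetical Python sketch, assuming the parameters live one per line in a file called params.txt:

# Generate the two-field RewriteMap text file from a plain parameter list.
with open("params.txt") as src, open("/tmp/rewrite-params.txt", "w") as dst:
    for line in src:
        param = line.strip()
        if param:
            # The second field is unused by the condition check,
            # so we simply repeat the parameter name.
            dst.write(param + " " + param + "\n")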

Now, let's go through the VirtualHost configuration rewrite magic from above line by line. First line should be clear: it enables the Apache Rewrite Engine:

RewriteEngine On

Second line defines the RewriteMap that I created above. It contains the list of parameters to be rewritten:

RewriteMap RewriteParams "dbm:/tmp/rewrite-params.map"

The third line limits the rewrites to REQUEST_URIs that start with /new/. This is particularly required to prevent rewrite loops. Without that condition, queries that have been rewritten to /old/ would go through the rewrite again, resulting in an endless rewrite loop:

RewriteCond %{REQUEST_URI} ^/new/

The fourth line is the core condition: it checks whether the QUERY_STRING (the GET parameters) is listed in the RewriteMap. A fallback value 'NOT_FOUND' is returned if the lookup doesn't match. The condition is only true if the lookup was successful and the QUERY_STRING was found within the map:

RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND

The last line is a simple RewriteRule from /new/ to /old/. It is executed only if all previous conditions are met. The flags are R for redirect (issuing an HTTP redirect to the browser) and L for last (causing mod_rewrite to stop processing immediately after that rule):

RewriteRule ^/new/ /old/ [R,L]

Known issues

A big limitation of this solution (compared to the ones above) is that it looks up the whole QUERY_STRING in the RewriteMap. Therefore, it works only if the parameter is the only GET parameter. With additional GET parameters, the second rewrite condition fails and nothing is rewritten, even if the first GET parameter is listed in the RewriteMap.

If anyone comes up with a solution to this limitation, I would be glad to learn about it :)

17 September, 2016 03:52PM

hackergotchi for Norbert Preining

Norbert Preining

Android 7.0 Nougat – Root – PokemonGo

Since my switch to Android my Nexus 6p is rooted and I have happily fixed the Android (<7) font errors with Japanese fonts in English environment (see this post). The recently released Android 7 Nougat finally fixes this problem, so it was high time to update.

In addition, a recent update to Pokemon Go excluded rooted devices, so I was searching for a solution that allows me to: update to Nougat, keep root, and run PokemonGo (as well as some bank security apps etc).

android-nougat-root-poke

After some playing around here are the steps I took:

Installation of necessary components

Warning: The following is for Nexus6p device, you need different image files and TWRP recovery for other devices.

Flash Nougat firmware images

Get it from the Google Android Nexus images web site; unpacking the zip and the zip included inside it gives a lot of img files.

unzip angler-nrd90u-factory-7c9b6a2b.zip
cd angler-nrd90u/
unzip image-angler-nrd90u.zip

As I don’t want my user partition to get flashed, I did not use the included flash script, but did it manually:

fastboot flash bootloader bootloader-angler-angler-03.58.img
fastboot reboot-bootloader
sleep 5
fastboot flash radio radio-angler-angler-03.72.img
fastboot reboot-bootloader
sleep 5
fastboot erase system
fastboot flash system system.img
fastboot erase boot
fastboot flash boot boot.img
fastboot erase cache
fastboot flash cache cache.img
fastboot erase vendor
fastboot flash vendor vendor.img
fastboot erase recovery
fastboot flash recovery recovery.img
fastboot reboot

After that, boot into the normal system and let it do all the necessary upgrades. Once this is done, let us prepare for systemless root and possibly hiding it.

Get the necessary files

Get Magisk, SuperSU-magisk, as well as the Magisk-Manager.apk from this forum thread (direct links as of 2016/9: Magisk-v6.zip, SuperSU-v2.76-magisk.zip, Magisk-Manager.apk).

Transfer these files to your device – I am using an external USB stick that can be plugged into the device, but you can also copy them via your computer or via a cloud service.

Also we need to get a custom recovery image; I am using TWRP. I first tried the version 3.0.2-0 of TWRP I had already available, but that version didn't manage to decrypt the file system and hung. One needs at least version 3.0.2-2 from the TWRP web site.

Install latest TWRP recovery

Reboot into boot-loader, then use fastboot to flash twrp:

fastboot erase recovery
fastboot flash recovery twrp-3.0.2-2-angler.img
fastboot reboot-bootloader

After that, select Recovery with the up/down buttons and start TWRP. You will be asked for your PIN if you have one set.

Install Magisk-v6.zip

Select “Install” in TWRP, select the Magisk-v6.zip file, and see your device being prepared for systemless root.

Install SuperSU, Magisk version

Again, boot into TWRP and use the install tool to install SuperSU-v2.76-magisk.zip. After reboot you should have a SuperSU binary running.

Install the Magisk Manager

From your device browse to the .apk and install it.

How to run safety net programs

Those programs that check for safety functions (Pokemon Go, Android Pay, several bank apps) need root disabled. Open the Magisk Manager and switch the root switch to the left (off). After this, starting the program should get you past the safety check.

17 September, 2016 04:59AM by Norbert Preining

September 16, 2016

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

BBR opensourced

This is pretty big stuff for anyone who cares about TCP. Huge congrats to the team at Google.

16 September, 2016 10:16PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.0.2: Added functionality

anytime arrived on CRAN via release 0.0.1 a good two days ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

This new release 0.0.2 adds two new functions to gather conversion formats -- and set new ones. It also fixes a minor build bug, and robustifies a conversion which was seen to be not quite right under some time zones.

The NEWS file summarises the release:

Changes in anytime version 0.0.2 (2016-09-15)

  • Refactored to use a simple class wrapped around two vectors with (string) formats and locales; this allows for adding formats; also adds accessors for formats (#4, closes #1 and #3).

  • New function addFormats() and getFormats().

  • Relaxed one test which showed problems on some platforms.

  • Added as.POSIXlt() step to anydate() ensuring all POSIXlt components are set (#6 fixing #5).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 September, 2016 02:28AM

September 15, 2016

Craig Sanders

Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It’s Alive!

The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.

Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.

I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), they will have to upgrade libc6 and related packages inside the container (including any -dev packages, such as libc6-dev, that depend on the specific version of libc6).

In my case, I was using docker but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.

To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK…and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.

I started with the base wheezy image we’re using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

APT::Default-Release "wheezy";

Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.

I then installed the following:

apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev

To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie anyway).

Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed) and imported it into docker there and tested it to make sure it didn’t have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.

That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.

When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. It’s on the TODO list at work, but everyone else is too busy – a perfect job for an unpaid volunteer. Wheezy’s getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.

Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24 is a post from: Errata

15 September, 2016 04:24PM by cas

September 14, 2016

Mike Gabriel

[Arctica Project] Release of nx-libs (version 3.5.99.1)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Tuesday, Sep 13th, version 3.5.99.1 of nx-libs has been released [1].

This release brings some code cleanups regarding displayed copyright information and an improvement when reconnecting to an already running session from an X11 server with a color depth setup that is different from that of the X11 server where the NX/X11 session was originally created. Furthermore, an issue reported to the X2Go developers has been fixed that caused problems on Windows clients during copy+paste actions between the NX/X11 session and the underlying MS Windows system. For details see X2Go BTS, Bug #952 [3].

Change Log

A list of recent changes (since 3.5.99.0) can be obtained from here.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

References

14 September, 2016 02:20PM by sunweaver

September 13, 2016

John Goerzen

Two Boys, An Airplane, Plus Hundreds of Old Computers

“Was there anything you didn’t like about our trip?”

Jacob’s answer: “That we had to leave so soon!”

That’s always a good sign.

When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn’t really think my boys would be interested.

But I mentioned it to them, and they just lit up. They remembered the Youtube videos I’d shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too — and it was.

It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time.

IMG_20160911_061456

(Jacob modeling his new t-shirt)

Captain Jacob

Chicago being not all that close to Kansas, I planned to fly us there. If you’re flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way.

Here’s Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing the calculations for us. We measured the entire route and came to within one mile of the computer’s calculation for each segment — those boys are precise!

20160904_175519

We figured out how much fuel we’d use, where we’d make fuel stops, etc.

The day of our flight, we made it as far as Davenport, Iowa when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure!

Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: “DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING”

Here he is at the Davenport airport, still busy looking at his maps:

IMG_20160909_183813

Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together.

Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me – a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers

Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this old Commodore plotter do its thing (click for video):

VID_20160910_142044

His other favorite thing was the phones. Several people had brought complete analog PBXs with them. They used them to demonstrate various old phone-related hardware; one had several BBSs running with actual modems, another had old answering machines and home-security devices. Jacob learned a lot about phones, including how to operate a rotary-dial phone, which he’d never used before!

IMG_20160910_151431

Oliver was drawn more to the old computers. He was fascinated by the IBM PC XT, which I explained was just about like a model I used to get to use sometimes. They learned about floppy disks and how computers store information.

IMG_20160910_195145

He hadn’t used joysticks much, and found Pong (“this is a soccer game!”) interesting. Somebody had also replaced the guts of a TRS-80 with a Raspberry Pi running a SNES emulator. This had thoroughly confused me for a little while, and excited Oliver.

Jacob enjoyed an old TRS-80, which, through a modern Ethernet interface and a little computation help in AWS, provided an interface to Wikipedia. Jacob figured out the text-mode interface quickly. Here he is reading up on trains.

IMG_20160910_140524

I had no idea that Commodore made a lot of adding machines and calculators before they got into the home computer business. There was a vast table with that older Commodore hardware, too much to get on a single photo. But some of the adding machines had their covers off, so the boys got to see all the little gears and wheels and learn how an adding machine can do its printing.

IMG_20160910_145911

And then we get to my favorite: the big iron. Here is a VAX — a working VAX. When you have a computer that huge, it’s easier for the kids to understand just what something is.

IMG_20160910_125451

When we encountered the table from the Glenside Color Computer Club, featuring the good old CoCo IIs like what I used as a kid (and have up in my attic), I pointed out to the boys that “we have a computer just like this that can do these things” — and they responded “wow!” I think they are eager to try out floppy disks and disk BASIC now.

Some of my favorites were the old Unix systems, which are a direct ancestor to what I’ve been working with for decades now. Here’s AT&T System V release 3 running on its original hardware:

IMG_20160910_144923

And there were a couple of Sun workstations there, making me nostalgic for my college days. If memory serves, this one is actually running on m68k in the pre-Sparc days:

IMG_20160910_153418

Returning home

After all the excitement of the weekend, both boys zonked out for awhile on the flight back home. Here’s Jacob, sleeping with his maps still up.

IMG_20160911_132952

As we were nearly home, we hit a pocket of turbulence, the kind that feels as if the plane is dropping a bit (it’s perfectly normal and safe; you’ve probably felt that on commercial flights too). I was a bit concerned about Oliver; he is known to get motion sick in cars (and even planes sometimes). But what did I hear from Oliver?

“Whee! That was fun! It felt like a roller coaster! Do it again, dad!”

13 September, 2016 05:03PM by John Goerzen

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.0.1: New package for 'anything' to POSIXct (or Date)

anytime just arrived on CRAN as a very first release 0.0.1.

So why (yet another) package dealing with dates and times? R excels at computing with dates and times. By using a typed representation we not only get all that functionality but also the added safety stemming from proper representation.

But there is a small nuisance cost: How often have we each told as.POSIXct() that the origin is epoch '1970-01-01'? Do we have to say it a million more times? Similarly, when parsing dates that are in some recognisable form of the YYYYMMDD format, do we really have to manually convert from integer or numeric or factor or ordered to character first? Having one of several common separators and/or date / time month forms (YYYY-MM-DD, YYYY/MM/DD, YYYYMMDD, YYYY-mon-DD and so on, with or without times, with or without textual months and so on), do we really need a format string?

anytime() aims to help as a small general purpose converter returning a proper POSIXct (or Date) object no matter the input (provided it was somewhat parseable), relying on Boost date_time for the (efficient, performant) conversion.

See some examples on the anytime page or the GitHub README.md, or in the screenshot below. And then just give it a try!

anytime examples

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 September, 2016 12:26PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 140 work hours have been dispatched among 10 paid contributors. Their reports are available:

  • Balint Reczey did 9.5 hours (out of 14.75 hours allocated + 2 remaining, thus keeping 7.25 extra hours for September).
  • Ben Hutchings did 14 hours (out of 14.75 hours allocated + 0.7 remaining, keeping 1.45 extra hours for September) but he has not published his report yet.
  • Brian May did 14.75 hours.
  • Chris Lamb did 15 hours (out of 14.75 hours, thus keeping 0.45 hours for next month).
  • Emilio Pozuelo Monfort did 13.5 hours (out of 14.75 hours allocated + 0.5 remaining, thus keeping 2.95 hours extra hours for September).
  • Guido Günther did 9 hours.
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 15.2 hours (out of 14.5 hours assigned + 0.7 remaining).
  • Roberto C. Sanchez did 11 hours (out of 14.75h allocated, thus keeping 3.75 extra hours for September).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours rose to 167 hours per month thanks to UR Communications BV joining as gold sponsor (funding 1 day of work per month)!

In practice, we never distributed this amount of work per month because some sponsors did not renew in time and some of them might not even be able to renew at all.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 29. It’s a small bump compared to last month, but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.


13 September, 2016 08:50AM by Raphaël Hertzog

hackergotchi for Joey Hess

Joey Hess

PoW bucket bloom: throttling anonymous clients with proof of work, token buckets, and bloom filters

An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files?

The last thing I want is a git-annex keysafe special remote. ;-)

I've done a mash-up of three technologies to solve this, that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.


Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.
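
In rough Python terms (a sketch only: keysafe is written in Haskell and, as noted further down, uses argon2 rather than the SHA-256 used here for brevity), the server's check and the client's work loop look like this:

import hashlib

def pow_valid(salt, proof, difficulty):
    # Valid if hash(salt || proof) starts with `difficulty` zero hex digits.
    digest = hashlib.sha256(salt + proof).hexdigest()
    return digest.startswith("0" * difficulty)

def pow_work(salt, difficulty):
    # The client brute-forces candidate values until one hashes as required.
    counter = 0
    while True:
        candidate = counter.to_bytes(8, "big")
        if pow_valid(salt, candidate, difficulty):
            return candidate
        counter += 1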

The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed.

Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At the worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.

If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker that is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.
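
A sketch of that bucket ladder, with the same caveat that this is illustrative Python rather than keysafe's Haskell; the bucket sizes, refill rates and difficulty levels are made up:

import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = capacity
        self.stamp = time.monotonic()

    def take(self):
        # Refill according to elapsed time, then try to take one token.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.refill)
        self.stamp = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per difficulty level: the first is free, deeper buckets
# demand progressively more proof of work.
buckets = [(0, TokenBucket(100, 1.0)),
           (4, TokenBucket(100, 0.5)),
           (8, TokenBucket(100, 0.25))]

def admit(request_difficulty):
    # Serve from the first non-empty bucket the client has worked
    # enough for; otherwise reject and demand the next difficulty level.
    for difficulty, bucket in buckets:
        if request_difficulty >= difficulty and bucket.take():
            return True
    return False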


So far so simple really, but this has a big problem: What prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, and flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.

Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.

Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because a false positive only lets some made-up request ID be used. A false positive in the second bloom filter will cause the server to reject the client's proof of work. But the server can just request more work, or send a new request ID, and the client will follow along.

The other gotcha with bloom filters is that filling them up too far sets too many bits, and so false positive rates go up. To deal with this, keysafe just keeps count of how many request IDs it has generated, and once it gets to be too many to fit in a bloom filter, it makes a new, empty bloom filter and starts storing request IDs in it. The old bloom filter is still checked too, providing a grace period for old request IDs to be used. Using bloom filters that occupy around 32 mb of RAM, this rotation only has to be done every million requests or so.
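
A minimal bloom filter with this rotation scheme might look as follows (again an illustrative Python sketch; the sizes and hash construction are placeholders, not keysafe's actual parameters):

import hashlib

class Bloom:
    def __init__(self, bits=2**23, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

class RotatingBloom:
    # Once `current` has seen max_items request IDs, it becomes `old`
    # (the grace period) and a fresh, empty filter takes its place.
    def __init__(self, max_items=10**6):
        self.current, self.old = Bloom(), Bloom()
        self.count, self.max_items = 0, max_items

    def add(self, item):
        if self.count >= self.max_items:
            self.old, self.current, self.count = self.current, Bloom(), 0
        self.current.add(item)
        self.count += 1

    def __contains__(self, item):
        return item in self.current or item in self.old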

But, that rotation opens up another DOS! An attacker could cause lots of request IDs to be generated, and so force the server to rotate its bloom filters too quickly, which would prevent any requests from being accepted. To solve this DOS, just use one more token bucket, to limit the rate that request IDs can be generated, so that the time it would take an attacker to force a bloom filter rotation is long enough that any client will have plenty of time to complete its proof of work.


This sounds complicated, and probably it is, but the implementation only took 333 lines of code. About the same number of lines that it took to implement the entire keysafe HTTP client and server using the amazing servant library.

There are a number of knobs that may need to be tuned to dial it in, including the size of the token buckets, their refill rate, the size of the bloom filters, and the number of argon2 iterations in the proof of work. Servers may eventually need to adjust those on the fly, so that if someone decides it's worth burning large quantities of CPU to abuse keysafe for general data storage, the server throttles down to a rate that will take a very long time to fill up its disk.

This protects against DOS attacks that fill up the keysafe server storage. It does not prevent a determined attacker, who has lots of CPU to burn, from flooding so many requests that legitimate clients are forced to do an expensive proof of work and then time out waiting for the server. But that's an expensive attack to keep running, and the proof of work can be adjusted to make it increasingly expensive.

13 September, 2016 05:14AM

September 12, 2016

hackergotchi for Norbert Preining

Norbert Preining

Farewell academics talk: Colloquium Logicum 2016 – Gödel Logics

Today I had my invited talk at the Colloquium Logicum 2016, where I gave an introduction to and overview of the state of the art in Gödel Logics. Having contributed considerably to the state we are in now, it was a pleasure to have the opportunity to give an invited talk on this topic.

cl16-preining

It was also somehow a strange talk (slides are available here), as it was my last as an "academic". After JAIST rejected an extension of my contract (foundational research, where are you going? Foreign faculty, where?) I have been unemployed – not a fun state in Japan, but also not the first time for me; my experiences span Austrian and Italian unemployment offices. This unemployment is going to end this weekend, and after 25 years in academics I say good-bye.

Considering that this year I had two invited talks and one teaching assignment for ESSLLI, and submitted three articles (with another two forthcoming), JAIST is missing out on quite a share of achievements in its faculty database. Not my problem anymore.

It was a good time in academics, and I will surely not stop doing research, but I am looking forward to new challenges and new ways of collaboration and development. I will surely miss academics, but for now I will dedicate my energy to different things in life.

Thanks to all the colleagues who did care, and for the rest, I have already forgotten you.

12 September, 2016 09:27PM by Norbert Preining

hackergotchi for Keith Packard

Keith Packard

hopkins

Hopkins Trailer Brake Controller in Subaru Outback

My minivan transmission gave up the ghost last year, so I bought a Subaru Outback to pull my t@b travel trailer. There isn't a huge amount of space under the dash, so I didn't want to mount a trailer brake controller in the 'usual' spot, right above my right knee.

Instead, I bought a Hopkins InSIGHT brake controller, 47297. That comes in three separate pieces which allows for very flexible mounting options.

I stuck the 'main' box way up under the dash on the left side of the car. There was a nice flat spot with plenty of space that was facing the right direction:

The next trick was to mount the display and control boxes around the storage compartment in the center console:

Routing the cables from the controls over to the main unit took a piece of 14ga solid copper wire to use as a fishing line. The display wire was routed above the compartment lid, the control wire was routed below the lid.

I'm not entirely happy with the wire routing; I may drill some small holes and then cut the wires to feed them through.

12 September, 2016 08:22PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

mtpfs, feh and not being able to share the debconf experience.

I have been sick for about 2 weeks now, hence haven't written. I had joint pains and am still weak. There have been lots of reports of malaria, chikungunya and dengue fever around the city. The only thing I came to know is how lucky I am to be able to move around on 2 legs and how powerless and debilitating it feels when you can't move. In the interim I saw 'Me Before You' and, after going through my own minuscule experience, I could relate to Will Taylor's character. If I was in his place, I would probably make the same choices.

But my issues are and were slightly different. Last month I was supposed to share my debconf experience at the local PLUG meet. For that purpose, I put some pictures from my phone on a pen-drive to share. But when I reached the venue, I found out that I had forgotten to bring the pen-drive. I had also used the mogrify command from the ImageMagick stable to lossily compress the images on the pen-drive so they would be easier on image viewers.

But that was not to be, and at the last moment I had to plug my phone into the laptop's USB port and show some pictures from it. This was not good. I knew the phone was mounted somewhere but hadn't looked at where.

After coming back home, it took me hardly 10 minutes to find out where it was mounted. It is not mounted under /media/shirish but under /run/user/1000/gvfs . Listing that directory shows mtp:host=%5Busb%3A005%2C007%5D .

I didn’t need any extra packages under Debian to make it work. Interestingly, the only image viewer which seems to be able to work with all the images is ‘feh’, a command-line image viewer in Debian.

[$] aptitude show feh
Package: feh
Version: 2.16.2-1
State: installed
Automatically installed: no
Priority: optional
Section: graphics
Maintainer: Debian PhotoTools Maintainers
Architecture: amd64
Uncompressed Size: 391 k
Depends: libc6 (>= 2.15), libcurl3 (>= 7.16.2), libexif12 (>= 0.6.21-1~), libimlib2 (>= 1.4.5), libpng16-16 (>= 1.6.2-1), libx11-6, libxinerama1
Recommends: libjpeg-progs
Description: imlib2 based image viewer
feh is a fast, lightweight image viewer which uses imlib2. It is commandline-driven and supports multiple images through slideshows, thumbnail
browsing or multiple windows, and montages or index prints (using TrueType fonts to display file info). Advanced features include fast dynamic
zooming, progressive loading, loading via HTTP (with reload support for watching webcams), recursive file opening (slideshow of a directory
hierarchy), and mouse wheel/keyboard control.
Homepage: http://feh.finalrewind.org/

I did try various things to get it to mount under /media/shirish/ but as of today have had no luck. I am running Android 6.0 (Marshmallow) and have enabled ‘USB debugging’ with help from my friend Akshat. I even changed the /etc/fuse.conf options, but even that didn’t work.

#cat /etc/fuse.conf
[sudo] password for shirish:
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)

# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
mount_max = 1

# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other

One way which I haven’t explored is adding an entry to /etc/fstab. If anybody knows of a solution which doesn’t involve changing the contents of /etc/fstab, but still gets the card and phone directories mounted under /media/<user>/ (in my case /media/shirish), I would be interested to know about it. I would like /etc/fstab to remain as it is.
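
One avenue that might be worth trying (I have not confirmed it on my phone yet) is one of the FUSE-based MTP tools in the archive, which mount wherever you point them, e.g. jmtpfs; the mount point below is just an example:

$ sudo apt install jmtpfs
$ mkdir -p ~/phone
$ jmtpfs ~/phone                 # mounts the first MTP device found
$ feh ~/phone/Card/DCIM/Camera   # actual path varies by phone
$ fusermount -u ~/phone          # unmount when done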

I am using Samsung J5 (unrooted) –

Btw, I tried all the mtpfs packages in Debian testing, but without any meaningful change 😦

Look forward to tips.


Filed under: Miscellaneous Tagged: #Android, #Debconf16, #debian, #mtpfs, feh, FUSE, PLUG

12 September, 2016 05:29PM by shirishag75

hackergotchi for Steve Kemp

Steve Kemp

If your code accepts URIs as input..

There are many online sites that accept input read from remote locations. For example, a site might try to extract all the text from a webpage, or show you the HTTP headers a given server sends back in response to a request.

If you run such a site you must make sure you validate the scheme of the URI you're given - and remember to do that again for any HTTP redirects you follow.
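
As a minimal sketch of the kind of check I mean, here is how a small shell wrapper around curl might refuse non-HTTP(S) schemes, both up front and across redirects (the wrapper itself is hypothetical):

#!/bin/sh
# fetch-url: fetch a remote resource, but only over http/https.
url="$1"
case "$url" in
    http://*|https://*) ;;   # allowed schemes
    *) echo "refusing non-http(s) URI: $url" >&2; exit 1 ;;
esac
# --proto and --proto-redir restrict the protocols curl may use,
# including after a redirect, so a 302 to file:// is refused too.
curl --proto '=http,https' --proto-redir '=http,https' -sSL -- "$url"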

Really the issue here is a confusion between URL & URI.

The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input:

file:///etc/passwd

The software was vulnerable, read the file, and showed it to me.

The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input issue, and it reminded me that there's a very real class of security problems here.

The site in question was http://fuckyeahmarkdown.com/, which allows you to enter a URL to convert to Markdown - I found it via the Hacker News submission.

The following link shows the contents of /etc/hosts, and demonstrates the problem:

http://fuckyeahmarkdown.example.com/go/?u=file:///etc/hosts&read=1&preview=1&showframe=0&submit=go

The output looked like this:

..
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 stage
127.0.0.1 files
127.0.0.1 brettt..
..

In the actual output of '/etc/passwd' all newlines had been stripped (which I now recognize as an artifact of the Markdown processing).

UPDATE: The problem is fixed now.

12 September, 2016 04:33PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

apt-offline 1.7.1 released

I am happy to mention the release of apt-offline, version 1.7.1.

This release includes many bug fixes, code cleanups and better integration.

  • Integration with PolicyKit
  • Better integration with apt gpg keyring
  • Resilient to failures when a sub-task errors out
  • New Feature: Changelog
    • This release adds the ability to deal with package changelogs: generate them based on what is installed ('set' command option: --generate-changelog), extract changelogs from downloaded packages (currently supported with python-apt only), and display them during installation ('install' command option: --skip-changelog, if you want to skip the changelog display). See the sketch after this list.
  • New Option: --apt-backend
    • Users can now choose an apt backend. Currently supported: apt, apt-get (default) and python-apt
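
A possible round-trip with the new changelog support might look like this (paths are illustrative):

# on the offline machine: record what is needed, including changelogs
$ sudo apt-offline set /tmp/offline.sig --update --upgrade --generate-changelog
# on a connected machine: download everything the signature file lists
$ apt-offline get /tmp/offline.sig --bundle /tmp/offline-bundle.zip
# back on the offline machine: install (add --skip-changelog to skip
# the changelog display)
$ sudo apt-offline install /tmp/offline-bundle.zip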

 

Hopefully, there will be one more release, before the release to Stretch.

apt-offline can be downloaded from its homepage or from its GitHub page.

Update: The PolicyKit integration requires running the apt-offline-gui command with pkexec (screenshot). It also works fine with sudo, su, etc.

 


12 September, 2016 10:41AM by Ritesh Raj Sarraf

Reproducible builds folks

Reproducible Builds: week 72 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 4 and Saturday September 10 2016:

Reproducible work in other projects

Python 3.6's dictionary type now retains insertion order. Thanks to themill for the report.
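
Why this matters for reproducibility: anything that serializes a dict into a build artifact inherits its iteration order, and before 3.6 that order could change from run to run under hash randomization. An illustrative session (the exact orders shown are made up, since they differ per run by design):

$ python3.5 -c 'print(list({"a": 1, "b": 2, "c": 3}))'
['b', 'c', 'a']
$ python3.5 -c 'print(list({"a": 1, "b": 2, "c": 3}))'
['a', 'c', 'b']
# from Python 3.6 on, insertion order is kept: always ['a', 'b', 'c']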

In coreboot, Alexander Couzens committed a change to make their release archives reproducible.

Patches submitted

Reviews of unreproducible packages

We've been adding to our knowledge about identified issues. 3 issue types have been added:

1 issue type has been updated:

16 have been updated:

13 have been removed, not including removed packages:

Hundreds of packages have been tagged with the more generic captures_build_path, and many with captures_kernel_version, user_hostname_manually_added_requiring_further_investigation, captures_shell_variable_in_autofoo_script, etc.

Particular thanks to Emanuel Bronshtein for his work here.

Weekly QA work

FTBFS bugs have been reported by:

  • Aaron M. Ucko (1)
  • Chris Lamb (7)

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org:

  • F-Droid:
    • Hans-Christoph Steiner found after extensive debugging that for kvm-on-kvm, vagrant from stretch is needed (or a backport, but that seems harder than setting up a new VM).
  • FreeBSD:
    • Holger updated the VM for testing FreeBSD to FreeBSD 10.3.

Misc.

This week's edition was written by Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

12 September, 2016 07:49AM

September 11, 2016

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/34-36

as before, my work on release-critical bugs was centered around perl issues. here's the list of bugs I worked on (a short sketch of the recurring @INC failure and its fixes follows the list):

  • #687904 – interchange-ui: "interchange-ui: cannot install this package"
    (re?)apply patch from #625904, upload to DELAYED/5
  • #754755 – src:libinline-java-perl: "libinline-java-perl: FTBFS on mips: test suite issues"
    prepare a preliminary fix (pkg-perl)
  • #821994 – src:interchange: "interchange: Build arch:all+arch:any but is missing build-{arch,indep} targets"
    apply patch from sanvila to add targets, upload to DELAYED/5
  • #834550 – src:interchange: "interchange: FTBFS with '.' removed from perl's @INC"
    patch to "require ./", upload to DELAYED/5
  • #834731 – src:kdesrc-build: "kdesrc-build: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to "require ./", upload to DELAYED/5
  • #834738 – src:libcatmandu-mab2-perl: "libcatmandu-mab2-perl: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to "require ./" (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    add some debugging info
  • #835133 – libnet-jabber-perl: "libnet-jabber-perl: FTBFS in testing"
    add patch from CPAN RT (pkg-perl)
  • #835206 – src:munin: "munin: FTBFS with '.' removed from perl's @INC"
    add patch from Dom to call perl with -I., upload to DELAYED/5, then cancelled on maintainer's request
  • #835353 – src:pari: "pari: FTBFS with '.' removed from perl's @INC"
    add patch to call perl with -I., upload to DELAYED/5
  • #835711 – src:libconfig-identity-perl: "libconfig-identity-perl: FTBFS: Tests failures"
    run tests under gnupg1 (pkg-perl)
  • #837136 – libgtk3-perl: "libgtk3-perl: FTBFS: t/overrides.t failure"
    add patch from CPAN RT (pkg-perl)
  • #837237 – src:libtest-file-perl: "libtest-file-perl: FTBFS: Tests failures"
    add patch so tests find their common files again (pkg-perl)
  • #837249 – src:libconfig-record-perl: "libconfig-record-perl: FTBFS: lib/Config/Record.pm: No such file or directory at Config-Record.spec.PL line 13."
    fix build in debian/rules (pkg-perl)
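
for context, a minimal sketch of the recurring failure mode behind the "FTBFS with '.' removed from perl's @INC" bugs above, and the two fixes used (the helper file name is made up):

# perl 5.26 removes "." from @INC, so a plain relative require dies:
$ perl -e 'require "helper.pl"'
Can't locate helper.pl in @INC (...) at -e line 1.
# fix 1: make the path explicitly relative ("require ./"), which
# loads the file directly instead of searching @INC:
$ perl -e 'require "./helper.pl"'
# fix 2: put "." back on @INC for this invocation only:
$ perl -I. -e 'require "helper.pl"'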

11 September, 2016 09:42PM

Niels Thykier

Unseen changes to lintian.d.o

We have been making a lot of minor changes to lintian.d.o and the underlying report framework. Most of them were hardly noticeable to the naked eye. In fact, I probably would not have spotted any of them if I had not been involved in writing them. Nonetheless, I felt like sharing them, so here goes.🙂

User “visible” changes:

In case you were wondering, the section title is partly a pun, as half of these changes were intended to assist visually impaired users. They were triggered by me running into Sam Hartman at DebConf16, where I asked him how easy Debian’s websites were for blind people to use. Allegedly, we are generally doing quite well in his opinion (with one exception, for which Sam filed Bug#830213), which was a positive surprise for me.

On a related note: Thanks to Luke Faraone and Asheesh Laroia for helping me get started on these changes.🙂

Reporting framework / “Internal” changes:

With the last change plus the “--no-generate-reports” option, we were able to schedule lintian more frequently. Originally, lintian only ran once a day. With “--no-generate-reports”, we added a second run, and with the last changes we bumped it to 4 times a day. Unsurprisingly, this means that we are now reprocessing the archive a lot faster than previously.

All of the above is basically all the noteworthy changes to the Lintian reporting framework since the Partial rewrite of lintian’s reporting setup (~1½ years ago).


Filed under: Debian, Lintian

11 September, 2016 07:50PM by Niels Thykier

debhelper 10 is now available

Today, debhelper 10 was uploaded to unstable and is coming to a mirror near you “really soon now”. The actual changes between version “9.20160814” and version “10” are rather modest. However, it does mark the completion of debhelper compat 10, which has been under way since early 2012.

Some highlights from compat 10 include:

  • The dh sequence in compat 10 automatically regenerates autotools files via dh_autoreconf
  • The dh sequence in compat 10 includes the dh-systemd debhelper utilities
  • dh sequencer based packages now default to building in parallel (i.e. “--parallel” is the default in compat 10; see the sketch after this list)
  • dh_installdeb now properly shell-escapes maintscript arguments.
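
Taken together, those defaults shrink a typical rules file considerably. As a minimal sketch, a package using compat 10 (“10” in debian/compat) can often get away with just:

#!/usr/bin/make -f
# debian/rules: autoreconf, dh-systemd handling and parallel building
# are all defaults under compat 10
%:
	dh $@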

For the full list of changes in compat 10, please review the contents of the debhelper(7) manpage. Beyond that, you may also want to upgrade your lintian to 2.5.47 as it is the first version that knows that compat 10 is stable.

 


Filed under: Debhelper, Debian

11 September, 2016 06:06PM by Niels Thykier

Hideki Yamane

mirror disk usage: not so much as expected

Debian repository mirror server disk usage.

I guess many new packages are added to the repo, but disk usage has not grown as much as expected. Why?

11 September, 2016 09:02AM by Hideki Yamane (noreply@blogger.com)

September 10, 2016

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

New package gettz on CRAN

gettz is now on CRAN in its initial release 0.0.1.

It provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo.

Duane McCully provided a nice StackOverflow answer with code that offers fallbacks via /etc/timezone (on Debian/Ubuntu) or /etc/sysconfig/clock (on RedHat/CentOS/Fedora, and rumour has it, BSD* systems) or /etc/TIMEZONE (on Solaris). The gettz micro-package essentially encodes that approach so that we have an optional fallback when Sys.timezone() comes up empty.
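
The fallback chain itself is simple enough to sketch in shell (assuming the file formats just described; the R implementation differs in detail):

# print a best-effort timezone name when Sys.timezone() comes up empty
if [ -r /etc/timezone ]; then              # Debian/Ubuntu
    cat /etc/timezone
elif [ -r /etc/sysconfig/clock ]; then     # RedHat/CentOS/Fedora
    sed -n 's/^ZONE=//p' /etc/sysconfig/clock | tr -d '"'
elif [ -r /etc/TIMEZONE ]; then            # Solaris
    sed -n 's/^TZ=//p' /etc/TIMEZONE
fi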

In the previous paragraph, note the stark absence of OS X, where there seems to be nothing to query, and of course Windows. Contributions for either would be welcome.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 September, 2016 09:40PM

hackergotchi for Sylvain Le Gall

Sylvain Le Gall

Release of OASIS 0.4.7

I am happy to announce the release of OASIS v0.4.7.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.
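
For readers who have not used it: project metadata lives in an _oasis file, and the standard entry points OASIS generates are used roughly like this (a minimal sketch):

$ oasis setup                # generate setup.ml and friends from _oasis
$ ocaml setup.ml -configure
$ ocaml setup.ml -build
$ ocaml setup.ml -install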

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor)
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation)

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the achievement of a huge amount of work. The addition of OMake as a plugin is a huge step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we have made some clear improvements by getting rid of various side effects (like the requirement of using "chdir" to handle "-C", which led to propagating ~ctxt everywhere and to designing OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

10 September, 2016 08:00PM by gildor

Enrico Zini

Dreaming of being picked

From "Stop stealing dreams":

«Settling for the not-particularly uplifting dream of a boring, steady job isn’t helpful. Dreaming of being picked — picked to be on TV or picked to play on a team or picked to be lucky — isn’t helpful either. We waste our time and the time of our students when we set them up with pipe dreams that don’t empower them to adapt (or better yet, lead) when the world doesn’t work out as they hope.

The dreams we need are self-reliant dreams. We need dreams based not on what is but on what might be. We need students who can learn how to learn, who can discover how to push themselves and are generous enough and honest enough to engage with the outside world to make those dreams happen.»

This made me think that I know many hero stories based on "the chosen", like The Matrix, or most superheroes, who get their powers either from some entity choosing them or from chance.

I have a hard time thinking of a superhero who becomes one just by working hard at acquiring and honing their skills: I can only think of Batman and Iron Man, and they start off super rich.

If I think of people who start from scratch as commoners and work hard to become exceptional, in the standard superhero narrative, I can only think of supervillains.

Scary.

It makes me feel culturally biased into thinking that a common person cannot be trusted to act responsibly, and that only the rich, the chosen and the aristocrats can.

As a bias it may serve the rich and the aristocrats, but I don't think it serves society as a whole.

10 September, 2016 07:47AM

September 09, 2016

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.6: bugfix update

Relatively quickly after version 0.4.5 of RProtoBuf was released, we have a new version 0.4.6 to announce which appeared on CRAN today.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This version contains a contributed bug-fix pull request covering conversion of zero-length vectors, and adding native support for S4 objects. At the request / suggestion of the CRAN maintainers, it also uncomments a LaTeX macro in the vignette (corresponding to our recent JSS paper) which older R versions do not (yet) have in their jss.cls file.

Changes in RProtoBuf version 0.4.6 (2016-09-08)

  • Support for serializing zero-length objects was added (PR #18 addressing #13)

  • S4 objects are natively encoded (also PR #18)

  • The vignette based on the JSS paper no longer uses a macro available only with the R-devel version of jss.cls, and hence builds on all R versions

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 September, 2016 11:40PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Thinking about CI, maybe writing ick2

A year ago I got tired of Jenkins and wrote a CI system for myself, Ick. It's served me well since, but it's a bit clunky and awkward and I have to hope nobody else wants to use it.

I've been thinking about re-architecting Ick from scratch, and so I wrote down some of my thinking about this. It's very raw, but just in case someone else might be interested, I put it online at ick2.

At this point I'm still thinking about very high level concepts. I've not written any code, and probably won't in the next couple of months. But I had to get this out of my brain.

09 September, 2016 06:04PM

hackergotchi for Steve McIntyre

Steve McIntyre

Time flies

Slightly belated...

Another year, another OMGWTFBBQ. By my count, we had 49 people (and a dog) in my house and garden at the peak on Saturday evening. It was excellent to see people from all over coming together again, old friends and new. This year we had some weather issues, but due to the delights of gazebo technology most people stayed mostly dry. :-)

Also: thanks to a number of companies near and far who sponsored the important refreshments for the weekend:

As far as I could tell, everybody enjoyed themselves; I know I definitely did!

09 September, 2016 03:57PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Metropolis

Every year since 2010 the Whitley Bay Film Festival has put on a programme of movies in my home town, often with some quirk or gimmick. A few years back we watched "Dawn Of The Dead" in a shopping centre—the last act was interrupted by a fake film-reel break, then a load of zombies emerged from the shops. Sometime after that, we saw "The Graduate" within a Church as part of their annual "Secret Cinema" showing. Other famous stunts (which I personally did not witness) include a screening of Jaws on the beach and John Carpenter's "The Fog" in Whitley Bay Lighthouse.

This year I only went to one showing, Fritz Lang's Metropolis. Two twists this time: it was being shown in The Rendezvous Cafe, an Art-Deco themed building on the sea front; the whole film was accompanied by a live, improvised synthesizer jam by a group of friends and synth/sound enthusiasts who branded themselves "The Mediators" for the evening.

I've been meaning to watch Metropolis for a long time (I've got the Blu-Ray still sat in the shrink-wrap) and it was great to see the newly restored version, but the live synth accompaniment was what really made the night special for me. They used a bunch of equipment, most notably a set of Korg Volcas. The soundtrack varied in style and intensity to suit the scenes, with the various under-city scenes backed by a pumping, industrial-style improvisation which sounded quite excellent.

I've had an interest in playing with synthesisers and making music for years, but haven't put the time in to do it properly. I left newly inspired and energised to finally try to make the time to explore it.

09 September, 2016 10:55AM

September 08, 2016

Jamie McClelland

Wait... is that how you are supposed to configure your SSD card?

I bought a laptop with only SSD drives a while ago and, based on a limited amount of reading, added the "discard" option to my /etc/fstab file for all partitions and happily went on my way, expecting to avoid the performance degradation problems that happen on SSDs without this setting.

Yesterday, after a several month ordeal, I finally installed SSD drives in one of May First/People Link's servers and started doing more research to find the best way to set things up.

I was quite surprised to learn that my change in /etc/fstab had accomplished nothing. Well, not entirely true: my /boot partition was still getting empty sectors reported to the SSD.

Since my filesystem is on top of LVM, and LVM is on top of an encrypted disk, those messages from the file system to the disk were not getting through. I learned that when I tried to run the fstrim command on one of the partitions and received the message that the disk didn't support it. Since my /boot partition is not on LVM or encrypted, it worked on /boot.

I then made the necessary changes to /etc/lvm/lvm.conf and /etc/crypttab, restarted and... same result. Then I ran update-initramfs -u, rebooted, and now fstrim works. I decided to remove the discard option from /etc/fstab and will set up a cron job to run fstrim periodically.
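
For anyone following along, the changes in question look roughly like this (device names and UUIDs are illustrative; keep your own):

# /etc/lvm/lvm.conf, in the devices { } section:
issue_discards = 1

# /etc/crypttab: add the discard option to the encrypted device, e.g.:
sda2_crypt UUID=<your-uuid> none luks,discard

# rebuild the initramfs so early boot picks up the change, then reboot:
$ sudo update-initramfs -u

# and instead of the discard mount option, trim periodically, e.g. from
# a weekly cron job:
$ sudo fstrim --all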

Also, I learned of some security implications of using trim on an encrypted disk which don't seem to outweigh the benefits.

08 September, 2016 05:43PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Installing files for other applications with autotools

Let's say you have a configure.ac file which contains this:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
AC_SUBST([p11_moduledir])

and that it goes with a Makefile.am which contains this:

dist_p11_module_DATA = foo.module

Then things should work fine, right? When you run make install, your modules install to the right location, and p11-kit will pick up everything the way it should.

Well, no. Not exactly. That is, it will work for the common case, but not for some other cases. You see, if you do that, then make distcheck will fail pretty spectacularly. At least if you run it as non-root (which you really, really should do). The problem is that by specifying the p11_moduledir variable in that way, you hardcode it; it doesn't honour the $prefix or $DESTDIR variables. The result is that when a user installs your package specifying --prefix=/opt/testmeout, it will still overwrite files in the system directory. Obviously, that's not desirable.

The $DESTDIR bit is especially troublesome, as it makes packaging your software for the common distributions complicated (most packaging software heavily relies on DESTDIR support to "install" your software in a staging area before turning it into an installable package).

So what's the right way then? I've been wondering about that myself, and asked for the right way to do something like that on the automake mailinglist a while back. The answer I got there wasn't entirely satisfying, and at the time I decided to take the easy way out (EXTRA_DIST the file, but don't actually install it). Recently, however, I ran against a similar problem for something else, and decided to try to do it the proper way this time around.

p11-kit, like systemd, ships pkg-config files which contain variables for the default locations to install files into. These variables' values are meant to be easy to use from scripts, so that no munging of them is required if you want to directly install to the system-wide default location. The downside of this is that, if you want to install to the system-wide default location by default from an autotools package (but still allow the user to --prefix your installation into some other place, accepting that then things won't work out of the box), you do need to do the aforementioned munging.

Luckily, that munging isn't too hard, provided whatever package you're installing for did the right thing:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
PKG_CHECK_VAR([p11kit_libdir], "p11-kit-1", "libdir")
if test -z "$ac_cv_env_p11_moduledir_set"; then
    p11_moduledir=$(echo "$p11_moduledir" | sed -e "s,$p11kit_libdir,\${libdir},g")
fi
AC_SUBST([p11_moduledir])

Whoa, what just happened?

First, we ask p11-kit-1 where it expects modules to be. After that, we ask p11-kit-1 what was used as "libdir" at installation time. Usually that should be something like /usr/lib or /usr/lib/<GNU arch triplet> or some such, but it could really be anything.

Next, we test to see whether the user set the p11_moduledir variable on the command line. If so, we don't want to munge it.

The next line looks for the value of whatever libdir was set to when p11-kit-1 was installed in the value of p11_module_path, and replaces it with the literal string ${libdir}.

Finally, we exit our if and AC_SUBST our value into the rest of the build system.

The resulting package will have the following semantics:

  • If someone installs p11-kit-1 and your package with the same prefix, the files will install to the correct location.
  • If someone installs both packages with a different prefix, then by default the files will not install to the correct location. This is what you'd want, however; using a non-default prefix is the only way to install something as non-root, and if root installed something into /usr, a normal user wouldn't be able to fix things.
  • If someone installs both packages with a different prefix, but sets the p11_moduledir variable to the correct location at configure time, then things will work as expected (see the sketch below).
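
In concrete terms (PKG_CHECK_VAR declares the variable via AC_ARG_VAR, so it can be passed straight to configure; the paths here are made up):

# same prefix as p11-kit: the derived default is correct
$ ./configure && make && make DESTDIR=/tmp/stage install
# different prefix, explicitly pointing the modules at the right place:
$ ./configure --prefix=/opt/testmeout p11_moduledir=/usr/lib/pkcs11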

I suppose it would've been easier if the PKG_CHECK_VAR macro could (optionally) do that munging by itself, but then, you can't have everything.

08 September, 2016 12:44PM