July 22, 2017

Junichi Uekawa

asterisk fails to start on my raspberry pi.

asterisk fails to start on my raspberry pi. I don't quite understand what the error message means, but systemctl tells me there was a timeout. I don't know which timeout it hits.
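For digging further, the systemd journal usually records which unit operation timed out; a quick sketch, assuming the unit is named asterisk.service:

  systemctl status asterisk.service
  journalctl -u asterisk.service -b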

22 July, 2017 08:02AM by Junichi Uekawa

July 21, 2017

Michal Čihař

Making Weblate more secure and robust

Running a publicly available web application always brings challenges, both in terms of security and more generally in handling untrusted data. Security-wise Weblate has always been quite good (mostly thanks to using Django, which comes with built-in protection against many vulnerabilities), but there were always things to improve in input validation or possible information leaks.

When Weblate joined HackerOne (see our first month's experience with it), I was hoping to get some security-driven code review, but apparently most people there are focused on black-box testing. I can certainly understand that - it's easier to conduct and you need much less knowledge of the tested website to perform it.

One big area where reports against Weblate came in was authentication. Originally we were relying mostly on the default authentication pipeline coming with Python Social Auth, but that showed some possible security implications, and we ended up with a heavily customized authentication pipeline to avoid several risks. Some patches were submitted back and some issues reported, but we've still diverged quite a lot in this area.

The second area where scanning was apparently performed, but almost no reports came in, was input validation. Thanks to the excellent XSS protection in Django, nothing was really found. On the other hand, this triggered several internal server errors on our side. At this point I was really happy to have Rollbar configured to track all errors happening in production. With all such errors properly recorded and grouped, it was really easy to go through them and fix them in our codebase.
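For reference, hooking Rollbar into a Django project takes only a little configuration; the sketch below is a minimal example rather than Weblate's actual settings, and the access token is a placeholder:

  # settings.py (minimal pyrollbar sketch)
  MIDDLEWARE = [
      # ... the usual Django middleware ...
      # reports otherwise-unhandled exceptions to Rollbar
      'rollbar.contrib.django.middleware.RollbarNotifierMiddleware',
  ]

  ROLLBAR = {
      'access_token': 'POST_SERVER_ITEM_ACCESS_TOKEN',  # placeholder
      'environment': 'production',
  }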

Most of the related fixes have landed in Weblate 2.14 and 2.15, but this is obviously an ongoing effort to make Weblate better with every release.

Filed under: Debian English SUSE Weblate

21 July, 2017 10:00AM

July 20, 2017

Gunnar Wolf

Hey, everybody, come share the joy of work!

I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I had expected: Not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have also started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far Spanish (where Leo Arias and I are working) is the most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there are a small number of issues to fix. But they are few and easy to spot, and your translations will not be hampered much while I sort out the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate a sizeable amount of work into Spanish: As my university wants to publish the resulting work on paper, we would be most grateful if you could fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.

20 July, 2017 05:17AM by gwolf

Norbert Preining

The poison of academia.edu

Everyone working in academia or research has surely heard of academia.edu. It started out as a service for academics; in their own words:

Academia.edu is a platform for academics to share research papers. The company’s mission is to accelerate the world’s research.

But as with most of these platforms, they need to make money, and for some months now academia.edu has been pushing users to pay for a premium account at the incredible rate of 8.25 USD per month.

This is about the same as you pay for Netflix or some other streaming service. If you remain on the free side, what remains for you to do is SNS-like stuff, and uploading your papers so that academia.edu can make money from them.

What really surprises me is that they can pull this off on a .edu domain. The registry requirements state:

For Institutions Within the United States. To obtain an Internet name in the .edu domain, your institution must be a postsecondary institution that is institutionally accredited by an agency on the U.S. Department of Education’s list of Nationally Recognized Accrediting Agencies (see recognized accrediting bodies).
Educause web site

Seeing what they are doing, I think it is high time to request removal of the domain name.

So let us see what they are offering for their paid service:

  • Reader “The Readers feature tells you who is reading, downloading, and bookmarking your papers.”
  • Mentions “Get notified when you’re cited or mentioned, including in papers, books, drafts, theses, and syllabuses that Google Scholar can’t find.”
  • Advanced Search “Search the full text and citations of over 18 million papers”
  • Analytics “Learn more about who visits your profile”
  • Homepage – automatically generated home page from the data you enter into the system

On the other hand, the free service consists of SNS elements where you can follow other researchers and see when they upload something or enter an event, and that is more or less it. They have lured a considerable number of academics into this service, gathered lots of papers, and now they are showing their real face – money.

In contrast to LinkedIn, which also offers a paid tier but keeps the free tier reasonably usable, academia.edu has broken its promise to “accelerate the world’s research” and, even worse, is NOT a “platform for academics to share research papers”. They are collecting papers and selling access to them, just like the publishers’ paywalls.

I consider this kind of service highly poisonous for the academic environment and researchers.

20 July, 2017 01:28AM by Norbert Preining

Benjamin Mako Hill

Testing Our Theories About “Eternal September”

Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community on Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on their destructive effects and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense among participants of protecting the community’s immersive environment.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length, and our paper ends with a lament that our findings merely reflected “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, both quantitative and qualitative. We think Lin’s paper complements ours beautifully, we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

20 July, 2017 12:12AM by Benjamin Mako Hill

July 19, 2017

Dirk Eddelbuettel

RcppAPT 0.0.4

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.
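As a quick illustration of the new functions (a sketch; see the vignette for the exact signatures, which may take additional arguments):

  library(RcppAPT)
  getDepends("r-base-core")        # what r-base-core depends on
  reverseDepends("r-base-core")    # what depends on r-base-core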

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)

  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)

  • Added dumpPackages, showSrc

Changes in version 0.0.2 (2016-04-04)

  • Added reverseDepends, dumpPackages, showSrc

Changes in version 0.0.1 (2015-02-20)

  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 July, 2017 12:12PM

Lars Wirzenius

Dropping Yakking from Planet Debian

A couple of people objected to having Yakking on Planet Debian, so I've removed it.

19 July, 2017 05:54AM

July 18, 2017

Daniel Silverstone

Yay, finished my degree at last

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV".

Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010. I started by transferring the level 1 credits from my stint at UCL back in 1998/1999 which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'.

Then I took level 2 courses, M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so...

At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), and finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)' which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink, doing interesting work in and around Linux systems and Trustable software. Along with a good few of my F/LOSS colleagues, I have designed and built Git server software which is in use in some universities and many companies. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!

18 July, 2017 09:56PM by Daniel Silverstone

Foteini Tsiami

Internationalization, part three

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others are KVM or VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy-to-use GUI and supports snapshots, which mark the state of the guest operating system at a point in time and let us revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
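Snapshots can also be driven from the command line, which is handy for scripted test runs. A sketch, assuming the VM is named "debian-stretch":

  VBoxManage snapshot "debian-stretch" take "clean-install"    # mark the clean state
  VBoxManage controlvm "debian-stretch" poweroff               # stop the VM before restoring
  VBoxManage snapshot "debian-stretch" restore "clean-install"
  VBoxManage startvm "debian-stretch"                          # boot the clean system again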


18 July, 2017 10:18AM by fottsia

Reproducible builds folks

Reproducible Builds: week 116 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (47)

diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones:

After the release, development continued in git with contributions from:

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

reprotest development

Development continued in git with contributions from:

buildinfo.debian.net development

tests.reproducible-builds.org

  • Mattia Rizzolo:
    • Make database backups quicker to restore by avoiding pg_dump's --column-inserts option.
    • Fixup the deployment scripts after the stretch migration.
    • Fixup Apache redirects that were broken after introducing the buster suite.
    • Fixup diffoscope jobs that were not always installing the highest possible version of diffoscope.
  • Holger Levsen:
    • Add a node health check for a too big jenkins.log.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

18 July, 2017 07:29AM

Matthew Garrett

Avoiding TPM PCR fragility using Secure Boot

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, and PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point if you reboot your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
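A hedged sketch of that layering (the directory names are illustrative): because the kernel unpacks each archive in turn, files from the second archive overwrite same-named files from the first.

  (cd base    && find . | cpio -o -H newc) | gzip >  initramfs.img
  (cd overlay && find . | cpio -o -H newc) | gzip >> initramfs.img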

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There's obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.

18 July, 2017 06:48AM

Norbert Preining

Calibre and rar support

Thanks to cooperation with the upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0, and has adopted a more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.

RAR support is necessary when an eBook uses rar as compression, which happens quite often with comic books (cbr extension). Calibre 3 has split rar support out into a dynamically loaded module, so what needs to be done is to package it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch fixing this to the maintainer (see bug 720051), but there has been no reaction so far.

Thus, I am publishing updated unrar packages that also ship libunrar5, plus the unrardll Python package, in my calibre repository. After installing python-unrardll, Calibre will happily import meta-data from rar-compressed eBooks, as well as display them.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13

Enjoy

18 July, 2017 01:33AM by Norbert Preining

July 17, 2017

Jonathan McDowell

Just because you can, doesn't mean you should

There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the 3 speakers concentrated more on why you might want to. As seems to be the way these days, I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high-level overview of some of the issues around the Investigatory Powers Act - if you’re familiar with it then I’m not adding anything new here, just trying to provide some sort of detail about why it’s a bad thing from both a technological and a legal perspective.

Download

17 July, 2017 06:41PM

Steinar H. Gunderson

Solskogen 2017: Nageru all the things

Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit. It really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)

17 July, 2017 03:45PM

July 16, 2017

Jose M. Calhariz

Crossgrading a complex Desktop and Debian Developer machine running Debian 9

This article is an experiment in progress; please recheck, as I am updating it with new information.

I have a very old installation of Debian, possibly dating back to version 2 (I do not remember), that I have upgraded since then, both in software and hardware. Now the hardware is 64-bit and runs a 64-bit kernel, but the runtime is still 32-bit. For 99% of tasks this is very good. After running many simulations, I may have found a way to crossgrade my desktop. I write down the tentative procedure here and will update it with more ideas on the problems that I may find.

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do it.

Second, you need to bootstrap the crossgrade and install all the libraries as amd64:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --list > original.dpkg
 for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
   echo $pack32 ; 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
   fi ; 
 done
 cd /var/cache/apt/archives/
 dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
 dpkg --configure --pending
 dpkg -i --skip-same-version dpkg_1.18.24_amd64.deb apt_1.4.6_amd64.deb bash_4.4-5_amd64.deb dash_0.5.8-2.4_amd64.deb mawk_1.3.3-17+b3_amd64.deb *.deb

 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

But this step does not prevent apt-get install from ending up with broken dependencies. So instead of installing only the libraries with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get clean
apt-get upgrade
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
dpkg --list > original.dpkg
for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
  fi ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb

dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

16 July, 2017 04:49PM by Jose M. Calhariz

Vasudev Kamath

Overriding version information from setup.py with pbr

I recently raised a pull request on zfec to convert its Python packaging from a pure setup.py to a pbr-based one. Today I got a review from Brian Warner, and one of the issues mentioned was that python setup.py --version does not give the same output as the previous version of setup.py.

The previous version used versioneer, which extracts the version information it needs from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, the style of the version, the tag prefix (for the VCS), etc. pbr also extracts version information from git tags, but it expects tags of the form refs/tags/x.y.z, whereas zfec prefixed its tags with zfec- (for example zfec-1.4.24), which pbr does not process. The end result: I get a version of the form 0.0.devNN, where NN is the number of commits in the repository since its inception.

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it deduces automatically, but there is none other than putting the version string in the PBR_VERSION environment variable. That documentation was contributed by me to the pbr project 3 years back.

So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION.

import os
import versioneer

os.environ['PBR_VERSION'] = versioneer.get_version()

...
setup(
    setup_requires=['pbr'],
    pbr=True,
    ext_modules=extensions
)

And I added the snippet below to setup.cfg, which is how versioneer is configured with various information, including tag prefixes.

[versioneer]
VCS = git
style = pep440
versionfile_source = zfec/_version.py
versionfile_build = zfec/_version.py
tag_prefix = zfec-
parentdir_prefix = zfec-

Though this workaround gets the job done, it does not feel right to set an environment variable to change the logic of another part of the same program. If you know of a better way, do let me know! I should probably also consider filing a feature request against pbr to provide a way to pass a tag prefix for the version calculation logic.

16 July, 2017 03:23PM by copyninja

Lior Kaplan

PDO_IBM: tracking changes publicly

As part of my work at Zend (now a RogueWave company), I maintain various patch sets. One of those is the set of changes for the PDO_IBM extension for PHP.

After some patch exchanges I decided it would be easier to manage the whole process in a public git repository, and maybe gain some more review / feedback along the way. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches

Another aspect of this is having the IBMi-specific patches from YIPS (Young i Professionals) at http://www.youngiprofessionals.com/wiki/index.php/XMLService/PHP, which are themselves patches on top of the vanilla releases. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches-for-yips

Keeping track of these changes is easier using git's ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily and make sure the changes can be ported back and forth between the vanilla and IBMi versions of the extension.
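A sketch of that flow (the remote, tag and branch names here are illustrative, not the repository's actual ones):

  git fetch upstream                       # pull in the new vanilla release
  git rebase new-release-tag zend-patches  # replay the patch set on top of it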


Filed under: PHP

16 July, 2017 01:13PM by Kaplan

July 15, 2017

Joey Hess

Functional Reactive Propellor

I wrote this code, and it made me super happy!

data Variety = Installer | Target
    deriving (Eq)

seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )

This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name.

The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted.

I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place.

The basic idea is that propellor first reverts all the revertible properties for other versions. Then it ensures the property for the current version.

Another use for it would be if you wanted to be able to roll back changes to a Host. For example:

foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver ( (>= 3)   --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)

Versioned properties can also be defined:

foobar :: Versioned Int (RevertableProperty DebianLike DebianLike)
foobar ver =
    ver (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
        <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
        )

Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again!

API documentation for this feature

PS: Not really FRP, probably. But time-varying in a FRP-like way.


Development of this was sponsored by Jake Vosloo on Patreon.

15 July, 2017 09:43PM

Dirk Eddelbuettel

Rcpp 0.12.12: Rounding some corners

The twelfth update in the 0.12.* series of Rcpp landed on CRAN this morning, following two days of testing at CRAN preceded by five full reverse-depends checks we did (and which are always logged in this GitHub repo). The Debian package has been built and uploaded; Windows and macOS binaries should follow at CRAN as usual. This 0.12.12 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, the 0.12.10 release in March, and the 0.12.11 release in May, making it the sixteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1097 packages (and hence 71 more since the last release in May) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a fairly large number of small and focused pull requests, most of which either correct corner cases or improve other aspects. JJ tirelessly improved the package registration added in the previous release, following R 3.4.0. Kirill tidied up a number of small issues, allowing us to run compilation in even more verbose modes---usually a good thing. Jeroen, Elias Pipping and Yo Gong all contributed as well, and we thank everybody for their contributions.

All changes are listed below in some detail.

Changes in Rcpp version 0.12.12 (2017-07-13)

  • Changes in Rcpp API:

    • The tinyformat.h header now ends in a newline (#701).

    • Fixed rare protection error that occurred when fetching stack traces during the construction of an Rcpp exception (Kirill Müller in #706).

    • Compilation is now also possible on Haiku-OS (Yo Gong in #708 addressing #707).

    • Dimension attributes are explicitly cast to int (Kirill Müller in #715).

    • Unused arguments are no longer declared (Kirill Müller in #716).

    • Visibility of exported functions is now supported via the R macro attribute_visible (Jeroen Ooms in #720).

    • The no_init() constructor accepts R_xlen_t (Kirill Müller in #730).

    • Loop unrolling used R_xlen_t (Kirill Müller in #731).

    • Two unused-variables warnings are now avoided (Jeff Pollock in #732).

  • Changes in Rcpp Attributes:

    • Execute tools::package_native_routine_registration_skeleton within package rather than current working directory (JJ in #697).

    • The R portion no longer uses dir.exists to no require R 3.2.0 or newer (Elias Pipping in #698).

    • Fix native registration for exports with name attribute (JJ in #703 addressing #702).

    • Automatically register init functions for Rcpp Modules (JJ in #705 addressing #704).

    • Add Shield around parameters in Rcpp::interfaces (JJ in #713 addressing #712).

    • Replace dot (".") with underscore ("_") in package names when generating native routine registrations (JJ in #722 addressing #721).

    • Generate C++ native routines with underscore ("_") prefix to avoid exporting when standard exportPattern is used in NAMESPACE (JJ in #725 addressing #723).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 July, 2017 05:09PM

Junichi Uekawa

revisiting libjson-spirit.

revisiting libjson-spirit. I tried compiling a program that uses libjson-spirit and noticed that it is still broken. New programs compiled against the header do not link with the provided static library. Trying to rebuild it fixes that, but it uses compat version 8, and that needs to be fixed (trivially). hmm... actually the code doesn't build anymore and there are multiple new upstream versions. ... and then I noticed that it was a stale copy already removed from the Debian repository. What's a good C++ JSON implementation these days?

15 July, 2017 12:18PM by Junichi Uekawa

Vasudev Kamath

debcargo: Replacing subprocess crate with git2 crate

In my previous post I talked about using the subprocess crate to extract the beginning and ending years from a git repository for generating the debian/copyright file. In this post I'm going to talk about how I replaced subprocess with the native git2 crate and achieved the same result in a much cleaner and safer way.

git2 is a native Rust crate which provides access to Git repository internals. git2 does not involve any unsafe invocations, as it is built on libgit2-sys, which uses Rust FFI to bind directly to the underlying libgit2 library. Below is the new copyright_fromgit function using the git2 crate.

// Imports the function below relies on (added here for completeness;
// `Result` is debcargo's own error-type alias):
use std::cmp::Ordering;
use chrono::{DateTime, Datelike, NaiveDateTime, Utc};
use git2::Repository;
use tempdir::TempDir;

fn copyright_fromgit(repo_url: &str) -> Result<String> {
   let tempdir = TempDir::new_in(".", "debcargo")?;
   let repo = Repository::clone(repo_url, tempdir.path())?;

   let mut revwalker = repo.revwalk()?;
   revwalker.push_head()?;

   // Get the latest and first commit ids. This is a bit ugly
   let latest_id = revwalker.next().unwrap()?;
   let first_id = revwalker.last().unwrap()?; // last() consumes the revwalker

   let first_commit = repo.find_commit(first_id)?;
   let latest_commit = repo.find_commit(latest_id)?;

   let first_year =
       DateTime::<Utc>::from_utc(
               NaiveDateTime::from_timestamp(first_commit.time().seconds(), 0),
               Utc).year();

   let latest_year =
       DateTime::<Utc>::from_utc(
             NaiveDateTime::from_timestamp(latest_commit.time().seconds(), 0),
             Utc).year();

   let notice = match first_year.cmp(&latest_year) {
       Ordering::Equal => format!("{}", first_year),
       _ => format!("{}-{},", first_year, latest_year),
   };

   Ok(notice)
}
So here is what I'm doing:
  1. Use git2::Repository::clone to clone the given URL. We are thus avoiding exec of git clone command.
  2. Get a revision walker object. git2::RevWalk implements Iterator trait and allows walking through the git history. This is what we are using to avoid exec of git log command.
  3. revwalker.push_head() is important because we want to tell the revwalker from where to start walking the history. In this case we ask it to walk from the repository HEAD. Without this line, the next step will not work. (Learned that the hard way :-).)
  4. Then we extract the git2::Oid, which is roughly the equivalent of a commit hash and can be used to look up a particular commit. We take the latest commit hash using RevWalk::next and the first commit using RevWalk::last; note the order - RevWalk::last consumes the revwalker, so doing it the other way around would make the borrow checker unhappy :-). This replaces the exec of the head -n1 command.
  5. Look up the git2::Commit objects using git2::Repository::find_commit
  6. Then convert the git2::Time to chrono::DateTime and take out the years.

After this change I found an obvious error which had gone unnoticed in the previous version: the case where there is no repository key in Cargo.toml. With no repo URL, the git clone exec did not error out, and our shell commands happily extracted the years from the debcargo repository itself! Since I was testing the code from within the debcargo repository, it never failed; when I executed it from a non-git folder, git did throw an error, but from git log, not git clone. With git2 the mistake was spotted right away, because it threw an error telling me I had given it an empty URL.

When it comes to performance, I see that debcargo is faster than the previous version. This makes sense, because previously it was doing 5 fork-and-exec system calls, and now those are avoided.

15 July, 2017 10:05AM by copyninja

July 14, 2017

Chris Lamb

Installation birthday

Fancy receiving congratulations on the anniversary of when you installed your system?

Installing the installation-birthday package on your Debian machines will celebrate each birthday of your system by automatically sending a message to the local system administrator.

The installation date is based on the system installation time, not on when the package itself was installed.

installation-birthday is available in Debian 9 ("stretch") via the stretch-backports repository, as well as in the testing and unstable distributions:

$ apt install installation-birthday

Enjoy, and patches welcome. :)

14 July, 2017 10:08AM

Norbert Preining

TeX Live contrib repository (re)new(ed)

It is my pleasure to announce the renewal/rework/restart of the TeX Live contrib repository service. The repository collects packages that cannot enter TeX Live directly (mostly due to license reasons) but are free to distribute. The basic idea is to provide a repository mimicking Debian’s non-free branch.

The packages on this site are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that cannot be in TeX Live but can still legally be distributed over the Internet can have a place on TLContrib.

Currently there are 52 packages in the repository, falling roughly into the following categories:

  • nosell fonts – fonts and macros with nosell licenses, e.g., garamond, garamondx, etc. These fonts are mostly the ones that are also available via getnonfreefonts.
  • nosell packages – packages with nosell licenses, e.g., acmtrans.
  • nonfree support packages – packages that require non-free tools or fonts, e.g., acrotex, lucida-otf, verdana, etc.

The full list of packages can be seen here.

The ultimate goal is to provide a companion to the core TeX Live (tlnet) distribution in much the same way as Debian‘s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. TLContrib is simply trying to fill in a gap in the current distribution system.

Quick Start

If you are running the current version of TeX Live, which is 2017 at the moment, the following code will suffice:

  tlmgr repository add http://contrib.texlive.info/current tlcontrib
  tlmgr pinning add tlcontrib '*'

In the future there might be releases for certain years.

Verification

The packages are signed with my GnuPG RSA key: 0x6CACA448860CDC13. tlmgr will automatically verify authenticity if you add my key:

  curl -fsSL https://www.preining.info/rsa.asc | tlmgr key add -

After that, tlmgr will warn you if authentication of this repository fails.

History

Taco Hoekwater started TLContrib in 2010, but it hasn’t seen much activity in recent years. Taco agreed to hand it over to me, and I am currently maintaining the repository. Big thanks to Taco for his long support and cooperation.

In contrast to the original tlcontrib page, we don’t offer an automatic upload of packages or user registration. If you want to add packages here, see below.

Adding packages

If you want to see a package included here that cannot enter TeX Live proper, the following ways are possible (from most appreciated to least appreciated):

  • clone the tlcontrib git repo (see below), add the package, and publish the branch where I can pull from it. Then send me an email with the URL and explanation about free distributability/license;
  • send me the package in TDS format, with explanation about free distributability/license;
  • send me the package as distributed by the author (or on CTAN), with explanation about free distributability/license;
  • send me a link to the package, with an explanation about free distributability/license.

Git repository

The packages are kept in a git repository, and the tlmgr repository is rebuilt from it after changes. The location is https://git.texlive.info/tlcontrib.

Enjoy.

14 July, 2017 12:34AM by Norbert Preining

July 13, 2017

Jose M. Calhariz

Crossgrading a more typical server in Debian9

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do it.

Second, you need to bootstrap the crossgrade:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

Third, do a crossgrade of the libraries:

 dpkg --list > original.dpkg
 apt-get --fix-broken --allow-remove-essential install
 for pack32 in $(grep :i386 original.dpkg | awk '{print $2}' ) ; do 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get install --yes --allow-remove-essential ${pack32%:i386} ; 
   fi ; 
 done

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

13 July, 2017 05:32PM by Jose M. Calhariz

Lars Wirzenius

Adding Yakking to Planet Debian

In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.)

Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:

Welcome to Yakking.

This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.

If anyone objects, please let me know.

13 July, 2017 10:07AM

Vincent Fourmond

Screen that hurts my eyes, take 2

Six months ago, I wrote a lengthy post about my new computer hurting my eyes. I haven't made any progress with that, but I accidentally upgraded my work computer from kernel 4.4 to 4.8 and the nvidia drivers from 340.96-4 to 375.66-2. Well, my work computer now hurts too; I've switched back to the previous kernel and drivers, and I hope it'll be back to normal.

Any ideas of something specific that changed, either between 4.4 and 4.8 (kernel startup code, default framebuffer modes, etc.?) or between the 340.96 and the 375.66 drivers? In any case, I'll try that specific combination of kernel and drivers at home to see if I can get it into a usable state.

13 July, 2017 08:03AM by Vincent Fourmond (noreply@blogger.com)

Lucy Wayland

Basic Chilli Template

Amounts are to taste:
[Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When it is getting translucent, add the rest of the ingredients. May need to add some more oil. When the garlic is browning, move on to stage two.]
[Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around the tomato tins)
Water to keep topping up with if it get too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I currently fond of using Ryvita as dipper instead of tortilla chips.]
[Also serve with a “cooler” such as natural yogurt, soured cream or something else.]

You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.

BTW – if you get the chance, of all the tortilla chips, I think blue corn ones are the best. Only seem to find them in health food shops.

There you go. It’s a “Zen” recipe, which is why I couldn’t give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.


13 July, 2017 01:59AM by aardvarkoffnord

July 12, 2017

Jose M. Calhariz

Crossgrading a minimal install of Debian 9

While testing the previous instructions for a full crossgrade, I ran into trouble. Here are the results of my tests of a full crossgrade of a minimal installation of Debian inside a VM.

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do it.

Second, you need to bootstrap the crossgrade:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures
 apt-get --fix-broken --allow-remove-essential install

Third, do a full crossgrade:

 apt-get install --allow-remove-essential $(dpkg --list | grep :i386 | awk '{print $2}' | sed -e s/:i386// )

This procedure seems to be a little fragile, but it worked most of the time for me.

12 July, 2017 10:46PM by Jose M. Calhariz

Crossgrading the kernel in Debian 9

I have a very old installation of 32-bit Debian running on new hardware. Until now, running a 64-bit kernel was enough to use more than 4GiB of RAM efficiently. The only problems I found were the proprietary drivers from AMD/ATI and NVIDIA, which did not like this mixed environment, and some problems with openafs, easily solved with the help of the openafs package maintainers. Crossgrading Qemu/KVM to 64-bit did not pose a problem, so I have been running 64-bit VMs for some time.

But now the nouveau driver does not work with my new display adapter, and I need to run tools from OpsCode that are not available as 32-bit builds. So it is time to do a crossgrade. Having run into some problems, I cannot recommend this to inexperienced people. It is time to investigate the issues and file bug reports with Debian where appropriate.

If you run a 32-bit Debian installation, you can easily install a 64-bit kernel. The procedure is simple and well tested.

dpkg --add-architecture amd64
apt-get update
apt-get install linux-image-amd64:amd64

And reboot to test the new kernel.

You can expect here more articles about crossgrading.

12 July, 2017 08:26PM by Jose M. Calhariz

Sven Hoexter

environment variable names with dots and busybox 1.26.0 ash

In case you're using, for example, Alpine Linux 3.6 based docker images and you've been passing through environment variable names with dots, you might now miss them in your actual environment. It seems that with busybox 1.26.0 the busybox ash got a lot stricter regarding the validation of environment variable names, and you can no longer pass through variable names with dots in them. They just won't be there. If you run ash interactively you could never add them, but until now you could do something like this in your Dockerfile

ENV foo.bar=baz

and later on access a variable "foo.bar".

bash still allows those invalid variable names and is way more tolerant. So to be nice to your devs, and still bump your docker image version, you can add bash and ensure you're starting your application with /bin/bash instead of /bin/sh inside your container.
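A quick way to see the difference, as a sketch assuming docker and the stock alpine:3.6 and debian:stretch images:

# busybox ash (Alpine 3.6) drops the invalid name on import, so the
# child env(1) never sees it -- this prints nothing:
docker run --rm -e foo.bar=baz alpine:3.6 /bin/sh -c 'env | grep ^foo'

# bash keeps the entry in the environment it passes on -- this
# prints foo.bar=baz:
docker run --rm -e foo.bar=baz debian:stretch /bin/bash -c 'env | grep ^foo'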

Links

12 July, 2017 03:38PM

Reproducible builds folks

Reproducible Builds: week 115 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 2 and Saturday July 8 2017:

Reproducible work in other projects

Ed Maste pointed to a thread on the LLVM developer mailing list about container iteration being the main source of non-determinism in LLVM, together with discussion on how to solve this. Ignoring build path issues, container iteration order was also the main issue with rustc, which was fixed by using a fixed-order hash map for certain compiler structures. (It was unclear from the thread whether LLVM's builds are truly path-independent or rather that they haven't done comparisons between builds run under different paths.)

Bugs filed

Patches submitted upstream:

Reviews of unreproducible packages

52 package reviews have been added, 62 have been updated and 20 have been removed in this week, adding to our knowledge about identified issues.

No issue types were updated or added this week.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (143)
  • Andreas Beckmann (1)
  • Dmitry Shachnev (1)
  • Lucas Nussbaum (3)
  • Niko Tyni (3)
  • Scott Kitterman (1)
  • Sean Whitton (1)

diffoscope development

Development continued in git with contributions from:

  • Ximin Luo:
    • Add a PartialString class to help with lazily-loaded output formats such as html-dir.
    • html and html-dir output:
      • add a size-hint to the diff headers and lazy-load buttons
      • add new limit flags and deprecate old ones
    • html-dir output
      • split index pages up if they get too big
      • put css/icon data in separate files to avoid duplication
    • main: warn if loading a diff but also giving diff-calculation flags
    • Test fixes for Python 3.6 and CI environments without imagemagick (#865625).
    • Fix a performance regression (#865660) involving the Wagner-Fischer algorithm for calculating levenshtein distance.

With these changes, we are able to generate a dynamically loaded HTML diff for GCC-6 that can be displayed in a normal web browser. For more details see this mailing list post.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

12 July, 2017 01:22PM

hackergotchi for Francois Marier

Francois Marier

Toggling Between Pulseaudio Outputs when Docking a Laptop

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
    name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
    driver: <module-alsa-card.c>
...
    ports:
        analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
            properties:

        analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
            properties:
                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.
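Presumably the reverse direction can be handled the same way. Assuming the unplug event mirrors the plug one (worth confirming with acpi_listen), a rule like this in /etc/acpi/events/thinkpad-undock-headphones would switch back to the speakers:

event=jack/headphone HEADPHONE unplug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker"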

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb  > docked 
lsusb  > undocked 
colordiff -u docked undocked 

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked  2017-07-07 19:10:51.875405241 -0700
+++ undocked    2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp. 
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo 
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo 
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo 
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash

if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

12 July, 2017 05:07AM

July 11, 2017

hackergotchi for Joey Hess

Joey Hess

bonus project

Little bonus project after the solar upgrade was replacing the battery box's rotted roof, down to the cinderblock walls.

Except for a piece of plywood, used all scrap lumber for this project, and also scavenged a great set of hinges from a discarded cabinet. I hope the paint on all sides and an inch of shingle overhang will be enough to protect the plywood.

Bonus bonus project to use up paint. (Argh, now I want to increase the size of the overflowing grape arbor. Once you start on this kind of stuff..)

After finishing all that, it was time to think about this while enjoying this.

(Followed by taking delivery of a dumptruck full of gravel -- 23 tons -- which it turns out was enough for only half of my driveway..)

11 July, 2017 08:29PM

Andreas Bombe

PDP-8/e Replicated — Overview

This is an overview of the hardware and internals of the PDP-8/e replica I’m building.

The front panel board

functional replica of the PDP-8/e front panel

If you know the original or remember the picture from the first post it is clear that this is a functional replica not aiming to be as pretty as those of the other projects I mentioned. I have reordered the switches into two rows to make the board more compact (which also means cheaper) without sacrificing usability.

There are the two rows of display lights, plus the one run light, that the 8/e provides. The upper row is the address, made up of 12 bits of memory address and 3 bits of extended memory address or field. Below it is the 12-bit indicator row, which can show one data set out of six as selected by the user.

All the switches of the original are implemented as more compact buttons1. While momentary switches are easily substituted by buttons, all buttons implementing two-position switches toggle on/off with each press, and they have an LED above that shows the current state. The six-position rotary switch is implemented as a button cycling through all indicator displays, together with six LEDs which show the active selection.

Markings show the meaning of the indicator and switches as on the original, grouped in threes as the predominant numbering system for the PDPs was octal. The upper line shows the meaning for the state indicator, the middle for the status indicator and bit numbers for the rest. Note that on the PDP-8 and opposite to modern conventions, the most significant bit was numbered 0.

I designed it as a pure front panel board without any PDP-8 simulation parts. The buttons and associated lights are controllable via SPI lines with a 3.3 V supply. The address and indicator lights have a separate common anode configuration with all cathodes individually available on a pin header without any resistors in the path, leaving voltage and current regulation up to the simulation board. This board is actually a few years old from a similar project where I emulated the PDP-8 in software on a microcontroller and the flexible design allowed me to reuse it unchanged.

The main board

main board with CPU and peripherals of the replicated PDP-8/e

This is where the magic happens. You can see three big ICs on the board: On the left is the STM32F405 microcontroller (with ARM Cortex-M4 core), the bigger one in the middle is the Altera2 MAX 10 FPGA and finally to the right is the SRAM that is large enough to hold all the main memory of the 32 KW maximum expansion of the PDP-8/e. The two smaller chips to the right of that are just buffers that drive the front panel address LEDs, the small chip at the top left is a RS-232 level shifter.

The idea behind this is that the PDP-8 and peripherals that are simple to implement directly, such as GPIO or a serial port, are fully on the FPGA. Other peripherals such as paper and magnetic tape and disks, which are after all not connected to real PDP-8 drives but to disk images on a microSD, are implemented on the microcontroller interfacing with stub devices in the FPGA. Compared to implementing everything in the FPGA, the STM32F4 has the advantage of useful built-in peripherals such as two host/device capable USB ports. 5 V tolerant I/O pins are very useful and simply not available in any FPGA.

I have to admit that this board was a bit of a rush job in order to have something at all to show at the Vintage Computer Festival Europe 18.0. Given that it was my first time designing a board with a large microcontroller and the first time with an FPGA, it wasn’t exactly my fastest progressing project either and I got basic functionality (front panel allows toggling in small programs and running them) working just in time. For various reasons the project hasn’t progressed much since, so the following is still just plans, but plans for which the hardware was designed.

Since the aim is to have a cycle accurate PDP-8/e implementation, digital I/O was always planned. Rather than defining my own header I have included Arduino R3 compatible headers (for 3.3 V compatible boards only) that have become popular even outside the Arduino world for this purpose. The digital Arduino pins are connected directly to the FPGA and will be directly controllable by PDP-8 software. The downside of choosing the Arduino headers is that the original PDP-8 digital I/O interface is not a perfect match, since it naturally has 12 lines whereas the Arduino has 15. The analog inputs are not connected to the FPGA, as the documentation of the MAX10’s ADC in the EQFP package is not very encouraging. They are connected to the STM32 instead3.

Another interface connected directly to the FPGA and that would be directly under PDP-8 control is a standard 9 pin RS-232 interface. RX, TX, CTS and RTS are connected and level-shifted between 3.3 V and RS-232 levels by a MAX3232.

Besides the PDP-8, I also plan to implement a full video terminal on the board. The idea is that with a power supply, keyboard and monitor this board would form a complete system without the need of connecting another computer to act as a terminal. To that end, there is a VGA port attached to the FPGA with simple resistor network DACs for 9 bits color (RGB with 3 bits each). This is another spot where I left myself room to expand, for e.g. a VT220 you really only need one color in two brightness levels. Keyboards will be connected either via the PS/2 connector on the right or the USB-A host port at the top left.

Last of the interface ports is the USB micro-AB port on the left, which for now I am using only for power supply. I mainly plan to use it to provide alternative or additional serial ports to the PDP-8 or to export the video terminal serial port for testing purposes. Other possible uses are access to the image files on the microSD and firmware updates.

This has gotten rather long again, so I’m stopping here and leave some implementation notes for another post.


  1. They are also much cheaper. Given the large number of switches, the savings are substantial. Additionally, the buttons are nicer to operate than long rows of tiny switches. [return]
  2. Or rather Intel now. At least Altera’s web site, documentation and software have already been thoroughly rebranded, but the chips I got were produced prior to that. [return]
  3. That’s not to say that the analog conversions on the STM32 are necessarily better than those of the MAX10 when you can’t follow their guidelines; I have no comparisons. Certainly following the guidelines would have been prohibitive given how many pins’ usage they restrict. [return]

11 July, 2017 06:02PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor, and another silver sponsor is in the process of joining.

The security tracker currently lists 49 packages with a known CVE and the dla-needed.txt file 54. The number of open issues is close to last month's.

Thanks to our sponsors

New sponsors are in bold.


11 July, 2017 02:49PM by Raphaël Hertzog

July 10, 2017

hackergotchi for Steve Kemp

Steve Kemp

bind9 considered harmful

Recently there was another bind9 security update released by the Debian Security Team. I thought that was odd, so I've scanned my mailbox:

  • 11 January 2017
    • DSA-3758 - bind9
  • 26 February 2017
    • DSA-3795-1 - bind9
  • 14 May 2017
    • DSA-3854-1 - bind9
  • 8 July 2017
    • DSA-3904-1 - bind9

So in the year to date there have been 7 months; in 3 of them nothing happened, but in 4 of them we had bind9 updates. If these trends continue we'll have another 2.5 updates before the end of the year.

I don't run a nameserver. The only reason I have bind-packages on my system is for the dig utility.

Rewriting a compatible version of dig in Perl should be trivial, thanks to the Net::DNS::Resolver module:

These are about the only commands I ever run:

dig -t a    steve.fi +short
dig -t aaaa steve.fi +short
dig -t a    steve.fi @8.8.8.8

I should do that then. Yes.
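A first sketch of what that could look like (untested; the option handling is mine and covers only the invocations above):

#!/usr/bin/perl
# mini-dig: hypothetical skeleton of a dig replacement using Net::DNS.
use strict;
use warnings;
use Net::DNS;

my ( $type, $server, $name ) = ( 'A', undef, undef );
while ( defined( my $arg = shift @ARGV ) ) {
    if    ( $arg eq '-t' )      { $type   = uc shift @ARGV; }
    elsif ( $arg =~ /^\@(.+)/ ) { $server = $1; }
    elsif ( $arg ne '+short' )  { $name   = $arg; }   # output is always short
}
die "usage: $0 [-t TYPE] name [\@server] [+short]\n" unless defined $name;

my $res = Net::DNS::Resolver->new;
$res->nameservers($server) if defined $server;

my $reply = $res->query( $name, $type )
    or die 'query failed: ' . $res->errorstring . "\n";
for my $rr ( $reply->answer ) {
    print $rr->rdstring, "\n" if $rr->type eq $type;
}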

10 July, 2017 09:00PM

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in June 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java + Android

Debian LTS

This was my sixteenth month as a paid contributor and I have been paid to work 16 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I triaged mp3splt and putty and marked CVE-2017-5666 and CVE-2017-6542 as no-dsa because the impact was very low.
  • DLA-975-1. I uploaded the security update for wordpress which I prepared last month fixing 6 CVE.
  • DLA-986-1. Issued a security update for zookeeper fixing 1 CVE.
  • DLA-989-1. Issued a security update for jython fixing 1 CVE.
  • DLA-996-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1002-1. Issued a security update for smb4k fixing 1 CVE.
  • DLA-1013-1. Issued a security update for graphite2 fixing 8 CVE.
  • DLA-1020-1. Issued a security update for jetty fixing 1 CVE.
  • DLA-1021-1. Issued a security update for jetty8 fixing 1 CVE.

Misc

  • I updated wbar, fixed #829981 and uploaded mediathekview and osmo to unstable. For the Buster release cycle I decided to package the fork of xarchiver‘s master branch, which receives regular updates and bug fixes. Besides it now being a GTK-3 application, a lot of older bugs could be fixed.

Thanks for reading and see you next time.

10 July, 2017 08:16PM by Apo

hackergotchi for Jonathan McDowell

Jonathan McDowell

Going to DebConf 17

Going to DebConf17

Completely forgot to mention this earlier in the year, but delighted to say that in just under 4 weeks I’ll be attending DebConf 17 in Montréal. Looking forward to seeing a bunch of fine folk there!

Outbound:

2017-08-04 11:40 DUB -> 13:40 KEF WW853
2017-08-04 15:25 KEF -> 17:00 YUL WW251

Inbound:

2017-08-12 19:50 YUL -> 05:00 KEF WW252
2017-08-13 06:20 KEF -> 09:50 DUB WW852

(Image created using GIMP, fonts-dkg-handwriting and the DebConf17 Artwork.)

10 July, 2017 05:54PM

hackergotchi for Kees Cook

Kees Cook

security things in Linux v4.12

Previously: v4.11.

Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture code was functionally very similar to each other, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.
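For those who want to poke at it, the knobs are the mmap_rnd_bits sysctls. A sketch (the compat variant only exists on architectures with compat syscalls, and the valid range is architecture-specific):

sysctl vm.mmap_rnd_bits vm.mmap_rnd_compat_bits   # current entropy, in bits
sysctl -w vm.mmap_rnd_bits=32                     # e.g. the x86-64 maximum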

LSM structures read-only
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.

Expand stack canary to 64 bits on 64-bit systems
The stack canary values used by CONFIG_CC_STACKPROTECTOR are most powerful on x86 since they are different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.

Expanded stack/heap gap
Hugh Dickins, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce “stack probes” when doing large stack expansions. Any Variable Length Arrays on the stack or alloca() usage needs to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.

That’s it for now; please let me know if I missed anything. The v4.13 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

10 July, 2017 08:24AM by kees

July 09, 2017

Niels Thykier

Approaching the exclusive “sub-minute” build time club

For the first time in at least two years (and probably even longer), debhelper with the 10.6.2 upload broke the 1 minute milestone for build time (by a mere 2 seconds – look for “Build needed 00:00:58, […]”).  Sadly, the result is not deterministic and the 10.6.3 upload needed 1m + 5s to complete on the buildds.

This is not the result of any optimizations I have done in debhelper itself.  Instead, it is the result of “questionable use of developer time” for the sake of meeting an arbitrary milestone. Basically, I made it possible to parallelize more of the debhelper build (10.6.1) and finally made it possible to run the tests in parallel (10.6.2).

In 10.6.2, I also made most of the tests run against all relevant compat levels.  Previously, the suite would only run the tests against one compat level (either the current one or a hard-coded older version).

Testing more than one compat level turned out to be fairly simple given a proper test library (I wrote a “Test::DH” module for the occasion).  Below is an example: the new test case that I wrote for Debian bug #866570.

$ cat t/dh_install/03-866570-dont-install-from-host.t
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

use File::Basename qw(dirname);
use lib dirname(dirname(__FILE__));
use Test::DH;
use File::Path qw(remove_tree make_path);
use Debian::Debhelper::Dh_Lib qw(!dirname);

plan(tests => 1);

each_compat_subtest {
  my ($compat) = @_;
  # #866570 - leading slashes must *not* pull things from the root FS.
  make_path('bin');
  create_empty_file('bin/grep-i-licious');
  ok(run_dh_tool('dh_install', '/bin/grep*'));
  ok(-e "debian/debhelper/bin/grep-i-licious", "#866570 [${compat}]");
  ok(!-e "debian/debhelper/bin/grep", "#866570 [${compat}]");
  remove_tree('debian/debhelper', 'debian/tmp');
};

I have cheated a bit on the implementation; while the test runs in a temporary directory, the directory is reused between compat levels (accordingly, there is a “clean up” step at the end of the test).

If you want debhelper to maintain this exclusive (and somewhat arbitrary) property (deterministically), you are more than welcome to help me improve the Makefile. 🙂  I am not sure I can squeeze any more out of it with my (lack of) GNU make skills.
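For the curious, the main trick is delegating the parallelism to prove(1). A hypothetical fragment (not the actual debhelper Makefile) might look like:

# Hypothetical Makefile fragment: let prove(1) fan the test scripts out
# over multiple jobs.
TEST_JOBS ?= $(shell nproc)

check:
	prove --jobs $(TEST_JOBS) --recurse t/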


Filed under: Debhelper, Debian

09 July, 2017 06:41PM by Niels Thykier

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru 1.6.1 released

I've released version 1.6.1 of Nageru, my live video mixer.

Now that Solskogen is coming up, there's been a lot of activity on the Nageru front, but hopefully everything is actually coming together now. Testing has been good, but we'll see whether it stands up to the battle-hardening of the real world or not. Hopefully I won't be needing any last-minute patches. :-)

Besides the previously promised Prometheus metrics (1.6.1 ships with a rather extensive set, as well as an example Grafana dashboard) and frame queue management improvements, a surprising late addition was that of a new transcoder called Kaeru (following the naming style of Nageru itself, from the Japanese verb kaeru (換える), which means roughly to replace or exchange—iKnow! claims it can also mean “convert”, but I haven't seen support for this anywhere else).

Normally, when I do streams, I just let Nageru do its thing and send out a single 720p60 stream (occasionally 1080p), usually around 5 Mbit/sec; less than that doesn't really give good enough quality for the high-movement scenarios I'm after. But Solskogen is different in that there's a pretty diverse audience when it comes to networking conditions; even though I have a few mirrors spread around the world (and some JavaScript to automatically pick the fastest one; DNS round-robin is really quite useless here!), not all viewers can sustain such a bitrate. Thus, there's also a 480p variant with around 1 Mbit/sec or so, and it needs to come from somewhere.

Traditionally, I've been using VLC for this, but streaming is really a niche thing for VLC. I've been told it will be an increased focus for 4.0 now that 3.0 is getting out the door, but over the last few years, there's been a constant trickle of little issues that have been breaking my transcoding pipeline. My solution for this was to simply never update VLC, but now that I'm up to stretch, this didn't really work anymore, and I'd been toying around with the idea of making a standalone transcoder for a while. (You'd ask “why not the ffmpeg(1) command-line client?”, but it's a bit too centered around files and not streams; I use it for converting to HLS for iOS devices, but it has a nasty habit of I/O blocking real work, and its HTTP server really isn't meant for production work. I could survive the latter if it supported Metacube and I could feed it into Cubemap, but it doesn't.)
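(For reference, the kind of ffmpeg HLS invocation I mean is roughly the generic sketch below; the input URL and encoder settings are placeholders, not my actual command. The conversion itself is the easy part; the pain is in the I/O and serving.)

ffmpeg -i http://localhost:9095/stream.ts \
    -c:v libx264 -preset veryfast -b:v 1M -c:a aac -b:a 128k \
    -f hls -hls_time 10 -hls_list_size 6 stream.m3u8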

It turned out Nageru had already grown most of the pieces I needed; it had video decoding through FFmpeg, x264 encoding with speed control (so that it automatically picks the best preset the machine can sustain at any given time) and muxing, audio encoding, proper threading everywhere, and a usable HTTP server that could output Metacube. All that was required was to add audio decoding to the FFmpeg input, and then replace the GPU-based mixer and GUI with a very simple driver that just connects the decoders to the encoders. (This means it runs fine on a headless server with no GPU, but it also means you'll get FFmpeg's scaling, which isn't as pretty or fast as Nageru's. I think it's an okay tradeoff.) All in all, this was only about 250 lines of delta, which pales compared to the ~28000 lines of delta that are between 1.3.1 (used for last Solskogen) and 1.6.1. It only supports a rather limited set of Prometheus metrics, and it has some limitations, but it seems to be stable and deliver pretty good quality. I've denoted it experimental for now, but overall, I'm quite happy with how it turned out, and I'll be using it for Solskogen.

Nageru 1.6.1 is on its way into Debian, but it depends on a new version of Movit which needs to go through the NEW queue (a soname bump), so it might be a few days. In the meantime, I'll be busy preparing for Solskogen. :-)

09 July, 2017 10:14AM

July 08, 2017

hackergotchi for Urvika Gola

Urvika Gola

Outreachy Progress on Lumicall

Lumicall 1.13.0 is released! 😀

Through Lumicall, you can make encrypted calls and send messages using open standards. It uses the SIP protocol to inter-operate with other apps and corporate telephone systems.

During the Outreachy Internship period I worked on the following issues :-

I researched creating a white-label version of Lumicall. A few ideas on how the white-label build could be used:

  1. Existing SIP providers can use a white-label version of Lumicall to expand their business and launch a SIP client. This would provide a one stop shop for them!!
  2. New SIP clients/developers can use the Lumicall white-label version to get the underlying machinery for making encrypted phone calls using the SIP protocol; it will help them focus on the other, additional functionality they would like to include.

Documentation for implementing white labelling – Link 1 and Link 2

 

Since Lumicall is mostly used to make encrypted calls, there was a need to designate quiet times during which the phone will not make an audible ringing tone. If the user has multiple SIP accounts, they can enable the silent mode functionality on just one of them, maybe the Work account.
Documentation for adding silent mode feature  – Link 1 and Link 2

 

  • Adding 9 Patch Image 

Using Lumicall, users can send SIP messages to each other. Just to improve the UI a little, I added a 9-patch image to the message screen. A 9-patch image is created using 9-patch tools and is saved as imagename.9.png. The image will resize itself according to the text length and font size.

Documentation for 9 patch image – Link


You can try the new version of Lumicall here, and learn more about Lumicall in a blog by Daniel Pocock.
Looking forward to your valuable feedback !! 😀


08 July, 2017 09:59AM by urvikagola

hackergotchi for Daniel Silverstone

Daniel Silverstone

Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO], and similarly any auto_group_BAR with [group exact BAR].
  3. You can now upgrade Gitano safely.

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support: refs ~^refs/heads/${user}/. When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, kind of match can be prefixed with a ! to invert it, and for natural looking rules not is an alias for !is.
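To illustrate the explicit form (these names are made up, but the syntax follows the rules just described):

define is_steve     user is steve
define not_steve    user not is steve
define topic_branch ref startswith refs/heads/topic/
define release_tag  ref pcre ^refs/tags/v[0-9]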

This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

08 July, 2017 09:31AM by Daniel Silverstone

July 06, 2017

hackergotchi for Urvika Gola

Urvika Gola

Speaking at Open Source Bridge’17

Recently, I and my Co – speaker Pranav Jain, got a chance to speak at Open Source Bridge conference which was held in Portland, Oregon!

Pranav talked about GSoC and I talked about Outreachy , together we talked about Free RTC project Lumicall.
OSB conference was much more than just a ‘conference’. More than content in the talks, it had meaning. I am referring to the amazing keynote session by Nicole Sanchez on Tech Reform. She explained wonderfully the need of the hour, i.e Diversity inclusion is not just ‘inclusion’. Focus should be on what comes after the inclusion, Growth.

We also met several Debian developers and a Debian mentor for Outreachy (hoping to meet my own mentors someday!!)

Thanks to OSB, I got to meet Outreachy coordinator Sarah Sharp! It was wonderful meeting an Outreachy person! 😀 We talked and exchanged ideas about the programme, and she clicked beautiful pictures of us delivering the talk.

Urvika Gola at Open Source Bridge (picture courtesy – Sarah Sharp)

The talk ended with an unexpected and very precious hand-written note by Audrey Eschright.


Thank you Debian for giving us a chance to speak at Open Source Bridge and to meet wonderful people in Open Source. ❤


06 July, 2017 03:51PM by urvikagola

hackergotchi for Joachim Breitner

Joachim Breitner

The Micro Two Body Problem

Inspired by the recent PhD comic “Academic Travel” and the not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphic, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

Two bodies traveling the world

06 July, 2017 03:27PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Holger Levsen

Holger Levsen

20170706-fcmc.tv

a media experiment: fcmc.tv / G20 not welcome

Our view currently every day and night:

No one is illegal!

No football for fascists!

The FC/MC is a collective initiative to change the perception of the G20 in Hamburg - the summit itself and the protests surrounding it. FC/MC is a media experiment, located in the stadium of the amazing St.Pauli football club. We will operate until this Sunday, providing live coverage (text, photos, audio, video), back stories and much much more. Another world is possible!

Disclaimer: I'm not involved in content generation, I'm just doing computer stuff as usual, but that said, I really like the work of those who are! :-)

06 July, 2017 02:54PM

Thadeu Lima de Souza Cascardo

News on Debian on apexqtmo

I had been using my Samsung Galaxy S Relay 4G for almost three years when I decided to get a new phone. I would use this new phone for daily tasks and take the chance to get a new model for hacking in the future. My apexqtmo would still be my companion and would now be more available for real hacking.

And so it also happened that its power button got stuck. It was not the first time, but now it would happen every so often and would require me to disassemble it. So I managed to remove the plastic button and leave it with a hole so I could press the button with a screwdriver or a paperclip. That was the excuse I needed to get it running Debian only.

Though it's now always plugged into my laptop, I got the chance to hack on it in my scarce free time. As I managed to get a kernel I built myself running on it, I started fixing things like enabling devtmpfs. I didn't insist much on running systemd, though, and kept with System V. The Xorg issues were either on the server or the client, depending on which client I ran.

I decided to give a chance to running the Android userspace on a chroot, but gave up after some work to get some firmware loaded.

I managed to get the ALSA controls right after saving them inside a chroot on my CyanogenMod system. Then, restoring them on Debian allowed me to play songs. Unfortunately, it seems I broke the audio jack when disassembling it. Otherwise, it would have been a great portable audio player. I even wrote a small program that would allow me to control mpd by swiping on the touchscreen.

Then, as the Debian release approached, I decided to investigate the framebuffer issue closely. I ended up finding out that it was really a bug in the driver, and after fixing it, the X server and client crashes were gone. It was beautiful to get a desktop environment running with the right colors, get a calculator started and really use the phone as a mobile device.

There are two lessons or findings here for me. The first one is that the current environments are really lacking. Even something like GPE can't work. The buttons are tiny, and scrollbars are still the only way of scrolling, some of the time. No automatic virtual keyboards. So, there needs to be some investment in the existing environments, and maybe even the development of new environments for these kinds of devices. This was something I expected somehow, but it's still disappointing to know that we had so many of these developed in the past that are now gone. I really miss Maemo. Running something like Qtopia would mean grabbing very old unmaintained software not available in Debian. There is still matchbox, but it's as subpar as the others I tested.

The second lesson is that building a userspace to run on old kernels will still hit the problem of broken drivers. In my particular case, unless I wrote code to use Ion instead of the framebuffer, I would have had that problem. Or it would have required me to add code to xorg-xserver that is not appropriate there. Or to fix the kernel drivers in the available kernel source code. But this does not scale much better than doing the right thing and adding upstream support for these devices.

So, I decided it was time I started working on upstream support for my device. I have it in progress and may send some upstream patches soon. I have USB and MMC/SDcard working fine. DRM is still a challenge, but thanks to Rob Clark, it's something I expect to get working soon, and after that, I would certainly celebrate. Maybe even consider starting the work on other devices a little sooner.

Trying to review my post on GNU on smartphones, here is where I would put some of the status of my device and some extra notes.

On Halium

I am really glad people started this project. This was one of the things I criticized: that though Ubuntu Phone and FirefoxOS built on Android userspace, they were not easily portable to many devices out there.

But as I am looking for a more pure GNU experience, let's call it that, Halium does not help much in that direction. But I'd like to see it flourish and allow people to use more OSes on more devices.

Unfortunately, it suffers from similar problems as the strategy I was trying to go with. If you have a device with a very old kernel, you won't be able to run some of the latest userspace, even with Android userspace help. So, lots of devices would be left unsupported, unless we start working on some upstream support.

On RYF Hardware

My device is one of the worst out there. It's a modem that has a peripheral CPU. Much has already been said about Qualcomm chips being some of the least freedom-friendly. Ironically, they have some of the best upstream support, as far as I found out while doing this upstreaming work. Guess we'll have to wait for opencores, openrisc and risc-v to catch up here.

Diversity

Though I have been experimenting with Debian, the upstream work would sure benefit lots of other OSes out there, mainly GNU+Linux based ones, but also other non-GNU Linux based ones. Not so much for other kernels.

On other options

After the demise of Ubuntu Phone, I am glad to see UBports catching up. I hope the project is sustainable and produces more releases for more devices.

Rooting

This needs documentation. Most of the procedures rely on booting a recovery system, which means we are already past the root requirement. We simply boot our own system, then. However, for some debugging strategies, getting root on the OEM system is useful. So, try to get root on your system, but beware of malware out there.

Booting

Most of these devices will have their bootloaders in there. They may be unlocked, allowing unsigned kernels to be booted. Replacing these bootloaders is still going to be a challenge for another future phase. Though adding a second bootloader there, one that is freedom respecting, and that allows more control on that booting step to the user is something possible once you have some good upstream support. One could either use kexec for that, or try to use the same device tree for U-Boot, and use the knowledge of the device drivers for Linux on writing drivers for U-Boot, GRUB or Libreboot.

Installation

If you have root on your OEM system, this is something that could be worked on. Otherwise, there is magic-device-tool, whose approach is one that could be used.

Kernels

While I am working on adding Linux upstream support for my device, it would be wonderful to see more kernels supporting those gadgets. Hopefully, some of the device driver writing and reverse engineering could help with that, though I am not too optimistic. But there is hope.

Basic kernel drivers

Adding the basic support, like USB and MMC, after clocks, gpios, regulators and what not, is the first step on a long road. But it would allow using the device as a board computer, under better control of the user. Hopefully, lots of electronic garbage out there would have some use as control gadgets. Instead of buying a new board, just grab your old phone and put it to some nice use.

Sensors, input devices, LEDs

These are usually easy too. Some sensors may depend on your modem or some userspace code that is not that easily reverse engineered. But others would just require some device tree work, or some small input driver.

Graphics

Here, things may get complicated. Even basic video output is something I have some trouble with. Thanks to some other people's work, I have hope at least for my device. And using the vendor's linux source code, some framebuffer should be possible, even some DRM driver. But OpenGL or other 3D acceleration support requires much more work than that, and, at this moment, it's not something I am counting on. I am thankful for the work lots of people have been doing on this area, nonetheless.

Wireless

Be it Wifi or Bluetooth, things get ugly here. The vendor driver might be available. Rewriting it would take a long time. Even then, it would most likely require some non-free firmware loading. Using USB OTG here might be an option.

Modem/GSM

The work of the Replicant folks on that is what gives me some hope that it might be possible to get this working. Something I would leave to after I have a good interface experience in my hands.

GPS

The problem is similar to the Modem/GSM one: some code lives in userspace, and sometimes talking to the modem is a requirement to get GPS access, etc.

Shells

This is where I would like to see new projects, even if they work on current software to get them more friendly to these form factors. I consider doing some work there, though that's not really my area of expertise.

Next steps

For me, my next steps are getting what I have working upstream, continuing to work on DRM support, packaging GPE, and then experimenting with some compositor code. In the middle of that, trying to get some other devices started.

But documenting some of my work is something I realized I need to do more often, and this post is an attempt at that.

06 July, 2017 04:20AM

July 05, 2017

hackergotchi for Shirish Agarwal

Shirish Agarwal

Debian 9 release party at Co-hive

Dear all, this would be a biggish one, so please have a chai/coffee or something stronger, as it will take a while.

I would start with an attempt at some alcohol humor. While some people know that I have had a series of convulsive epileptic seizures, I shared bits about it in another post as well. Recovery is going through allopathic medicines as well as physiotherapy, which I go to every alternate day.

One of the exercises that I do in physiotherapy sessions is walking cross-legged on a line. While doing it today, it occurred to me that this is the same test that a police inspector would give if they caught you drinking or suspected you of drunk driving. While some in the police force now also have breath analyzer machines to determine the alcohol content in the breath and body (and there are ways to deceive them), the above exercise is still an integral part of the examination. Now, a few of my friends who do drink have made an expertise of walking on a line, while I, due to this neurological disorder, still have issues walking on a line. So while I don't see a drinking party in the near future (6 months at least), if I ever do get caught with a friend who is drunk (by association I would also be a suspect) by a policeman who doesn't have a breath analyzer machine, I could be in a lot of trouble. In addition, if I tell him I have a neurological disorder, I am bound to land up in a cell as he will think I'm trying to make a fool of him. If you can picture the situation, I'm sure you will get a couple of laughs.

Now coming to the release party, I was a bit apprehensive. It's been quite a while since I had faced an audience, and just coming out of illness I didn't know how well or ill-prepared I would be for the session. I had given up exercising two days before the event as I wanted to have a loose body and loose limbs all over. I also took a mild sedative (1mg) the day before just so I would have a good night's sleep and be able to focus all my energies on the big day. (I don't recommend sedatives unless the doctor prescribes them), and I did have a doctor's prescription, so I was able to have a nice sleep. I didn't do any Debian study as I hoped my somewhat long experience with both Ubuntu and Debian would help me.

On the d-day, I had asked Dhanesh (the organizer of the event) to accompany me from home to the venue and back, as I was unsure of the journey; it was around 9-10 km from my place, and while I had been to the venue a couple of years back, I had only a faint memory of the place.

Anyways, Dhanesh complied with my request and together we reached the venue before the appointed 1500 hrs. As it was a Sunday, I was unsure how many people would turn up, as people usually like to cozy up on a Sunday.

Around 1530 hrs everybody showed up

The whole group

It included a couple of co-organizers, with most people being newbies, so while I had thought of showing how to contribute via reporting bugs or putting up patches, I had to set that aside and explain how things work in the free software and open-source world. We didn't get into the debate of free vs open-source or free/open-source/open-core, as that would have been counter-productive and probably confusing for newbies.

We did however share the Debian tree structure

Debian tree structure discussion

I was stumped by /var and /proc. I hadn’t taken my lappy as it is expensive (a Lenovo ThinkPad I love very dearly) and I was unsure if I would be able to take care of it (weight-wise). Dhanesh had told me that he had Debian on his lappy + zsh, both of which are my favourites.

Back at home I realized /var has been relegated to holding apache/server logs and stuff like that; I do recall (vaguely) a thread on debian-devel about removing /var, although that discussion went nowhere.

One of the snags we hit early on was that nobody had enough space on their hdd to have Debian comfortably. It took a while to get an external hdd and push some of the content from somebody’s lappy to the external drive to make space for the installation.

I did share the /, /home and optional swap partition layout, while Dhanesh helped by sharing about having a separate /boot partition as well, which I had forgotten. I can’t even begin to remember the number of times having a separate /boot partition has helped me on all of my systems.

That done, we did try to install/show Debian 9 without network but were hit with #866629, so we weren't able to complete the installation. We had got the latest 9.0.1, as I had seen Steve's message about issues with the live images, but even then we were hit with the above bug. As shared in the bug history, it might be a good idea to treat the last couple of RCs (Release Candidate releases) as pre-release parties so people have a chance to report bugs and get them fixed. It was also nice to see Praveen raising the seriousness of the bug shared above.

The next day I also filed #866971, as I had mistaken the release for a regular release and not the live instance. I have pushed my rationale and hope the right thing happens.

As installation takes a bit of time, we used it to talk about Google's Summer of Code and the absence of Debian from GSoC this year. I should have directed them to an itsfoss article I wrote some time ago, and also shared that Debian is looking at having a similar apprenticeship programme within Debian itself. There were questions about why Debian would want to take on the administrative overhead; my response was that it probably had to do with Debian wanting more control over the process. While Debian has had some great luck getting all the seats it asks for in GSoC, the ball is always in Google's court. Having that uncertainty removed would be beneficial to Debian both in the short term and the long term. One interesting stat that was shared with me was that something like 89 students from India had been selected for GSoC this year even with the lower stipend, and that is a far cry from the 10-15 students who manage to complete GSoC every year. Let's see what happens this year.

One interesting fact/bit of gossip I shared with them is that Google uses a modified/forked Debian internally, which it probably would never release.

There were quite a few queries about GSoC, which led into how contributions are made and how git has become the master of all VCSes (Version Control Systems). I do have pet bugs about git, the biggest one being that in places/countries without much bandwidth, git fails many a time while cloning. I *think* the bug has been reported enough times, but I haven't seen any improvements yet. There is a need for a solution like wget and wget -c, so git just works even under the most trying circumstances (bandwidth-wise).
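Until something like that exists, the closest workaround I know of is to avoid the one huge transfer in the first place. A sketch (the URL is a placeholder; --deepen needs git 2.11 or later):

# Grab just the tip first, then pull in history in smaller, retryable steps:
git clone --depth 1 https://example.org/big-repo.git
cd big-repo
git fetch --deepen 500    # repeat until the full history is present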

We also shared what a repo is, and Dhanesh helpfully did a git shortlog to show people how commits are commented. It was from his official work, so he couldn't show anything apart from the shortlog.

I also shared how non-technical people can help with regard to documentation and artwork, but didn't get into any concrete examples, although wiki.debian.org would have been a good start. I also don't know if it's a fact or not, but it seems that moinmoin (the current wiki solution used by Debian) has got a sectional edit feature, which I used some time back. If moinmoin has got this feature then it is on par with mediawiki, although I do know that mediawiki has a lot more features going for it.

Dhanesh did manage to install Debian 8.0.7 (while 8.8.0 was the last release and might have been better). The debian-installer (d-i) looks the same, even though I know there are improvements with respect to UEFI and many updated components.

There are and were many bugs which I wanted to share but didn't know if it was the right forum or not, e.g. #597176, which probably needs improvements in other libraries, along with the JPEG 2000 support bug #604859, all of which I'm subscribed to.

We also talked about code documentation and code readability, and Python (as almost everything in Debian is based on Python). I had actually wanted to show metrics.debian.net but had seen it was down the day before; I checked again to see it is down now as well, hence I reported it, and it will hopefully turn up on the Debian BTS some time soonish. The idea was to share that Debian does use other programming languages as well and is only limited by people interested in a specific language being willing to package and maintain packages in that language.

As Dhanesh was part of Firefox OS and a Campus Ambassador, we did discuss what went wrong in the Firefox OS deployment and Firefox as a whole, specifically between the community and Mozilla the corporation.

There was a lot that I wasn't able to share as I had become tired, otherwise I would have shared that the ncurses debian-installer interface, although a bit ugly to look at, is still good as it has speech support for visually impaired people as well as people with poor sight.

There is and was lots to share about Debian, but I was conscious that I might overburden the audience, and my stamina was also stretched.

We also talked about automated builds but didn't show either travis.debian.net or jenkins.debian.net

We also had a small discussion about Artificial Intelligence, Robotics and how the outlook for people coming in Computer Science looks today.

One thing I forgot to share: we did do the cake-cutting together

Dhanesh-Shirish cutting cake together

Dhanesh is on the left while I’m on the right.

We did have a cute 3-4 year old boy as our guest as well, but didn't get good pictures of him.

Lastly, the Debian cake itself

Debian 9 cake

We also talked about the kernel and subsystem maintainers and how Linus is the overlord.

Look forward to comments. I am hoping Dhanesh might recollect some points that I might have missed/forgotten.

Update 07/07/17 – All programming language stats can be seen here


Filed under: Miscellenous Tagged: #alcohol humor, #co-hive, #crystal-gazing, #Debian bugs, #Debian cake, #Debian-release-party-9-pune, #live-installer, #moinmoin, #planet-debian, debian-installer, Pune

05 July, 2017 06:11PM by shirishag75

Patrick Matthäi

Bug in Stretch Linux vmxnet3 driver

Hey,

Am I the only one experiencing #864642 (vmxnet3: Reports suspect GRO implementation on vSphere hosts / one VM crashes)? Unfortunately I still have not received any answer to my report that would let me help track this issue down.

Help is welcome :)

05 July, 2017 04:21PM by the-me

July 04, 2017

hackergotchi for Steve Kemp

Steve Kemp

I've never been more proud

This morning I remembered I had a beefy virtual-server setup to run some kernel builds upon (when I was playing with Linux security modules), and I figured before I shut it down I should use the power to run some fuzzing.

As I was writing some code in Emacs at the time I figured "why not fuzz emacs?"

After a few hours this was the result:

 deagol ~ $ perl -e 'print "`" x ( 1024 * 1024  * 12);' > t.el
 deagol ~ $ /usr/bin/emacs --batch --script ./t.el
 ..
 ..
 Segmentation fault (core dumped)

Yup, evaluating a lisp file caused a segfault, due to a stack overflow (no security implications). I've never been more proud, even though I have contributed code to GNU Emacs in the past.

04 July, 2017 09:00PM

Reproducible builds folks

Reproducible Builds: week 114 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 25 and Saturday July 1 2017:

Upcoming and past events

Our next IRC meeting is scheduled for July 6th at 17:00 UTC (agenda). Topics to be discussed include an update on our next Summit, a potential NMU campaign, a press release for buster, branding, etc.

Toolchain development and fixes

  • James McCoy reviewed and merged Ximin Luo's script debpatch into the devscripts Git repository. This is useful for rebasing our patches onto new versions of Debian packages.

Packages fixed and bugs filed

Ximin Luo uploaded dash, sensible-utils and xz-utils to the deferred uploads queue with a delay of 14 days. (We have had patches for these core packages for over a year now and the original maintainers seem inactive so Debian conventions allow for this.)

Patches submitted upstream:

Reviews of unreproducible packages

4 package reviews have been added, 4 have been updated and 35 have been removed this week, adding to our knowledge about identified issues.

One issue type has been updated:

One issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (68)
  • Daniel Schepler (1)
  • Michael Hudson-Doyle (1)
  • Scott Kitterman (6)

diffoscope development

tests.reproducible-builds.org

  • Vagrant Cascadian working on testing Debian:
    • Upgraded the 27 armhf build machines to stretch.
    • Fixed the mtu check to only display status when eth0 is present.
  • Helmut Grohne worked on testing Debian:
    • Limit diffoscope memory usage to 10GB virtual per process; it currently tends to use 50GB virtual / 36GB resident, which is bad for everything else sharing the machine (this is #865660). See the sketch after this list.
  • Alexander Couzens working on testing LEDE:
    • Add multiple architectures / targets.
    • Limit LEDE and OpenWrt jobs to only allow one build at the same time using the jenkins build blocker plugin.
    • Move the git log -1 > .html call into node_document_environment().
    • Update TODOs.
  • Mattia Rizzolo working on testing Debian:
    • Updated the maintenance script for postgresql-9.6.
    • Restored the postgresql database from backup after Holger accidentally purged it.
  • Holger Levsen working on:
    • Merging all the above commits.
    • Added a check for (known) Jenkins zombie jobs and report them. (This is a long-known problem with Jenkins; deleted jobs sometimes come back…)
    • Upgraded the remaining amd64 nodes to stretch.
    • Accidentally purged postgres-9.4 from jenkins, so we could test our backups ;-)
    • Updated our stretch upgrade TODOs.
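
Regarding the diffoscope memory cap mentioned above, a per-process virtual memory limit can be enforced with a plain shell ulimit, as in this sketch (whether the jenkins jobs use ulimit or some other mechanism is an assumption here):

    # cap virtual memory at 10GB (ulimit -v takes KiB), then run one comparison
    ( ulimit -v $((10 * 1024 * 1024)); diffoscope 1.deb 2.deb )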

Misc.

This week's edition was written by Chris Lamb, Ximin Luo, Holger Levsen, Bernhard Wiedemann, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

04 July, 2017 12:49PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in June 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 12 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • Released DLA-983-1 and DLA-984-1 on tiff3/tiff to fix 4 CVEs. I also updated our patch set to get back in sync with upstream, since we had our own patches for a while and upstream ended up using a slightly different approach. I checked that the upstream fix did really fix the issues with the reproducer files that were available to us.
  • Handled CVE triage for a whole week.
  • Released DLA-1006-1 on libarchive (2 CVE fixed by Markus Koschany, one by me).

Debian packaging

Django. A last-minute update to Django in stretch before the release: I uploaded python-django 1:1.10.7-2 fixing two bugs (one of which was release critical) and filed the corresponding unblock request.

schroot. I tried to prepare another last-minute update, this time for schroot. The goal was to fix the bash completion (#855283) and a problem encountered by the Debian sysadmins (#835104). Those issues are fixed in unstable/testing but my unblock request got turned into a post-release stretch update because the release managers wanted to give the package some more testing time in unstable. Even now, they are wondering whether they should accept the new systemd service file.

live-build, live-config and live-boot. On live-build, I merged a patch to add a keyboard shortcut for the advanced options menu entry (#864386). For live-config, I uploaded version 5.20170623 to fix a broken boot sequence when you have multiple partitions (#827665). For live-boot, I uploaded version 1:20170623 to fix the path to udevadm (#852570) and to avoid a file duplication in the initrd (#864385).

zim. I packaged a release candidate (0.67~rc2) in Debian Experimental and started to use it. I quickly discovered two annoying regressions that I reported upstream (here and here).

logidee-tools. This is a package I authored a long time ago and that I’m no longer actively using. It does still work but I sometimes wonder if it still has real users. Anyway I wanted to quickly replace the broken dependency on pgf but I ended up converting the Subversion repository to Git and I also added autopkgtests. At least those tests will inform me when the package no longer works… otherwise I would not notice since I’m no longer using it.

Bugs filed. I filed #865531 on lintian because the new check testsuite-autopkgtest-missing is giving some bad advice and probably does its check in a bad way. I also filed #865541 on sbuild because sbuild --apt-distupgrade can under some circumstances remove build-essential and break the build chroot. I filed an upstream ticket on publican to forward the request I received in #864648.

Sponsorship. I sponsored a jessie update for php-tcpdf (#814030) and dolibarr 5.0.4+dfsg3-1 for unstable. I sponsored many other packages, but all in the context of the pkg-security team.

pkg-security work

Now that the Stretch freeze is over, the team became more active again and I have been overwhelmed with the number of packages to review and sponsor:

  • knocker
  • recon-ng
  • dsniff
  • libnids
  • rfdump
  • snoopy
  • dirb
  • wcc
  • arpwatch
  • dhcpig
  • backdoor-factory

I also updated hashcat to a new upstream release (3.6.0) and had to discuss its weird versioning change with upstream. Looks like we will have to introduce an epoch to be able to get back in sync with upstream. 🙁 To be able to get in sync with Kali, I introduced a hashcat-meta source package (in contrib) providing hashcat-nvidia, to make it easy to install hashcat for owners of NVidia hardware.
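
Some background on why an epoch helps here: dpkg compares each dot-separated version component numerically, so an older scheme like 3.30 sorts above 3.6.0 (30 > 6). A quick sketch (3.30 is only an illustrative older version, not necessarily what the archive carried):

    $ dpkg --compare-versions 3.30 gt 3.6.0 && echo "3.30 sorts higher"
    3.30 sorts higher
    $ dpkg --compare-versions 1:3.6.0 gt 3.30 && echo "the epoch fixes it"
    the epoch fixes it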

Misc stuff

Distro Tracker. I merged a small CSS fix from Aurélien Couderc (#858101) and added a missing constraint on the data model (found through an unexpected traceback that I received by email). I also updated the list of repositories shortly after the stretch release (#865070).

Salt formulas. As part of my Kali work, I set up a build daemon on a Debian stretch host and encountered a couple of issues with my Salt rules. I reported one against salt-formula (here) and I pushed updates for debootstrap-formula, apache-formula and schroot-formula.

Thanks

See you next month for a new summary of my activities.


04 July, 2017 09:40AM by Raphaël Hertzog

Foteini Tsiami

Internationalization, part two

Now that sch-scripts has been renamed to ltsp-manager and translated to English, it was time to set up a proper project site for it in launchpad: http://www.launchpad.net/ltsp-manager

The following subpages were created for LTSP Manager there:

  • Code: a review of all the project code, which currently only includes the master git repository.
  • Bugs: a tracker where bugs can be reported. I already filed a few bugs there!
  • Translations: translators will use this to localize LTSP Manager to their languages. It’s not quite ready yet.
  • Answers: a place to ask the LTSP Manager developers for anything regarding the project.

We currently have an initial version of LTSP Manager running in Debian Stretch, although more testing and more bug reports will be needed before we start the localization phase. Attaching a first screenshot!

 


04 July, 2017 08:35AM by fottsia

Arturo Borrero González

Netfilter Workshop 2017: I'm a new coreteam member!

nfws2017

I was invited to attend the Netfilter Workshop 2017 in Faro, Portugal this week, so I’m here with all the folks enjoying some days of talks, discussions and hacking around Netfilter and general linux networking.

The coreteam of the Netfilter project, whose active members are Pablo Neira Ayuso (head), Jozsef Kadlecsik, Eric Leblond and Florian Westphal, has invited me to join them, and the appointment happened today.

You may contact me now at my new email address: arturo@netfilter.org

This is the result of my continued contributions to the Netfilter project over several years now (probably since 2012-2013). I'm really happy about this, and I appreciate their recognition. I will do my best in this new position. Thanks!

Regarding the workshop itself, we are having lots of interesting talks and discussions about the state of the Netfilter technology, open issues, missing features and where to go in the future.

Really interesting!

04 July, 2017 08:00AM

John Goerzen

Time, Frozen

We’re expecting a baby any time now. The last few days have had an odd quality of expectation: any time, our family will grow.

It makes time seem to freeze, to stand still.

We have Jacob, about to start fifth grade and middle school. But here he is, still a sweet and affectionate kid as ever. He loves to care for cats and seeks them out often. He still keeps an eye out for the stuffed butterfly he’s had since he was an infant, and will sometimes carry it and a favorite blanket around the house. He will also many days prepare the “Yellow House News” on his computer, with headlines about his day and some comics pasted in — before disappearing to play with Legos for awhile.

And Oliver, who will walk up to Laura and “give baby a hug” many times throughout the day — and sneak up to me, try to touch my arm, and say “doink” before running off before I can “doink” him back. It was Oliver that had asked for a baby sister for Christmas — before he knew he’d be getting one!

In the past week, we’ve had out the garden hose a couple of times. Both boys will enjoy sending mud down our slide, or getting out the “water slide” to play with, or just playing in mud. The rings of dirt in the bathtub testify to the fun that they had. One evening, I built a fire, we made brats and hot dogs, and then Laura and I sat visiting and watching their water antics for an hour after, laughter and cackles of delight filling the air, and cats resting on our laps.

These moments, or countless others like Oliver’s baseball games, flying the boys to a festival in Winfield, or their cuddles at bedtime, warm the heart. I remember their younger days too, with fond memories of taking them camping or building a computer with them. Sometimes a part of me wants to just keep soaking in things just as they are; being a parent means both taking pride in children’s accomplishments as they grow up, and sometimes also missing the quiet little voice that can be immensely excited by a caterpillar.

And yet, all four of us are so excited and eager to welcome a new life into our home. We are ready. I can’t wait to hold the baby, or to lay her to sleep, to see her loving and excited older brothers. We hope for a smooth birth, for mom and baby.

Here is the crib, ready, complete with a mobile with a cute bear (and even a plane). I can’t wait until there is a little person here to enjoy it.

04 July, 2017 03:00AM by John Goerzen

July 03, 2017

Antoine Beaupré

My free software activities, June 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on Mercurial, sudo and Puppet.

Mercurial remote code execution

I issued DLA-1005-1 to resolve problems with the hg server --stdio command that could be abused by "remote authenticated users to launch the Python debugger, and consequently execute arbitrary code, by using --debugger as a repository name" (CVE-2017-9462).

Backporting the patch was already a little tricky because, as is often the case in our line of work, the code had changed significantly in newer versions. In particular, the command-line dispatcher had been refactored, which made the patch non-trivial to port. On the other hand, mercurial has an extensive test suite which allowed me to make those patches with confidence. I also backported a part of the test suite to detect certain failures better and fixed the output so that it matches the backported code. The test suite is slow, however, which meant slow progress when working on this package.

I also noticed a strange issue with the test suite: all hardlink operations would fail. Somehow it seems that my new sbuild setup doesn't support doing hardlinks. I ended up building a tarball schroot to build those types of packages, as it seems the issue is related to the use of overlayfs in sbuild. The odd part is that my tests of overlayfs, following those instructions, show that it does support hardlinks, so there may be something fishy here that I misunderstand.
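
A minimal hardlink check on overlayfs looks something like the following sketch (arbitrary paths, and not necessarily the exact test from those instructions):

    mkdir -p /tmp/ovl/{lower,upper,work,merged}
    sudo mount -t overlay overlay \
        -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
        /tmp/ovl/merged
    touch /tmp/ovl/merged/a
    ln /tmp/ovl/merged/a /tmp/ovl/merged/b  # the operation that fails in the chroot
    stat -c %h /tmp/ovl/merged/a            # prints 2 when the hardlink was created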

This, however, allowed me to get a little more familiar with sbuild and schroots. I also took this opportunity to speed up the builds by installing an apt-cacher-ng proxy, which will also be useful for regular system updates.
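
The proxy setup itself is short; a sketch, assuming apt-cacher-ng's default port (3142) and a host-wide APT configuration (build chroots can be pointed at the same address):

    sudo apt install apt-cacher-ng
    # point APT at the local cache:
    echo 'Acquire::http::Proxy "http://127.0.0.1:3142";' | \
        sudo tee /etc/apt/apt.conf.d/02proxy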

Puppet remote code execution

I have issued DLA-1012-1 to resolve a remote code execution attack against puppetmaster servers, from authenticated clients. To quote the advisory: "Versions of Puppet prior to 4.10.1 will deserialize data off the wire (from the agent to the server, in this case) with a attacker-specified format. This could be used to force YAML deserialization in an unsafe manner, which would lead to remote code execution."

The fix was non-trivial. Normally, this would have involved fixing the YAML parsing, but this was considered problematic because the ruby libraries themselves were vulnerable and it wasn't clear we could fix the problem completely by fixing YAML parsing. The update I proposed took the bold step of switching all clients to PSON and simply deny YAML parsing from the server. This means all clients need to be updated before the server can be updated, but thankfully, updated clients will run against an older server as well. Thanks to LeLutin at Koumbit for helping in testing patches to solve this issue.

Sudo privilege escalation

I have issued DLA-1011-1 to resolve an incomplete fix for a privilege escalation issue (CVE-2017-1000368, following up on CVE-2017-1000367). The backport was not quite trivial, as the code had changed quite a lot since wheezy here as well. Whereas mercurial's code got more complex, it's nice to see that sudo's code actually became simpler and more straightforward in newer versions, which is reassuring. I uploaded the packages for testing and made the final uploads a little later.

I also took extra time to share the patch in the Debian bugtracker, so that people working on the issue in stable may benefit from the backported patch, if needed. One issue that came up during that work is that sudo doesn't have a test suite at all, so it is quite difficult to test changes and make sure they do not break anything.

Should we upload on fridays?

I brought up a discussion on the mailing list regarding uploads on fridays. With the sudo and puppet uploads pending, it felt really ... daring to upload both packages, on a friday. Years of sysadmin work hardwired me to be careful on fridays; as the saying goes: "don't deploy on a friday if you don't want to work on the weekend!"

Feedback was great, but I was surprised to find that most people are not worried about those issues. I have tried to counter some of the arguments that were brought up. I wonder if there could be a disconnect here between the package maintainer / programmer work and the sysadmin work that is at the receiving end of that work. Having had to deal with broken updates in the past myself, I'm surprised this has never come up in the discussions yet, or that the response is so underwhelming.

So far, I'll try to balance the need for prompt security updates and the need for stable infrastructure. One does not, after all, go without the other...

Triage

I also did small fry triage:

Hopefully some of those will come to fruition shortly.

Other work

My other work this month was a little all over the place.

Stressant

Uploaded a new release (0.4.1) of stressant to split up the documentation from the main package, as the main package was taking up too much space according to grml developers.

The release also introduces a limited anonymity option, which blocks the display of serial numbers in the smartctl output.

Debiman

I also did some small follow-up on the debiman project to fix the FAQ links.

Local server maintenance

I upgraded my main server to Debian stretch. This generally went well, although the upgrade itself took way more time than I would have liked (4 hours!). This is partly because I have a lot of cruft installed on the server, but also because of what I consider to be issues in the automation of major Debian upgrades. For example, I was prompted for changes in configuration files at seemingly random moments during the upgrade, and got different debconf prompts to answer. This should really be batched together, and unfortunately I had forgotten to use the home-made script I wrote when I was working at Koumbit which shortens the upgrade a bit.

I wish we would improve on our major upgrade mechanism. I documented possible solutions for this in the AutomatedUpgrade wiki page, but I'm not sure I see exactly where to go from here.

I had a few regressions after the upgrade:

  • the infrared remote control stopped working: still need to investigate
  • my home-grown full-disk encryption remote unlocking script broke, but upstream has a nice workaround, see Debian bug #866786
  • gdm3 breaks bluetooth support (Debian bug #805414 - to be fair, this is not a regression in stretch, it's just that I switched my workstation from lightdm to gdm3 after learning that the latter can do rootless X11!)

Docker and Subsonic

I did my first (and late?) foray into Docker and containers. My rationale was that I wanted to try out Subsonic, an impressive audio server which some friends have shown me. Since Subsonic is proprietary, I didn't want it to contaminate the rest of my server and it seemed like a great occasion to try out containers to keep things tidy. Containers may also allow me to transparently switch to the FLOSS fork LibreSonic once the trial period is over.

I have learned a lot and may write more about the details of that experience soon, for now you can look at the contributions I made to the unofficial Subsonic docker image, but also the LibreSonic one.

Since Subsonic also promotes album covers as first-class citizens, I used beets to download a lot of album covers, which was really nice. I look forward to using beets more, but first I'll need to implement two plugins.

Wallabako

I did a small release of wallabako to fix the build with the latest changes in the underlying wallabago library, which led me to ask upstream to make versioned releases.

I also looked into creating a separate documentation site but it looks like mkdocs doesn't like me very much: the table of contents is really ugly...

Small fry

That's about it! And that was supposed to be a slow month...

03 July, 2017 04:37PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, June 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 5 hours. I worked all 20 hours.

I spent most of my time working - together with other Linux kernel developers - on backporting and testing several versions of the fix for CVE-2017-1000364, part of the "Stack Clash" problem. I uploaded two updates to linux and issued DLA-993-1 and DLA-993-2. Unfortunately the latest version still causes regressions for some applications, which I will be investigating this month.

I also released a stable update on the Linux 3.2 longterm stable branch (3.2.89) and prepared another (3.2.90) which I released today.

03 July, 2017 04:11PM

hackergotchi for Vincent Bernat

Vincent Bernat

Performance progression of IPv4 route lookup on Linux

TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.


In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

IPv4 route lookup performance

Two scenarios are tested:

  • 500,000 routes extracted from an Internet router (half of them are /24), and
  • 500,000 host routes (/32) tightly packed in 4 distinct subnets.

All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels [1] as well as current ones. The kernel configuration used is the default one with CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.
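
Preparing such a configuration could look like the following sketch (not the exact script used for this article; scripts/config ships with the kernel tree):

    make defconfig
    ./scripts/config --enable CONFIG_SMP \
                     --enable CONFIG_IP_MULTIPLE_TABLES
    yes '' | make oldconfig    # accept defaults for any remaining prompts
    make -j"$(nproc)"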

The measurements are done in a virtual machine with one vCPU [2]. The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).

The following kernel versions bring a notable performance improvement:

  • In Linux 2.6.39, commit 3630b7c050d9, David Miller removes the hash-based routing table implementation to switch to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts the performance for the general case.

  • In Linux 3.0, commit 281dc5c5ec0f, the improvement is not related to the network subsystem. Linus Torvalds disables the compiler size-optimization from the default configuration. It was believed that optimizing for size would help keeping the instruction cache efficient. However, compilers generated under-performing code on x86 when this option was enabled.

  • In Linux 3.6, commit f4530fa574df, David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point, the use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact the performances unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup() which doesn’t involve the cache.

  • In Linux 4.0, notably commit 9f9e636d4f89, Alexander Duyck adds several optimizations to the trie lookup algorithm. It really pays off!

  • In Linux 4.1, commit 0ddcf43d5d4a, Alexander Duyck collapses the local and main tables when no specific IP rules are configured. For non-local traffic, both those tables were looked up.


  1. Compiling old kernels with an updated userland may still require some small patches.

  2. The kernels are compiled with the CONFIG_SMP option to use the hierarchical RCU and activate more of the same code paths as actual routers. However, progress on parallelism goes unnoticed in this single-threaded benchmark.

03 July, 2017 01:25PM by Vincent Bernat

July 02, 2017

Bits from Debian

New Debian Developers and Maintainers (May and June 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Alex Muntada (alexm)
  • Ilias Tsitsimpis (iliastsi)
  • Daniel Lenharo de Souza (lenharo)
  • Shih-Yuan Lee (fourdollars)
  • Roger Shimizu (rosh)

The following contributors were added as Debian Maintainers in the last two months:

  • James Valleroy
  • Ryan Tandy
  • Martin Kepplinger
  • Jean Baptiste Favre
  • Ana Cristina Custura
  • Unit 193

Congratulations!

02 July, 2017 12:30PM by Jean-Pierre Giraud

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

apt-offline 1.8.1 released

apt-offline 1.8.1 released.

This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix genui.sh based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 01 Jul 2017 21:39:24 +0545
 

What is apt-offline

Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
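
A typical offline round trip looks like this (a sketch; file names are arbitrary):

    # on the disconnected machine: record what APT needs
    apt-offline set /tmp/offline.sig --update --upgrade
    # on any networked machine: download everything into one archive
    apt-offline get /tmp/offline.sig --bundle /tmp/bundle.zip
    # back on the disconnected machine: feed the archive to APT
    apt-offline install /tmp/bundle.zip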

 


02 July, 2017 11:38AM by Ritesh Raj Sarraf