July 18, 2019


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 201 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 7 hours (out of 14 hours allocated plus 7 extra hours from May, thus carrying over 14h to July).
  • Adrian Bunk did 6 hours (out of 8 hours allocated plus 8 extra hours from May, thus carrying over 10h to July).
  • Ben Hutchings did 17 hours (out of 17 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17 hours allocated plus 0.25 extra hours from May, thus carrying over 0.25h to July).
  • Emilio Pozuelo Monfort has not provided his June report yet (he was allocated 17 hours and carried over 0.25h from May).
  • Hugo Lefeuvre did 4.25 hours (out of 17 hours allocated and he gave back 12.75 hours to the pool, thus he’s not carrying over any hours to July).
  • Jonas Meurer did 16.75 hours (out of 17 hours allocated plus 1.75h extra hours from May, thus he is carrying over 2h to July).
  • Markus Koschany did 17 hours (out of 17 hours allocated).
  • Mike Gabriel did 9.75 hours (out of 17 hours allocated, thus carrying over 7.25h to July).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated plus 6h from May, then he gave back 1.5h to the pool, thus he is carrying over 8h to July).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 17 hours (out of 17 hours allocated).
  • Thorsten Alteholz did 17 hours (out of 17 hours allocated).

DebConf sponsorship

Thanks to the Extended LTS service, Freexian has been able to invest some money in DebConf sponsorship. This year, DebConf attendees should find Debian LTS stickers and a flyer in their welcome bag. And while we were thinking of marketing, we also opted to create a promotional video explaining LTS and Freexian’s offer. This video will premiere at DebConf 19!

Evolution of the situation

We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker (now for oldoldstable, as Buster has been released and thus Jessie became oldoldstable) currently lists 41 packages with a known CVE and the dla-needed.txt file has 43 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


18 July, 2019 12:08PM by Raphaël Hertzog


Kees Cook

security things in Linux v5.2

Previously: v5.1.

Linux kernel v5.2 was released last week! Here are some security-related things I found interesting:

page allocator freelist randomization
While the SLUB and SLAB allocator freelists have been randomized for a while now, the overarching page allocator itself wasn’t. This meant that anything doing allocation outside of the kmem_cache/kmalloc() would have deterministic placement in memory. This is bad both for security and for some cache management cases. Dan Williams implemented this randomization under CONFIG_SHUFFLE_PAGE_ALLOCATOR now, which provides additional uncertainty to memory layouts, though at a rather low granularity of 4MB (see SHUFFLE_ORDER). Also note that this feature needs to be enabled at boot time with page_alloc.shuffle=1 unless you have direct-mapped memory-side-cache (you can check the state at /sys/module/page_alloc/parameters/shuffle).
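
For a quick check on a running system, the state can be read back directly (a sketch, assuming a v5.2+ kernel; the sysfs path is the one mentioned above):

# Check whether page allocator freelist randomization is active
cat /sys/module/page_alloc/parameters/shuffle
# It can be forced on at boot by adding page_alloc.shuffle=1 to the
# kernel command line; verify with:
grep -o 'page_alloc.shuffle=[01]' /proc/cmdline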

stack variable initialization with Clang
Alexander Potapenko added support via CONFIG_INIT_STACK_ALL for Clang’s -ftrivial-auto-var-init=pattern option that enables automatic initialization of stack variables. This provides even greater coverage than the prior GCC plugin for stack variable initialization, as Clang’s implementation also covers variables not passed by reference. (In theory, the kernel build should still warn about these instances, but even if they exist, Clang will initialize them.) Another notable difference between the GCC plugins and Clang’s implementation is that Clang initializes with a repeating 0xAA byte pattern, rather than zero. (Though this changes under certain situations, like for 32-bit pointers which are initialized with 0x000000AA.) As with the GCC plugin, the benefit is that the entire class of uninitialized stack variable flaws goes away.
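
To see this in action, a small experiment along these lines should work (a sketch, assuming a sufficiently recent Clang on the PATH; demo.c is a made-up file name):

# Write a function with a deliberately uninitialized stack variable
cat > demo.c <<'EOF'
int use_uninit(void) {
    int x;      /* never initialized by the programmer */
    return x;   /* with pattern init, reads back the 0xAA fill */
}
EOF
# Compile with pattern initialization and inspect the assembly; the
# fill may show up as the decimal immediate -1431655766 (0xAAAAAAAA)
clang -ftrivial-auto-var-init=pattern -O0 -S demo.c -o demo.s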

Kernel Userspace Access Prevention on powerpc
Like SMAP on x86 and PAN on ARM, Michael Ellerman and Russell Currey have landed support for disallowing access to userspace without explicit markings in the kernel (KUAP) on Power9 and later PPC CPUs under CONFIG_PPC_RADIX_MMU=y (which is the default). This is the continuation of the execute protection (KUEP) in v4.10. Now if an attacker tries to trick the kernel into any kind of unexpected access from userspace (not just executing code), the kernel will fault.

Microarchitectural Data Sampling mitigations on x86
Another set of cache memory side-channel attacks came to light, and were consolidated together under the name Microarchitectural Data Sampling (MDS). MDS is weaker than other cache side-channels (less control over target address), but memory contents can still be exposed. Much like L1TF, when one’s threat model includes untrusted code running under Symmetric Multi Threading (SMT: more logical cores than physical cores), the only full mitigation is to disable hyperthreading (boot with “nosmt“). For all the other variations of the MDS family, Andi Kleen (and others) implemented various flushing mechanisms to avoid cache leakage.
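
The kernel reports its view of the MDS situation through sysfs, so checking a given machine is straightforward (paths from the standard vulnerabilities interface):

# Mitigation status for MDS on this CPU, including any SMT caveat
cat /sys/devices/system/cpu/vulnerabilities/mds
# Whether SMT is currently active; booting with "nosmt" disables it
cat /sys/devices/system/cpu/smt/active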

unprivileged userfaultfd sysctl knob
Both FUSE and userfaultfd provide attackers with a way to stall a kernel thread in the middle of memory accesses from userspace by initiating an access on an unmapped page. While FUSE is usually behind some kind of access controls, userfaultfd hadn’t been. To avoid things like Use-After-Free heap grooming, Peter Xu added the new “vm.unprivileged_userfaultfd” sysctl knob to disallow unprivileged access to the userfaultfd syscall.
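
Using the new knob looks like any other sysctl (a sketch; the file name under /etc/sysctl.d is my own choice):

# Disallow unprivileged use of the userfaultfd syscall
sysctl -w vm.unprivileged_userfaultfd=0
# Persist the setting across reboots
echo 'vm.unprivileged_userfaultfd = 0' > /etc/sysctl.d/50-userfaultfd.conf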

temporary mm for text poking on x86
The kernel regularly performs self-modification with things like text_poke() (during stuff like alternatives, ftrace, etc). Before, this was done with fixed mappings (“fixmap”) where a specific fixed address at the high end of memory was used to map physical pages as needed. However, this resulted in some temporal risks: other CPUs could write to the fixmap, or there might be stale TLB entries on removal that other CPUs might still be able to write through to change the target contents. Instead, Nadav Amit has created a separate memory map for kernel text writes, as if the kernel is trying to make writes to userspace. This mapping ends up staying local to the current CPU, and the poking address is randomized, unlike the old fixmap.

ongoing: implicit fall-through removal
Gustavo A. R. Silva is nearly done with marking (and fixing) all the implicit fall-through cases in the kernel. Based on the pull request from Gustavo, it looks very much like v5.3 will see -Wimplicit-fallthrough added to the global build flags and then this class of bug should stay extinct in the kernel.
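
As a quick illustration of what the warning catches (a sketch; fall.c is a made-up file name):

# A switch where one case falls into the next without a marking
cat > fall.c <<'EOF'
int f(int n) {
    int r = 0;
    switch (n) {
    case 0:
        r += 1;      /* no break: control falls into case 1 */
    case 1:
        r += 2;
        break;
    }
    return r;
}
EOF
# GCC 7 and later flag the unmarked fall-through
gcc -Wimplicit-fallthrough -c fall.c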

That’s it for now; let me know if you think I should add anything here. We’re almost to -rc1 for v5.3!

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

18 July, 2019 12:07AM by kees

July 17, 2019


Steve Kemp

Building a computer - part 2

My previous post on the subject of building a Z80-based computer briefly explained my motivation, and the approach I was going to take.

This post describes my progress so far:

  • On the hardware side, zero progress.
  • On the software-side, lots of fun.

To recap, I expect to wire a Z80 microprocessor to an Arduino (Mega). The Arduino will generate a clock-signal which will make the processor "tick". It will also react to the read/write attempts that the processor makes to access RAM and I/O devices.

The Z80 has a neat system for requesting I/O, via the use of the IN and OUT instructions which allow the processor to read/write a single byte to one of 256 connected devices.

To experiment, and to refresh my memory, I found a Z80 assembler, and a Z80 disassembler, both packaged for Debian. I also found a Z80 emulator, which I forked and lightly-modified.

With the appropriate tools available I could write some simple code. I implemented two I/O routines in the emulator, one to read a character from STDIN, and one to write to STDOUT:

IN A, (1)   ; Read a character from STDIN, store in A-register.
OUT (1), A  ; Write the character in A-register to STDOUT

With those primitives implemented I wrote a simple script:

;
;  Simple program to upper-case a string
;
org 0
   ; show a prompt.
   ld a, '>'
   out (1), a
start:
   ; read a character
   in a,(1)
   ; eof?
   cp -1
   jp z, quit
   ; is it lower-case?  If not just output it
   cp 'a'
   jp c, output
   cp 'z'+1   ; compare against 'z'+1 so that 'z' itself gets converted
   jp nc, output
   ; convert from lower-case to upper-case.  yeah.  math.
   sub a, 32
output:
   ; output the character
   out (1), a
   ; repeat forever.
   jr start
quit:
   ; terminate
   halt

With that written it could be compiled:

 $ z80asm ./sample.z80 -o ./sample.bin

Then I could execute it:

 $ echo "Hello, world" | ./z80emulator ./sample.bin
 Testing "./sample.bin"...
 >HELLO, WORLD

 1150 cycle(s) emulated.

And that's where I'll leave it for now. When I have the real hardware I'll hook up some fake-RAM containing this program, and code a similar I/O handler to allow reading/writing to the Arduino's serial-console. That will allow the same code to run, unchanged. That'd be nice.

I've got a simple Z80-manager written, but since I don't have the chips yet I can only compile-test it. We'll see how well I did soon enough.

17 July, 2019 06:45PM

John Goerzen

Tips for Upgrading to, And Securing, Debian Buster

Wow.  Once again, a Debian release impresses me — a guy who’s been using Debian for more than 20 years.  For the first time I can ever recall, buster not only supported suspend-to-disk out of the box on my laptop, but it did so on an encrypted volume atop LVM.  Very impressive!

For those upgrading from previous releases, I have a few tips to enhance the experience with buster.

AppArmor

AppArmor is a new line of defense against malicious software.  The release notes indicate it’s now enabled by default in buster.  For desktops, I recommend installing apparmor-profiles-extra and apparmor-notify.  The latter will provide an immediate GUI indication when something is blocked by AppArmor, so you can diagnose strange behavior.  You may also need to add yourself to the adm group with adduser username adm.
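
In concrete terms, that setup boils down to a few commands (username is a placeholder for your own login):

apt install apparmor-profiles-extra apparmor-notify
adduser username adm
# Confirm AppArmor is enabled and see which profiles are loaded
aa-status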

Security

I recommend installing these packages and taking note of these items, some of which are different in buster:

  • unattended-upgrades will automatically install security updates for you.  New in buster, the default config file will also apply stable updates in addition to security updates.
  • needrestart will detect what processes need a restart after a library update and, optionally, restart them. Beginning in buster, it will not automatically restart them when in noninteractive (unattended-upgrades) mode. This can be changed by editing /etc/needrestart/needrestart.conf (or, better, putting a .conf file in /etc/needrestart/conf.d) and setting $nrconf{restart} = 'a' (see the sketch after this list).
  • debian-security-support will warn you of gaps in security support for packages you are installing or running.
  • package-update-indicator is useful for desktops that won’t be running unattended-upgrades. I believe Gnome 3 has this built in, but for other desktops, this adds an icon when updates are available.
  • You can harden apt with seccomp (also shown in the sketch after this list).
  • You can enable UEFI secure boot.
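
For the two items above that involve configuration edits, here is a sketch of what they could look like (the conf.d and apt.conf.d file names are my own choice; the apt option name is the one documented in the buster release notes):

# Make needrestart restart services automatically in unattended mode
cat > /etc/needrestart/conf.d/50-autorestart.conf <<'EOF'
$nrconf{restart} = 'a';
EOF

# Enable apt's seccomp-BPF sandboxing
echo 'APT::Sandbox::Seccomp "true";' > /etc/apt/apt.conf.d/40sandbox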

Tuning

If you hadn’t noticed, many of these items are links into the buster release notes. It’s a good document to read over, even for a new buster install.

17 July, 2019 05:41PM by John Goerzen


Jonathan Dowland

Nadine Shah

ticket and cuttings from gig


On July 8 I went to see Nadine Shah perform at the Whitley Bay Playhouse as part of the Mouth Of The Tyne Festival. It was a fantastic gig!

I first saw Nadine Shah — as a solo artist — supporting the Futureheads in the same venue, back in 2013. At that point she had either just released her debut album, Love Your Dum and Mad, or was just about to (it came out sometime in the same month), but this was the first we'd heard of her. If memory serves, she played with a small backing band (possibly just a drummer, likely co-writer Ben Hillier) and she handled keyboards. It's a pretty small venue. My friends and I loved that show, and as we talked about how good it was and what it reminded us of (I think we said stuff like "that was nice and gothy, I haven't heard stuff like that for ages"), we hadn't realised that she was sat right behind us, with a grin on her face!

Since then she's put out two more albums: Fast Food, which got a huge amount of airplay on 6 Music (and was the point at which I bought into her), and the Mercury-nominated Holiday Destination, a really compelling evolution of her art and a strong political statement.

Kinevil 7 inch


It turns out, though, that I think we saw her before that, too: A local band called Kinevil (now disbanded) supported Ladytron at Digital in Newcastle in 2008. I happen to have their single "Everything's Gone Black" on vinyl (here it is on bandcamp) and noticed years later that the singer is credited as Nadine Shar.

This year's gig was my first of 2019, and it was a real blast. The sound mix was fantastic, and loud. The performance was very confident: Nadine now exclusively sings; all the instrument work is done by her band, which is now five-strong. The saxophonist made some incredible noises that reminded me of some synth stuff from mid-90s Nine Inch Nails records. I've never heard a saxophone played that way before. Apparently Shah has been on hiatus for a while for personal reasons and this was her comeback gig. Under those circumstances, it was very impressive. I hope the reception was what she hoped for.

17 July, 2019 03:45PM

July 16, 2019


Holger Levsen


Wanna work on Debian LTS (and get funded)?

If you are in Curitiba and are interested in working on Debian LTS (and getting paid for that work), please come and talk to me; Debian LTS is still looking for more contributors! Also, if you want a bigger challenge, Extended LTS also needs more contributors, though I'd suggest you start with regular LTS ;)

On Thursday, July 25th, there will also be a talk titled "Debian LTS, the good, the bad and the better" where we plan to present what we think works nicely and what doesn't work so nicely yet and where we also want to gather your wishes and requests.

If you cannot make it to Curitiba, there will be a video stream (and the possibility to ask questions via IRC), and you can always send me an email or ping me on IRC if you want to work on LTS.

16 July, 2019 03:56PM

July 15, 2019

Russ Allbery

DocKnot 3.01

The last release of DocKnot failed a whole bunch of CPAN tests that didn't fail locally or on Travis-CI, so this release cleans that up and adds a few minor things to the dist command (following my conventions to run cppcheck and Valgrind tests). The test failures are moderately interesting corners of Perl module development that I hadn't thought about, so they seem worth blogging about.

First, the more prosaic one: as part of the tests of docknot dist, the test suite creates a new Git repository because the release process involves git archive and needs a repository to work from. I forgot to use git config to set user.email and user.name, so that broke on systems without Git global configuration. (This would have been caught by the Debian package testing, but sadly I forgot to add git to the build dependencies, so that test was being skipped.) I always get bitten by this each time I write a test suite that uses Git; someday I'll remember the first time.
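
Conceptually, the fix is a couple of lines in the test setup (a sketch with placeholder identity values):

# When a test suite creates a scratch repository, set the identity
# locally so tests don't depend on the user's global Git configuration
git init scratch-repo
git -C scratch-repo config user.name 'Test User'
git -C scratch-repo config user.email 'test@example.com'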

Second, the build system runs perl Build.PL to build a tiny test package using Module::Build, and it was using system Perl. Slaven Rezic pointed out that this fails if Module::Build isn't installed system-wide or if system Perl doesn't work for whatever reason. Using system Perl is correct for normal operation of docknot dist, but the test suite should use the same Perl version used to run the test suite. I added a new module constructor argument for this, and the test suite now passes in $^X for that argument.

Finally, there was a more obscure problem on Windows: the contents of generated and expected test files didn't match because the generated file content was supposedly just the file name. I think I fixed this, although I don't have Windows on which to test. The root of the problem is another mistake I've made before with Perl: File::Temp->new() does not return a file name, but it returns an object that magically stringifies to the file name, so you can use it that way in many situations and it appears to magically work. However, on Windows, it was not working the way that it was on my Debian system. The solution was to explicitly call the filename method to get the actual file name and use it consistently everywhere; hopefully tests will now pass on Windows.

You can get the latest version from CPAN or from the DocKnot distribution page. A Debian package is also available from my personal archive. I'll probably upload DocKnot to Debian proper during this release cycle, since it's gotten somewhat more mature, although I'd like to make some backward-incompatible changes and improve the documentation first.

15 July, 2019 04:15AM

July 14, 2019

François Marier

Installing Debian buster on a GnuBee PC 2

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick.

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.0
reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open the web-based admin panel at http://192.168.10.0.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the IPv4 address of the WAN port since that will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   8390655   8388608     4G Linux swap
/dev/sda2  8390656 234441614 226050959 107.8G Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, login using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx  # type password set above
mkdir .ssh
vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor  # choose vim.basic
dpkg-reconfigure locales  # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Next steps

At this point, my GnuBee is running the latest version of Debian stable, however there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel, though this is being worked on by community members.

I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.

14 July, 2019 10:30PM


Benjamin Mako Hill

Hairdressers with Supposedly Funny Pun Names I’ve Visited Recently

Mika and I recently spent two weeks biking home to Seattle from our year in Palo Alto. The route was ~1400 kilometers and took us past 10 volcanoes and 4 hot springs.

Route of our bike trip from Davis, CA to Oregon City, OR. An elevation profile is also shown.

To my delight, the route also took us past at least 8 hairdressers with supposedly funny pun names! Plus two in Oakland on our way out.

As a result of this trip, I’ve now made 24 contributions to the Hairdressers with Supposedly Funny Pun Names Flickr group photo pool.

14 July, 2019 10:08PM by Benjamin Mako Hill


Daniel Silverstone

A quarter in review - Halfway to 2020

The 2019 plan - Second-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the second of those. The first can be found here.

1. Weight loss

So when I wrote in April, I was around 88.6kg and worried about how my body seemed to really like roughly 90kg. This is going to be a similar report. Despite managing to lose 10kg in the first quarter, the second quarter has been harder, and with me focussed on running rather than my full gym routine, the loss has been smaller. I've recently started to push a bit lower though and I'm now around 83kg.

I could really shift my focus back to all-round gym exercise, but honestly I've been enjoying a lot of the spare time returned to me by switching back to my cycling and walking, plus now running a bit. I imagine as the weather returns to its more usual wet mess the gym will return to prominence for me, and with that maybe I'll shed a bit of more of this weight.

I continue to give myself a solid "B" for this, though if I were generous, given everything else, I might consider a "B+".

2. Couch to 5k

Last time I wrote, I'd just managed a 5k run for the first time. Since then I completed the couch-to-5k programme and have now done eight parkruns. I missed one week due to awful awful weather, but otherwise I've managed to be consistent and attended one parkrun per week. They've all been at the same course apart from one which was in Southampton. This gives me a clean ability to compare runs.

My first parkrun was 30m32s, though I remain aware that the course at Platt Fields is a smidge under 5k really, and I was really pleased with that. However, as a colleague explained to me, "It never gets easier… each parkrun is just as hard, if not harder, than the previous one." But, to continue his quote, "…you just get faster", and I have. Since that first run, I have improved my personal record to 27m34s which is, to my mind at least, bloody brilliant. Even when this week I tried to force myself to go slower, aiming to pace out a 30m run, I ended up at 27m49s.

I am currently trying to convince myself that I can run a bit more slowly and thus increase my distance, but for now I think 5k is a stuck record for me. I'll continue to try and improve that time a little more.

I said last review that I'd be adjusting my goals in the light of how well I'd done with couch-to-5k at that point. Since I've now completed it, I'll be renaming this section the 'Fitness' section and hopefully next review I'll be able to report something other than running in it.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

I did a bunch more on NetSurf this quarter. We had an amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs. I'm very pleased with how I've done with that.

Rob and I failed to do much with the pub software, but Lars and I continue to work on the Fable project.

So over-all, this one doesn't get better than the "C" from last time - still satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before.

I remain doing "okay" here, and I want to be a little more positive than last review, so I'm upgrading to a "B".

5. Give back to the Rust community

My work with Rustup continues, though in the past month or so I've been pretty lax because I've had to travel a lot for work. I continue to be as heavily involved in Rust as I can be -- I've stepped up to the plate to lead the Rustup team, and that puts me into the Rust developer tools team proper. I attended a conference, in part to represent the Rust developer community, and I have some followup work on that which I still need to complete.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

Previously I gave myself an 'A' but thought I could manage an 'A+' if I tried harder. Since I've been a little lax recently I'm dropping myself to an 'A-'.

6. Be better at tidying up

Once again, I came out of the previous review fired up to tidy more. Once again, that energy ebbed after about a week. Every time I feel like I might have the mental space to begin building a cleaning habit, something comes along to knock the wind out of my sails. Sometimes that's a big work related thing, but sometimes it's something as small as "Our internet connection is broken, so suddenly instead of having time to clean, I have no time because it's broken and so I can't do anything, even things which don't need an internet connection."

This remains an "F" for fail, sadly.

7. Save up money for renovations

The savings process continues. I've not managed to put quite as much away in this quarter as I did the quarter before, but I have been doing as much as I can. I've finally consolidated most of my savings into one place which also makes them look a little healthier.

The renovations bills continue to loom, but we're doing well, so I think I get to keep the "A" here.

8. Go on a proper holiday

Well, I had that week "off" but ended up doing so much stuff that it doesn't count as much of a holiday. Rob is now in Japan, but I've not managed to take the time as a holiday because my main project at work needs me there since our project manager and his usual stand-in are both also away in Japan.

We have made a basic plan to take some time around the August Bank Holiday to perhaps visit family etc, so I'm going to upgrade us to "C+" since we're making inroads, even if we've not achieved a holiday yet.

Summary

Last quarter, my scores were B, A+, C, B-, A, F, A, C, which, if we ignore the F, is an average of A, though the F did ruin things a little.

This quarter I have a B+, A+, C, B, A-, F, A, C+, which ignoring the F is a little better, though still not great. I guess here's to another quarter.

14 July, 2019 03:54PM by Daniel Silverstone


Ben Hutchings

Talk: What goes into a Debian package?

Some months ago I gave a talk / live demo at work about how Debian source and binary packages are constructed.

Yesterday I repeated this talk (with minor updates) for the Chicago LUG. I had quite a small audience, but got some really good questions at the end. I have now put the notes up on my talks page.

No, I'm not in Chicago. This was a trial run of giving a talk remotely, which I'll also be doing for DebConf this year. I set up an RTMP server in the cloud (nginx) and ran OBS Studio on my laptop to capture and transmit video and audio. I'm generally very impressed with OBS Studio, although the X window capture source could do with improvement. I used the built-in camera and mic, but the mic picked up a fair amount of background noise (including fan noise, since the video encoding keeps the CPU fairly busy). I should probably switch to a wearable mic in future.
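
For anyone wanting to replicate this, the server side can be quite small; a minimal sketch, assuming nginx with the RTMP module (packaged in Debian as libnginx-mod-rtmp), where the rtmp block sits at the top level of nginx.conf alongside http:

rtmp {
    server {
        listen 1935;        # default RTMP port
        application live {
            live on;        # accept and relay a live stream
        }
    }
}

OBS Studio is then pointed at rtmp://your-server/live with a stream key of your choosing.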

14 July, 2019 02:05PM


Martin Pitt

Lightweight i3 developer desktop with OSTree and chroots

Introduction

I’ve always liked a clean, slim, lightweight, and robust OS on my laptop (which is my only PC) – I’ve been running the i3 window manager for years, with some custom configuration to enable the Fn keys and set up my preferred desktop session layout. Initially on Ubuntu, and for the last two and a half years under Fedora (since I moved to Red Hat). I started with a minimal server install and then had a post-install script that installed the packages that I need, restored my /etc files from git, and did some other minor bits.

14 July, 2019 12:00AM

July 12, 2019


Jonathan Carter

My Debian 10 (buster) Report

In the early hours of Sunday morning (my time), Debian 10 (buster) was released. It’s amazing to be a part of an organisation where so many people work so hard to pull together and make something like this happen. Creating and supporting a stable release can be tedious work, but it’s essential for any kind of large-scale or long-term deployments. I feel honored to have had a small part in this release.

Debian Live

My primary focus area for this release was to get the Debian live images into good shape. It’s not perfect yet, but I think we made some headway. The out-of-the-box experiences for the desktop environments on the live images are better, and we added a new graphical installer that makes Debian easier to install for the average laptop/desktop user. For the bullseye release I intend to ramp up quality efforts and have a bunch of ideas to make that happen, but more on that another time.

Calamares installer on Cinnamon live image.

Other new stuff I’ve been working on in the Buster cycle

Gamemode

Gamemode is a library and tool that changes your computer’s settings for maximum performance when you launch a game. Some new games automatically invoke Gamemode when they’re launched, but for most games you have to do it manually; check their GitHub page for documentation.
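
As a sketch of the manual route (the exact invocation depends on your GameMode version, so do check the documentation; ./mygame stands in for the game's real launch command):

# Ask GameMode to activate for this process by preloading the library
LD_PRELOAD=libgamemodeauto.so.0 ./mygame
# Newer releases also ship a wrapper script that does the same
gamemoderun ./mygame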

Innocent de Marchi Packages

I was sad to learn about the passing of Innocent de Marchi, a math teacher who was also a Debian contributor and for whom I’ve sponsored a few packages before. I didn’t know him personally, but learned that he was really loved in his community. I’m continuing to maintain some of his packages that I also had an interest in:

  • calcoo – generic lightweight graphical calculator app that can be useful on desktop environments that don’t have one
  • connectagram – a word unscrambling game that gets its words from Wiktionary
  • fracplanet – fractal planet generator
  • fractalnow – fast, advanced fractal generator
  • gnubik – 3D Rubik’s cube game
  • tanglet – single player word finding game based on Boggle
  • tetzle – jigsaw puzzle game (was also Debian package of the Day #44)
  • xabacus – simulation of the ancient calculator

Powerline Goodies

I wrote a blog post on vim-airline and powerlevel9k shortly after packaging those: New powerline goodies in Debian.

Debian Desktop

I helped co-ordinate the artwork for the Buster release, although Laura Arjona did most of the heavy lifting on that. I updated some of the artwork in the desktop-base package and in debian-installer. Working on the artwork packages exposed me to some of their bugs, but not in time to fix them for buster, so that will be a goal for bullseye. I also packaged Quicksand, the font that’s widely used in the buster artwork (Debian package: fonts-quicksand). This allows SVG versions of the artwork in the system to display with the correct font.

Bundlewrap

Bundlewrap is a configuration management system written in Python. If you’re familiar with bcfg2 and Ansible, the concepts in Bundlewrap will look very familiar to you. It’s not as featureful as either of those systems, but what it lacks in advanced features it more than makes up for in ease of use and how easy it is to learn. It’s immediately useful for the large amount of cases where you want to install some packages and manage some config files based on conditions with templates. For anything else you might need you can write small Python modules.

Catimg

catimg is a tool that converts jpeg, png, ico and gif files to terminal output. This was also Debian Package of the day #26.
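
Usage is as simple as pointing it at a file (photo.jpg is a placeholder):

catimg photo.jpg
# Optionally constrain the output width, in characters
catimg -w 80 photo.jpg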

Gnome Shell Extensions

  • gnome-shell-extension-dash-to-panel: dash-to-panel is an essential shell extension for me, and does more to make Gnome 3 feel like Gnome 2.x than the classic mode does. It’s the easiest way to get a nice single panel on the top of the screen that contains everything that’s useful.
  • gnome-shell-extension-hide-veth: If you use LXC or Docker (or similar), you’ll probably be somewhat annoyed at all the ‘veth’ interfaces you see in network manager. This extension will hide those from the GUI.
  • gnome-shell-extension-no-annoyance: No annoyance fixes something that should really be configurable in Gnome by default. It removes all those nasty “Window is ready” notifications that are intrusive and distracting.

Other

That’s a wrap for my new Debian packages I maintain in Buster. There’s a lot more that I’d like to talk about that happened during this cycle, like that crazy month when I ran for DPL! And also about DebConf stuff, but I’m all out of time and on that note, I’m heading to DebCamp/DebConf in around 12 hours and look forward to seeing many of my Debian colleagues there :-)

12 July, 2019 04:58PM by jonathan


Jonathan McDowell

Burn it all

I am generally positive about my return to Northern Ireland, and decision to stay here. Things are much better than when I was growing up and there’s a lot more going on here these days. There’s an active tech scene and the quality of life is pretty decent. That said, this time of year is one that always dampens my optimism. TLDR: This post brings no joy. This is the darkest timeline.

First, we have the usual bonfire issues. I’m all for setting things on fire while having a drink, but when your bonfire is so big it leads to nearby flat residents being evacuated to a youth hostel for the night or you decide that adding 1800 tyres to your bonfire is a great idea, it’s time to question whether you’re celebrating your cultural identity while respecting those around you, or just a clampit (thanks, @Bolster). If you’re starting to displace people from their homes, or releasing lots of noxious fumes that are a risk to your health and that of your local community you need to take a hard look at the message you’re sending out.

Secondly, we have the House of Commons vote on Tuesday to amend the Northern Ireland (Executive Formation) Bill to require the government to bring forward legislation to legalise same-sex marriage and abortion in Northern Ireland. On the face of it this is a good thing; both are things the majority of the NI population want legalised and it’s an area of division between us and the rest of the UK (and, for that matter, Ireland). Dig deeper and it doesn’t tell a great story about the Northern Ireland Assembly. The bill is being brought in the first place because (at the time of writing) it’s been 907 days since Northern Ireland had a government. The current deadline for forming an executive is August 25th, or another election must be held. The bill extends this to October 21st, with an option to extend it further to January 13th. That’ll be 3 years since the assembly sat. That’s not what I voted for; I want my elected officials to actually do their jobs - I may not agree with all of their views, but it serves NI much more to have them turning up and making things happen than failing to do so. Especially during this time of uncertainty about borders and financial stability.

It’s also important to note that the amendments only kick in if an executive is not formed by October 21st - if there’s a functioning local government it’s expected to step in and enact the appropriate legislation to bring NI into compliance with its human rights obligations, as determined by the Supreme Court. It’s possible that this will provide some impetus to the DUP to re-form the assembly in NI. Equally it’s possible that it will make it less likely that Sinn Fein will rush to re-form it, as both amendments cover issues they have tried to resolve in the past.

Equally while I’m grateful to Stella Creasy and Conor McGinn for proposing these amendments, it’s a rare example of Westminster appearing to care about Northern Ireland at all. The ‘backstop’ has been bandied about as a political football, with more regard paid to how many points Tory leadership contenders can score off each other than what the real impact will be upon the people in Northern Ireland. It’s the most attention anyone has paid to us since the Good Friday Agreement, but it’s not exactly the right sort of attention.

I don’t know what the answer is. Since the GFA politics in Northern Ireland has mostly just got more polarised rather than us finding common ground. The most recent EU elections returned an Alliance MEP, Naomi Long, for the first time, which is perhaps some sign of a move to non-sectarian politics, but the real test would be what a new Assembly election would look like. I don’t hold out any hope that we’d get a different set of parties in power.

Still, I suppose at least it’s a public holiday today. Here’s hoping the pub is open for lunch.

12 July, 2019 11:17AM

July 11, 2019


Markus Koschany

My Free Software Activities in June 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

First of all I want to thank Debian’s Release Team. Whenever there was something to unblock for Buster, I always got feedback within hours and in almost all cases the package could just migrate to testing. Good communication and clear rules helped a lot to make the whole freeze a great experience.

Debian Games

  • I reviewed and sponsored a couple of packages again this month.
  • Reiner Herrmann provided a complete overhaul of xbill, so that we all can fight those Wingdows Viruses again.
  • He also prepared a new upstream release of Supertuxkart, which is currently sitting in experimental but will hopefully be uploaded to unstable within the next days.
  • Bernhard Übelacker fixed two annoying bugs in Freeorion (#930417) and Warzone2100 (#930942). Unfortunately it was too late to include the fixes in Debian 10, but I will prepare an update for the next point release.
  • Well, the freeze is over now (hooray) and I intend to upgrade a couple of games in the warm (if you live in the northern hemisphere) month of July again.

Debian Java

  • I prepared another security update for jackson-databind to fix CVE-2019-12814 and CVE-2019-12384 (#930750).
  • I worked on a security update for Tomcat 8 but have not finished it yet.

Debian LTS

This was my fortieth month as a paid contributor and I have been paid to work 17 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 10.06.2019 until 16.06.2019 and from 24.06.2019 until 30.06.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in wordpress, ansible, libqb, radare2, lemonldap-ng, irssi, libapache2-mod-auth-mellon and openjpeg2.
  • DLA-1827-1. Issued a security update for gvfs fixing 1 CVE.
  • DLA-1831-1. Issued a security update for jackson-databind fixing 2 CVE.
  • DLA-1822-1. Issued a security update for php-horde-form fixing 1 CVE.
  • DLA-1839-1. Issued a security update for expat fixing 1 CVE.
  • DLA-1845-1. Issued a security update for dosbox fixing 2 CVE.
  • DLA-1846-1. Issued a security update for unzip fixing 1 CVE.
  • DLA-1851-1. Issued a security update for openjpeg2 fixing 2 CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my thirteenth month and I have been paid to work 22 hours on ELTS (15 hours were allocated + 7 hours from last month).

  • ELA-133-1. Issued a security update for linux fixing 9 CVE.
  • ELA-137-1. Issued a security update for libvirt fixing 1 CVE.
  • ELA-139-1. Issued a security update for bash fixing 1 CVE.
  • ELA-140-1. Issued a security update for glib2.0 fixing 3 CVE.
  • ELA-141-1. Issued a security update for unzip fixing 1 CVE.
  • ELA-142-1. Issued a security update for libxslt fixing 2 CVE.

Thanks for reading and see you next time.

11 July, 2019 08:32PM by Apo

Vincent Sanders

We can make it better than it was. Better...stronger...faster.

It is not a novel observation that computers have become so powerful that a reasonably recent system has a relatively long life before obsolescence. This is in stark contrast to the period between the nineties and the teens where it was not uncommon for users with even moderate needs from their computers to upgrade every few years.

This upgrade cycle was mainly driven by huge advances in processing power, memory capacity and ballooning data storage capability. Of course the software engineers used up more and more of the available resources and with each new release ensured users needed to update to have a reasonable experience.

And then sometime in the early teens this cycle slowed almost as quickly as it had begun as systems had become "good enough". I experienced this at a time I was relocating for a new job and had moved most of my computer use to my laptop which was just as powerful as my desktop but was far more flexible.

As a software engineer I used to have a pretty good computer for myself but I was never prepared to spend the money on "top of the range" equipment because it would always be obsolete and generally I had access to much more powerful servers if I needed more resources for a specific task.

To illustrate, the system specification of my desktop PC at the opening of the millennium was:
  • Single core Pentium 3 running at 500Mhz
  • Socket 370 motherboard with 100 Mhz Front Side Bus
  • 128 Megabytes of memory
  • A 25 Gigabyte Deskstar hard drive
  • 150 Mhz TNT 2 graphics card
  • 10 Megabit network card
  • Unbranded 150W PSU
But by 2013 the specification had become:
    2013 PC build still using an awesome beige case from 1999
  • Quad core i5-3330S Processor running at 2700Mhz
  • FCLGA1155 motherboard running memory at 1333 Mhz
  • 8 Gigabytes of memory
  • Terabyte HGST hard drive
  • 1,050 Mhz Integrated graphics
  • Integrated Intel Gigabit network
  • OCZ 500W 80+ PSU
The performance change between these systems was more than tenfold in fourteen years with an upgrade roughly once every couple of years.

I recently started using that system again in my home office mainly for Computer Aided Design (CAD), Computer Aided Manufacture (CAM) and Electronic Design Automation (EDA). The one addition was to add a widescreen monitor as there was not enough physical space for my usual dual display setup.

To my surprise I increasingly returned to this setup for programming tasks. Firstly, being at my desk acts as an indicator to family members that I am concentrating, whereas the laptop no longer had that effect. Secondly, I really like the ultra-wide display for coding; it has become my preferred display and I had been saving for a UWQHD monitor.

Alas, last month the system started freezing: sometimes it would be stable for several days and then, without warning, the mouse pointer would stop, my music would cease and a power cycle was required. I tried several things to rectify the situation: replacing the thermal compound, replacing the CPU cooler and trying different memory, all to no avail.

As fixing the system cheaply appeared unlikely, I began looking for a replacement and was immediately troubled by the size of the task. Somewhere in the last six years, while I was not paying attention, the world had moved on; after a great deal of research I managed to come to an answer.

AMD have recently staged something of a comeback with their Ryzen processors after almost a decade of very poor offerings when compared to Intel. The value for money when considering the processor and motherboard combination is currently very much weighted towards AMD.

My timing also seems fortuitous as the new Ryzen 2 processors have just been announced which has resulted in the current generation being available at a substantial discount. I was also encouraged to see that the new processors use the same AM4 socket and are supported by the current motherboards allowing for future upgrades if required.

I purchased a complete new system for under five hundred pounds, comprising:
    New PC assembled and wired up
  • Hex core Ryzen 5 2600X Processor 3600Mhz
  • MSI B450 TOMAHAWK AMD Socket AM4 Motherboard
  • 32 Gigabytes of PC3200 DDR4 memory
  • Aero Cool Project 7 P650 80+ platinum 650W Modular PSU
  • Integrated RTL Gigabit networking
  • Lite-On iHAS124 DVD Writer Optical Drive
  • Corsair CC-9011077-WW Carbide Series 100R Silent Mid-Tower ATX Computer Case
to which I added some recycled parts:
  • 250 Gigabyte SSD from laptop upgrade
  • GeForce GT 640 from a friend
I installed a fresh copy of Debian and all my CAD/CAM applications and have been using the system for a couple of weeks with no obvious issues.

An example of the performance difference is compiling NetSurf: a clean build with an empty ccache used to take 36 seconds and now takes 16, which is a nice improvement. However, a clean build with the results cached has gone from 6 seconds to 3, which is far less noticeable, and during development a normal edit, build, debug cycle affecting only a small number of files has gone from 400 milliseconds to 200, which simply feels instant in both cases.

My conclusion is that the new system is completely stable but that I have gained very little in common usage. Objectively the system is over twice as fast as its predecessor but aside from compiling large programs or rendering huge CAD drawings this performance is not utilised. Given this I anticipate this system will remain unchanged until it starts failing and I can only hope that will be at least another six years away.

11 July, 2019 05:15PM by Vincent Sanders (noreply@blogger.com)

Arturo Borrero González

Netfilter workshop 2019 Malaga summary


This week we had the annual Netfilter Workshop. This time the venue was Malaga (Spain). We had the hotel right in the Malaga downtown, and the meeting room was at the University ETSII Malaga. We had plenty of talks, sessions, discussions and debates, and I will try to summarize in this post what it was about.

Florian Westphal, Linux kernel hacker, Netfilter coreteam member and engineer from Red Hat, started with a talk related to some work being done in the core of the Netfilter code in the kernel to convert packet processing to lists. He shared an overview of current problems and challenges. Processing in a list rather than per packet seems to have several benefits: code can be smarter and faster, so this seems like a good improvement. On the other hand, Florian thinks some of the pain of refactoring all the code may not be worth it. Other approaches may be considered to introduce even more fast forwarding paths (apart from the flow table mechanism which is already available).

Florian also followed up with the next topic: testing. We are starting to have a lot of duplicated code for testing. Pablo suggested introducing some dedicated tools to ease maintenance and testing itself. Special mentions go to nfqueue and tproxy, two mechanisms that require quite a bit of code to be well tested (and can be hard to set up anyway).

Ahmed Abdelsalam, engineer from Cisco, gave a talk on SRv6 Network programming. This protocol allows simplifying some interesting use cases from the network engineering point of view. For example, SRv6 aims to eliminate some tunneling and overlay protocols (VXLAN and friends), and increase native multi-tenancy support in IPv6 networks. Network Services Chaining is one of the main use cases, which is really interesting in cloud environments. He mentioned that some Kubernetes CNI mechanisms are going to implement SRv6 soon. This protocol looks interesting not only for the cloud use cases but also from the general network engineering point of view. By the way, Ahmed shared some really interesting numbers and graphs regarding global IPv6 adoption. Ahmed shared the work that has been done in Linux in general and in nftables in particular to support such setups. I had the opportunity to talk more personally with Ahmed during the workshop to learn more about this mechanism and the different use cases and applications it has.

Fernando, GSoC student, gave us an overview of the OSF functionality of nftables to identify different operating systems from inside the ruleset. He shared some of the improvements he has been working on, and some of them are great, like version matching and wildcards.

Brett, engineer from Untangle, shared some plans to use a new nftables expression (nft_dict) to arbitrarily match on metadata. The proposed design is interesting because it covers some use cases from new perspectives. This triggered a debate on different design approaches and solutions to the issues presented.

Next day, Pablo Neira, head of the Netfilter project, started by opening a debate about extra features for nftables, like the ones provided via xtables-addons for iptables. The first we evaluated was GeoIP. I suggested having some generic infrastructure to be able to write/query external metadata from nftables, given we have more and more use cases looking for this (OSF, the dict expression, GeoIP). Other exotic extensions were discussed, like TARPIT, DELUDE, DHCPMAC, DNETMAP, ECHO, fuzzy, gradm, iface, ipp2p, etc.

A talk on connection tracking support for the linux bridge followed, led by Pablo. A status update on latest work was shared, and some debate happened regarding different use cases for ebtables & nftables.

Next topic was a complex one with no easy solutions: hosting of the Netfilter project infrastructure: git repositories, mailing lists, web pages, wiki deployments, bugzilla, etc. Right now the project has a couple of physical servers housed in a datacenter in Seville. But nobody has time to properly maintain them, upgrade them, and such. Also, part of our infra is getting old, for example the webpage. Some other stuff is mostly unmaintained, like project twitter accounts. Nobody actually has time to keep things updated, and this is probably the base problem. Many options were considered, including moving to github, gitlab, or other hosting providers.

After lunch, Pablo followed up with a status update on hardware flow offload capabilities for nftables. He started with an overview of the current status of ethtool_rx and tc offloads, capabilities and limitations. It should be possible for most commodity hardware to support some variable amount of offload capabilities, but apparently the code was not in very good shape. The new flow block API should improve this situation, while also giving support for nftables offload. There is a related article in LWN.

Next talk was by Phil, engineer at Red Hat. He commented on user-defined strings in nftables, which presents some challenges. Some debate happened, mostly to get to an agreement on how to proceed.

Group photo

Next day, Phil was the one to continue with the workshop talks. This time the talk was about sharing his TODO list for iptables-nft, presentation and discussion of planned work. This triggered a discussion on how to handle certain bugs in Debian Buster, which have a lot of patch dependencies (so we cannot simply cherry-pick a single patch for stable). It turns out I maintain most of the Debian Netfilter packages, and Sebastian Delafond, who is also a Debian Developer, was attending the workshop too. We provided some Debian-side input on how to better proceed with fixes for specific bugs in Debian. Phil continued pointing out several improvements that we require in nftables in order to support some rather exotic use cases in both iptables-nft and ebtables-nft.

Yi-Hung Wei, engineer working on OpenVSwitch, shared some interesting features related to using the conntrack engine in certain scenarios. OVS is really useful in cloud environments. Specifically, the open discussion was around zone-based timeout policy support for other Netfilter use cases. It was pointed out by Pablo that nftables already supports this. By the way, the Wikimedia Cloud Services team plans to use OVS in the near future by means of Neutron (a VXLAN+OVS setup).

Phil gave another talk related to nftables undefined behaviour situations. He has been working lately in polishing the last gaps between -legacy and -nft flavors of iptables and friends. Mostly what we have yet to solve are some corner cases. Also some weird ICMP situation. Thanks to Phil for taking care of this. Actually, Phils has been contributing a lot to the Netfilter project in the past few years.

Stephen, an engineer from secunet, followed up after lunch to bring up a couple of topics about improvements to the kernel datapath using XDP. He also commented on partitioning the system into control-plane and data-plane CPUs. The nftables flow table infra is doing exactly this, as pointed out by Florian.

Florian continued with some open-for-discussion topics on pending features in nftables. It looks like every day we have more requests for different setups and use cases with nftables. We need to define use cases as well as possible, and also try to avoid reinventing the wheel for some stuff.

Laura, an engineer from Zevenet, followed up with a really interesting talk on load balancing and clustering using nftables. The amount of features and improvements added to nftlb since last year is amazing: stateless DNAT topologies, L7 helpers support, more topologies for virtual services and backends, improvements for affinities, security policies, different clustering architectures, etc. We had an interesting conversation about how we integrate with etcd at the Wikimedia Foundation for sharing information between load balancers and for pooling/depooling backends. They are also spearheading a proposal to include support for nftables in Kubernetes kube-proxy.

Abdessamad El Abbassi, also an engineer from Zevenet, shared the project that the company is developing to create an nft-based L7 proxy capable of offloading. They showed some metrics in which this new L7 proxy outperforms HAProxy for some setups. Quite interesting. Some debate also happened around SSL termination and how to better handle that situation.

That very afternoon the core team of the Netfilter project had a meeting in which some internal topics were discussed. Among other things, we decided to invite Phil Sutter to join the Netfilter core team.

I really enjoyed this round of the Netfilter workshop. I very much enjoyed the time with all the folks, old friends and new ones.

11 July, 2019 12:00PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

DebConf Video player

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript you want to target, it compiles to something that should work on almost every browser (and if it doesn't, in most cases the fix is to just tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I think it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on for the last few years or so. It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

11 July, 2019 10:14AM

hackergotchi for Steve Kemp

Steve Kemp

Building a computer - part 1

I've been tinkering with hardware for a couple of years now. Most of this is trivial stuff, if I'm honest, for example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submit readings to an MQ bus.

Off-hand I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio-receiver to an ESP8266 device, then had to reverse engineer the protocol of the device I wanted to listen for. I recorded a radio-burst using an SDR dongle on my laptop, broke the transmission into 1s and 0s manually, worked out the payload and then ported that code to the ESP8266 device.

Anyway I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best, the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC, later I moved on to assembly language mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video-games so often these days is because I'm just not very good without cheating ;)

Anyway the Z80 is a reasonably simple processor, available in a 40-pin DIP format. There are the obvious connectors for power, ground, and a clock-source to make the thing tick. After that there are pins for the address-bus, and pins for the data-bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, because I have no ROM, because I have no keyboard I'm going to .. fake it.

The Z80 has 40 pins, of which I reckon we need to cable up over half. Only the Arduino Mega has enough pins for that, so I think if I use a Mega I can wire it to the Z80 and then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The Arduino will monitor the address-bus
    • When the Z80 makes a request to read the RAM at address 0x0000 it will return something from its memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff it will store it away in its memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short the Arduino will run a sketch with a 1024 byte array, which the Z80 will believe is its memory. Via the serial console I can read/write to that RAM, or have the contents hardcoded.
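
To make that concrete, here's a minimal sketch of what such an Arduino program could look like. This is an illustration rather than tested code: the pin assignments are made up, the bus handshaking is simplified, and a real build would need all 16 address lines and 8 data lines wired up per the Z80 datasheet.

// Untested illustration - pin numbers and handshaking are assumptions.
const int CLK = 2;                        // clock output to the Z80
const int MREQ = 3, RD = 4, WR = 5;       // Z80 bus-control lines (active low)
const int ADDR[16] = {22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37};
const int DATA[8]  = {38,39,40,41,42,43,44,45};

byte ram[1024];                           // the Z80's entire "memory"

unsigned int readAddressBus() {
  unsigned int a = 0;
  for (int i = 0; i < 16; i++)
    a |= (unsigned int)digitalRead(ADDR[i]) << i;
  return a % 1024;                        // mirror our 1K array over the 64K space
}

void setup() {
  pinMode(CLK, OUTPUT);
  pinMode(MREQ, INPUT); pinMode(RD, INPUT); pinMode(WR, INPUT);
  for (int i = 0; i < 16; i++) pinMode(ADDR[i], INPUT);
  ram[0] = 0x00;                          // a NOP at the reset vector, for example
}

void loop() {
  digitalWrite(CLK, HIGH);                // tick the clock by hand
  digitalWrite(CLK, LOW);

  if (digitalRead(MREQ) == LOW) {         // the Z80 wants memory
    unsigned int addr = readAddressBus();
    if (digitalRead(RD) == LOW) {         // read cycle: we drive the data bus
      for (int i = 0; i < 8; i++) {
        pinMode(DATA[i], OUTPUT);
        digitalWrite(DATA[i], (ram[addr] >> i) & 1);
      }
    } else if (digitalRead(WR) == LOW) {  // write cycle: we sample the data bus
      byte v = 0;
      for (int i = 0; i < 8; i++) {
        pinMode(DATA[i], INPUT);
        if (digitalRead(DATA[i])) v |= (1 << i);
      }
      ram[addr] = v;
    }
  } else {
    for (int i = 0; i < 8; i++) pinMode(DATA[i], INPUT);  // release the bus
  }
}

Real Z80 bus cycles also care about timing, so a working version would probably need to hold the clock, or use the /WAIT line, while the Arduino catches up.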

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway I've ordered a bunch of Z80 chips and an Arduino Mega (I own only one Arduino, having moved on to ESP8266 devices pretty quickly), so once the order arrives I'll document the process further.

Once it works I'll need to slowly remove the Arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the Arduino over time with standalone stuff.

The end result? I guess I have no illusions I can connect a full-sized keyboard to the chip, and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly, and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

11 July, 2019 10:01AM

July 10, 2019

Sven Hoexter

Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution) which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds and the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm

$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size-wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!

10 July, 2019 02:31PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Bose on-ear wireless headphones

Azoychka modelling the headphones

Earlier this year, and after about five years, I had to accept that my beloved AKG K451 foldable headphones had finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that had got jammed in the jack, most likely snapped from one of the several headphone wires I'd gone through.

The K451s were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but the difference was close, and the portability gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not a lot of on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews for their bigger brother (the 1000 series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, but the circumstances were very poor: a noisy store, the headphones tethered to a security frame on a very short leash so I had to stoop to put them on, and no ability to try my own music through the headset. The headset itself seemed poorly constructed, with ill-fitting hard plastic that rattled when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, the only criticism I have of the Bose headphones is that I can get some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances where I used wired headphones, is all the situations I can now use them in that I wouldn't have bothered with before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

10 July, 2019 09:27AM

hackergotchi for Eddy Petrișor

Eddy Petrișor

Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}
The initial skeleton did not contain the derive Debug attribute or the capacity field; I added them myself.

Now I am trying to understand what needs to happen behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}
I was really struggling to understand what the returned iterator type should be in my case since, obviously, std::vec is out, because (a) I am trying to do a no_std implementation of (b) something that should look a little like std::vec.

That was until I found this wonderful example of a custom type that does not use any already-implemented Iterator, but defines the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}
The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that, once I had only one new thing to deal with - the generics; luckily the lifetime part seemed to simply be considered part of the generic thing - everything was easier to navigate.


Still, the fact that there are so many new things at once, one of them being lifetimes - which cannot be taught, only experienced (@oli_obk) - makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing Deref for StackVec, for the same reasons.

I think I am seeing on my own skin what Oliver Scherer was saying about big info dumps at the beginning not being the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should be the signature of the impl? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}
Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's so unclear, at least to me, at this point. And because of that I am unsure what the implementation should even look like.

I don't know what I'm doing, but I hope things will become clear with more exercise.

10 July, 2019 12:03AM by eddyp (noreply@blogger.com)

July 09, 2019

hackergotchi for Matthew Garrett

Matthew Garrett

Bug bounties and NDAs are an option, not the standard

Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


09 July, 2019 09:15PM

hackergotchi for Sean Whitton

Sean Whitton

Upload to Debian with just 'git tag' and 'git push'

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring,1 produces a .dsc and .changes, signs these, and uploads the result to ftp-master.2

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.3

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, only in Debian unstable at the time of writing. However, the .debs will install on stretch & buster. And I’ll put them in buster-backports once that opens.)

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23
    

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.


  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

09 July, 2019 08:49PM

hackergotchi for Daniel Lange

Daniel Lange

Cleaning a broken GNUpg (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and that only a complete re-write will solve this. And that is still pending, as nobody has come forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely useable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystem and the message board over pgp key servers keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Update

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted-together fixes:

  * gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
  * gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
  * gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
  * gpg: New import option "self-sigs-only".
  * gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
  * dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
  * dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
  * dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
  * dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
  * gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them, bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.

09 July, 2019 05:44PM by Daniel Lange

hackergotchi for Thomas Lange

Thomas Lange

Talks, articles and a podcast in German

In April I gave a talk about FAI at the GUUG-Frühjahrsfachgespräch in Karlsruhe (Slides). At this nice meeting Ingo from RadioTux did an interview with me (https://www.radiotux.de/index.php?/archives/8050-RadioTux-Sendung-April-2019.html) (from 0:35:30).

Then I found an article in the iX Special 2019 magazine about automation in the data center which mentioned FAI. Nice. But I was very surprised and happy when I saw a whole article about FAI in Linux Magazin 7/2019. A very good article with some focus on network things, but the class system and installing other distributions are also described. And they will also publish another article about the FAI.me service in a few months. I'm excited!

In a few days, I'm going to DebConf19 in Curitiba for two weeks. I will work on Debian web stuff, check my other packages (rinse, dracut, tcsh) and hope to meet a lot of friendly people.

And in August I'm giving a talk at FrOSCon about FAI.

FAI

09 July, 2019 02:31PM

hackergotchi for Steve Kemp

Steve Kemp

Upgraded my first host to buster

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far there have been two minor niggles, but nothing major.

My hosts are controlled, sometimes, by puppet. The puppet-master is running stretch and has puppet 4.8.2 installed. After upgrading my test-host to the new stable I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure if there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

     # puppet agent --test
     Warning: Unable to fetch my node definition, but the agent run will continue:
     Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
     Info: Retrieving pluginfacts
     ..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib  --stats ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
   --exclude=/proc \
   --exclude=/swap.file \
   --exclude=/sys  \
   --exclude=/run  \
   --exclude=/dev  \
   --exclude=/var/log \
   /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments such that the destination path came last, and changing --exclude=x to --exclude x:

borg create ${flags} --compression=zlib  --stats \
   --exclude /proc \
   --exclude /swap.file \
   --exclude /sys  \
   --exclude /run  \
   --exclude /dev  \
   --exclude /var/log \
   ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S)  /

That approach works on my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

09 July, 2019 09:01AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

DANE OPENPGPKEY for debian.org


I recently announced the publication of Web Key Directory for @debian.org e-mail addresses. This blog post announces another way to fetch OpenPGP certificates for @debian.org e-mail addresses, this time using only the DNS. These two mechanisms are complementary, not in competition. We want to make sure that whatever certificate lookup scheme your OpenPGP client supports, you will be able to find the appropriate certificate.

The additional mechanism we're now supporting (since a few days ago) is DANE OPENPGPKEY, specified in RFC 7929.

How does it work?

DANE OPENPGPKEY works by storing a minimized OpenPGP certificate in the DNS, ideally in a subdomain, at a label based on a hashed version of the local part of the e-mail address.
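
For illustration, here is a small C++ sketch of how a client could derive that label under the RFC 7929 scheme (SHA2-256 of the local part, truncated to 28 octets and hex-encoded, under the _openpgpkey subdomain). It uses OpenSSL's SHA256() and builds with -lcrypto; treat it as a sketch of the scheme, not as GnuPG's actual lookup code.

#include <openssl/sha.h>
#include <cstdio>
#include <cstring>

int main() {
    // Derive the lookup label for dkg@debian.org (example address).
    const char *local_part = "dkg";
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char *>(local_part),
           strlen(local_part), digest);

    // RFC 7929: truncate the digest to 28 octets and hex-encode it.
    char label[57];
    for (int i = 0; i < 28; i++)
        snprintf(label + 2 * i, 3, "%02x", digest[i]);

    // The OPENPGPKEY record then lives at this owner name.
    printf("%s._openpgpkey.debian.org\n", label);
    return 0;
}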

With modern GnuPG, if you're interested in retrieving the OpenPGP certificate for dkg as served by the DNS, you can do:

gpg --auto-key-locate clear,nodefault,dane --locate-keys dkg@debian.org

If you're interested in how this DNS zone is populated, take a look at the code that handles it. Please request improvements if you see ways it could be done better.

Unfortunately, GnuPG does not currently do DNSSEC validation on these records, so the cryptographic protections offered by this client are not as strong as those provided by WKD (which at least checks the X.509 certificate for a given domain name against the list of trusted root CAs).

Why offer both DANE OPENPGPKEY and WKD?

I'm hoping that the Debian project can ensure that no matter whatever sensible mechanism any OpenPGP client implements for certificate lookup, it will be able to find the appropriate OpenPGP certificate for contacting someone within the @debian.org domain.

A clever OpenPGP client might even consider these two mechanisms -- DANE OPENPGPKEY and WKD -- as corroborative mechanisms, since an attacker who happens to compromise one of them may find it more difficult to compromise both simultaneously.

How to update?

If you are a Debian developer and you want your OpenPGP certificate updated in the DNS, please follow the normal procedures for Debian keyring maintenance like you always have. When a new debian-keyring package is released, we will update these DNS records at the same time.

Thanks

Setting this up would not have been possible without help from weasel on the Debian System Administration team, and Noodles from the keyring-maint team providing guidance.

DANE OPENPGPKEY was documented and shepherded through the IETF by Paul Wouters.

Thanks to all of these people for making it possible.

09 July, 2019 04:00AM by Daniel Kahn Gillmor (dkg)

Rodrigo Siqueira

Status Update, June 2019

For a long period of time, I've been cultivating the desire to have a habit of writing monthly status updates. Somehow, Drew DeVault's blog posts and Martin Peres's advice pushed me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing and communication skills but, most importantly, helps me to keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

In the last two months, I've been facing infrastructure problems that affect how I work. I'm dealing with obstacles such as restricted Internet access and long hours in public transportation from my home to my workplace. Unfortunately, I can't work in my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB). Going to UnB every day makes me waste around 3h per day on a bus. The library has a great environment, but it also has thousands of internet restrictions. The fact that I can't access websites with a '.me' domain or connect to my IRC bouncer is an example of that. In summary: it's been hard to work these days. So let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS. I know this is not news to anybody, and in June most of my efforts were dedicated to VKMS. One of my paramount endeavors was finding and fixing a bug in vkms that makes kms_cursor_crc and kms_pipe_crc_basic fail. I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch for handling this issue [2]; however, after Daniel's review, I realized that my patch didn't fix the problem correctly. So Daniel decided to dig into this issue to find the root of the problem and later sent a final fix. If you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset composed of 10 patches that tackle the issue. These patches focused on better framebuffer manipulation and on avoiding race conditions. It took me around 4 days to review and test this series. During my review, I asked many questions related to concurrency and requested other clarifications about DRM. Daniel always replied with a very nice and detailed explanation. If you want to learn a little bit more about locks, I recommend you take a look at [4]. Seriously, it is really nice!

I also worked on adding writeback support to vkms; since XDC2018 I could not stop thinking about the idea of adding a writeback connector to vkms due to the benefits it could bring, such as new tests and assisting developers with visual output. As a result, I started some clumsy attempts to implement it in January, but I really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement these features due to the following reasons:

  1. There is no i-g-t test for writeback in the main repository; I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and their fancy manipulation.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me, since I feel much more comfortable with many things related to DRM these days. The writeback support has not landed yet; at this moment the patch is under review (V3) and has changed a lot since the first version. For details about this series take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in vkms, I felt so grateful to Brian, Liviu, and Daniel for all the assistance they provided to me. In particular, I was thrilled that Brian and Liviu made the kms_writeback test, which worked as an implementation guideline for me. As a result, I updated their patchsets to make them work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback. I submitted the series with the hope of seeing it land in IGT [9].

Parallel to my work with writeback, I was trying to figure out how I could expose vkms configurations to userspace via configfs. After many efforts, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature; I'll definitely write a post about it after it lands.

Finally, I’m still trying to upstream a patch that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL if the driver does not support vblank get landed. Since this change is in the DRM core and also change the userspace, it is not easy to make this patch get landed. For the details about this patch, you can take a look here [7]. I also implemented some changes in the kms_flip to validate the changes that I made in the function drm_wait_vblank_ioctl and it got landed [8].

July Aims

In June, I was totally dedicated to vkms; now I want to slow down a little bit and study more about userspace. I want to take a step back and write some tiny programs using libdrm with the goal of understanding the interaction between userspace and kernel space. I also want to take a look at the theory related to computer graphics.

I want to put some effort into improving a tool named kw that helps me during my work with the Linux kernel. I also want to take a look at real overlay plane support in vkms. I noted that I have to find a "contribution protocol" (reviewing/writing code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

09 July, 2019 03:00AM

July 08, 2019

Jonathan Wiltshire

Too close?

At times of stress I’m prone to topical nightmares, but they are usually fairly mundane – last night, for example, I dreamed that I’d mixed up bullseye and bookworm in one of the announcements of future code names.

But Saturday night was a whole different game. Imagine taking a rucksack out of the cupboard under the stairs, and thinking it a bit too heavy for an empty bag. You open the top and it’s full of small packages tied up with brown paper and string. As you take each one out and set it aside you realise, with mounting horror, that these are all packages missing from buster and which should have been in the release. But it’s too late to do anything about that now; you know the press release went out already because you signed it off yourself, so you can’t do anything else but get all the packages out of the bag and see how many were forgotten. And you dig, and count, and dig, and it’s like Mary Poppins’ carpet bag, and still they keep on coming…

Sometimes I wonder if I get too close to stuff!

08 July, 2019 08:39PM by Jon

hackergotchi for Andy Simpkins

Andy Simpkins

Buster Release Party – Cambridge, UK

With the release of Debian GNU/Linux 10.0.0 "Buster" completing in the small hours of yesterday morning (0200hrs UTC or thereabouts), most of the 'release parties' had already been and gone… Not so for the Cambridge contingent, who had scheduled a get-together for the Sunday [0], knowing that various attendees would be working on the release until the end.

Sunday afternoon saw a gathering in the Haymakers pub to celebrate a successful release. We would like to publicly thank the Raspberry Pi Foundation [1] and Mythic Beasts [2], who between them picked up the tab for our bar bill - cheers and thank you!

Ian and Sean also managed to join us, taking time out from the dgit sprint that had been running since Friday [3].

For most of the afternoon we had the inside of the pub all to ourselves.   

This was a friendly and low-key event as we decompressed from the previous day's activities, but we were also joined by many other DDs, users and supporters, mostly hanging out in the garden - though I confess to staying most of the time indoors, in the shade and with a breeze through the pub…

Laptops are still needed even at a party

The start of the next project…

[0]    http://www.chiark.greenend.org.uk/pipermail/debian-uk/2019-June/000481.html
       https://wiki.debian.org/ReleasePartyBuster/UK/Cambridge

[1]   https://www.raspberrypi.org/

[2]    https://www.mythic-beasts.com/

[3]    https://wiki.debian.org/Sprints/2019/DgitDebrebase

08 July, 2019 02:21PM by andy

July 07, 2019

hackergotchi for Matthew Garrett

Matthew Garrett

Creating hardware where no hardware exists

The laptop industry was still in its infancy back in 1990, but it still faced a core problem that we do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). These were the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.
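
To sketch the shape of this (with made-up port numbers, and emphatically not Coreboot's actual API), the SMI handler for an emulated IO-port device boils down to something like the following: the chipset traps the access, the handler fakes the device's registers, and whatever it leaves in the saved register state is what the OS sees when SMM returns.

#include <cstdint>

// Assumed IO port range for the emulated device - purely illustrative.
constexpr uint16_t SMC_BASE = 0x300;
constexpr uint16_t SMC_LEN  = 0x20;

// The fake device's entire register file lives in SMRAM.
static uint8_t smc_regs[SMC_LEN] = { 'S', 'M', 'C' };

// Called by the SMI entry code after the chipset traps an IO access in
// our configured range. 'al' refers to the saved low byte of RAX in the
// CPU's save state; it is restored to the CPU when SMM exits.
void handle_trapped_io(uint16_t port, bool is_write, uint8_t &al)
{
    if (port < SMC_BASE || port >= SMC_BASE + SMC_LEN)
        return;                      // not our device, ignore

    uint16_t off = port - SMC_BASE;
    if (is_write)
        smc_regs[off] = al;          // consume the value the OS wrote
    else
        al = smc_regs[off];          // fake up the value the OS will read
}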

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


07 July, 2019 07:46PM

hackergotchi for Debian GSoC Kotlin project blog

Debian GSoC Kotlin project blog

Week 4 & 5 Update

Finished downgrading the project to be buildable by gradle 4.4.1

I have finished downgrading the project to be buildable using gradle 4.4.1. The project still needed a part of gradle 4.8, which I have successfully patched into the sid gradle. Here is the link to the changes that I have made.

Now we are officially done with making the project build with our gradle, so we can finally go ahead and start mapping out and packaging the dependencies.

Packaging dependencies for Kotlin-1.3.30

I split this task into two subtasks that can be done independently:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part 1.1: package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build; i.e. try to recreate whatever is in it.

The task has been split into this exact model because this folder has a variety of jars that the project uses, and we'll have to minimize it and package only the needed jars from it. The project also uses other plugins and jars besides those in this one main folder, which we can map out and package in parallel.

I have now successfully mapped out the dependencies needed by part 1; all that remains is to package them. I have copied the dependencies from the original cache (the one created when I built the project using ./gradlew -Pteamcity=true dist) to /usr/share/maven-repo, so some of these dependencies still need to have their own dependencies clearly defined, i.e. which of their dependencies we can omit and which we need. I have marked such dependencies with a *. So here are the dependencies:

jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE: https://salsa.debian.org/m36-guest/jengelman-shadow)  
trove4j 1.x -> https://github.com/JetBrains/intellij-deps-trove4j (DONE: https://salsa.debian.org/java-team/libtrove-intellij-java)  
proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)  
io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE:https://salsa.debian.org/m36-guest/javaslang)  
jline 3.0.3  --> https://github.com/jline/jline3/tree/jline-3.3.1
protobuf-2.6.1 in jdk8 (DONE: https://salsa.debian.org/java-team/protobuf-2)
*com.jcabi:jcabi-aether:1.0  
*org.sonatype.aether:aether-api:1.13.1

!!NOTE: Please note that I might have missed out on some; I'll add them to the list once I get them mapped out properly!!

So if any of you kind souls want to help me out, please take on any of these and package them.

!!NOTE: ping me if you want to build Kotlin on your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post major updates weekly.

07 July, 2019 11:51AM by Saif Abdul Cassim

hackergotchi for Eriberto Mota

Eriberto Mota

Debian: repository changed its ‘Suite’ value from ‘testing’ to ‘stable’

Debian 10 (Buster) was released two hours ago. \o/

When using ‘apt-get update’, I can see the message:

E: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

 

Solution:

# apt-get --allow-releaseinfo-change update
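
By the way, the newer apt frontend accepts the same flag, so this should work too:

# apt --allow-releaseinfo-change update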

 

Enjoy!

07 July, 2019 03:44AM by Eriberto

hackergotchi for Bits from Debian

Bits from Debian

Debian 10 "buster" has been released!

Alt Buster has been released

You've always dreamt of a faithful pet? He is here, and his name is Buster! We're happy to announce the release of Debian 10, codenamed buster.

Want to install it? Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 9 "stretch" installation; please read the release notes.

Do you want to celebrate the release? We provide some buster artwork that you can share or use as base for your own creations. Follow the conversation about buster in social media via the #ReleasingDebianBuster and #Debian10Buster hashtags or join an in-person or online Release Party!

07 July, 2019 01:25AM by Ana Guerrero Lopez, Laura Arjona Reina and Jean-Pierre Giraud

hackergotchi for Andy Simpkins

Andy Simpkins

Debian Buster Release

I spent all day smoke testing the install images for yesterday’s (this mornings – gee just after midnight local so we still had 11 hours to spare) Debian GNU/Linux 10.0.0 “Buster” release.

This year we had our “Biglyest test matrix ever”[0]: 111 tests were completed by the time the release images were signed and the FTP team pushed the button. Although more tests were reported in IRC, we had a total of 9 people completing the tests called for in the wiki test matrix.

We also had a large number of people in IRC during the image test phase of the release – peaking at 117…

Steve kindly hosted a few of us in his front room – using a local mirror[1] of the images, so our VM tests had the image available as an NFS share, which really speeds things up! Between the 4 of us here in Cambridge we were testing on a total of 14 different machines: mainly AMD64, a couple of “only i386 capable laptops” and 3 entirely different ARM64 machines (Mustang, Synquacer, & MacchiatoBin).

Room heaters on a hot day – photo RandomBird

In the photo are Sledge, Codehelp, Isy and myself…[2]

Special thanks should also go to Schweer, who spent time testing the Debian Edu images, and to Zdykstra for testing on ppc64el [2].

 

Finally, Sledge posted a link to the bandwidth utilization on the distribution network servers; last week it looked like this:

Debian primary distribution network last week

That shows a daily peak of 500MBytes/s in traffic.

Pushing Buster today the graph looks like:

Guess when Buster release went live?

1.5GBytes/second – that is some network load :-)

 

Anyway – Time to head home.  I have a release party to attend later this afternoon and would like *some* sleep first.

 

 

[0] https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r0

[1] Local mirror is another ARM Synquacer ‘Server’ – 24 core Cortex A53  (A53 series is targeted at Mobile Devices)

[2] irc nicks have been used to protect the identity of the guilty :-)

07 July, 2019 12:30AM by andy

July 06, 2019

François Marier

SIP Encryption on VoIP.ms

My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.

First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:

[general]
register => tls://mydid:mypassword@servername.voip.ms

then I enabled incoming TCP connections:

tcpenable=yes

and TLS:

tlsenable=yes
tlscapath=/etc/ssl/certs/

Finally, I changed my provider entry in the same file to:

[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes

(Note the last two lines.)

The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)

Server certificate

The only thing I still need to fix is to make this error message go away in my logs:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.

Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.
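
For the record, a minimal sketch of what that could look like (assuming certbot already manages a certificate for the machine, with pbx.example.com as a placeholder hostname): concatenate the key and certificate into one file,

# combine the Let's Encrypt key and certificate into the file Asterisk expects
cat /etc/letsencrypt/live/pbx.example.com/privkey.pem \
    /etc/letsencrypt/live/pbx.example.com/fullchain.pem \
    > /etc/asterisk/keys/asterisk.pem

and then point /etc/asterisk/sip.conf at it with tlscertfile=/etc/asterisk/keys/asterisk.pem.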

06 July, 2019 11:00PM

Jonathan Wiltshire

Daisy and George Help Debian

Daisy and George have decided to get stuck into testing images for #ReleasingDebianBuster.

George is driving the keyboard while Daisy takes notes about their test results.

Daisy and George testing Debian images

This test looks like a success. Next!

06 July, 2019 07:33PM by Jon

Testing in Teams

The Debian CD images are subjected to a battery of tests before release, even more so when the release is a major new version. Debian has volunteers all over the world testing images as they come off the production line, but it can be a lonely task.

Getting together in a group and having a bit of competitive fun always makes the day go faster:

Debian CD testers in Cambridge, GB (photo: Jo McIntyre)

And, of course, they’re a valuable introduction to the life-cycle of a Debian release for future Debian Developers.

06 July, 2019 03:46PM by Jon

Niels Thykier

A decline in the use of hints in the release team

While we were working on the release, I had a look at how many hints we had deployed during buster, like I did for wheezy a few years back. It seemed we were using a lot fewer hints than previously, so I decided to take a detour into “stats-land”. 🙂

When I surfaced from “stats-land”, I confirmed that we have a clear decline in hints in the past two releases[1].

wheezy: 3301
jessie: 3699 (+398)
stretch: 2408 (-1291)
buster: 1478 (-930)

While it is certainly interesting, the number of hints on its own is not really an indicator of how much work we put into the release.  Notably, it says very little about the time spent on evaluating unblock requests before adding the hint.

 

[1]

Disclaimer: These are very rough estimates based on the same method as the previous blog post, using entire months as the smallest time unit. It is apparently not very accurate either. The observant reader will note that the number for wheezy does not match the number I posted years ago (3254 vs 3301). I am not entirely sure what causes the difference, as I am using the same regex for wheezy and the mtime for the files looks unchanged.

 

06 July, 2019 10:16AM by Niels Thykier

Jonathan Wiltshire

What to expect on buster release day

The ‘buster’ release day is today! This is mostly a re-hash of previous checklists, since we’ve done this a few times now and we have a pretty good rhythm.

There have been some preparations going on in advance:

  1. Last week we imposed a “quiet period” on migrations. That’s about as frozen as we can get; it means that even RC bugs need to be exceptional if they aren’t to be deferred to the first point release. Only late-breaking documentation (like the install guide) was accepted.
  2. The security team opened buster-updates for business and carried out a test upload.
  3. The debian-installer team made a final release.
  4. Final debtags data was updated.
  5. Yesterday the testing migration script britney and other automatic maintenance scripts that the release team run were disabled for the duration.
  6. We made final preparations of things that can be done in advance, such as drafting the publicity announcements. These have to be done in advance so translators get a chance to do their work overnight (translations are starting to arrive right now!).

The following checklist makes the release actually happen:

  1. Once dinstall is completed at 07:52, archive maintenance is suspended – the FTP masters will do manual work for now.
  2. Very large quantities of coffee will be prepared all across Europe.
  3. Release managers carry out consistency checks of the buster index files, and confirm to FTP masters that there are no last-minute changes to be made. RMs get a break to make more coffee.
  4. While they’re away FTP masters begin the process of relabelling stretch as oldstable and buster as stable. If an installer needs to be, er, installed as well, that happens at this point. Old builds of the installer are removed.
  5. A new suite for bullseye (Debian 11) is initialised and populated, and labelled testing.
  6. Release managers check that the newly-generated suite index files look correct and consistent with the checks made earlier in the day. Everything is signed off – both in logistical and cryptographic terms.
  7. FTP masters trigger a push of all the changes to the CD-building mirror so that production of images can begin. As each image is completed, several volunteers download and test it in as many ways as they can dream up (booting and choosing different paths through the installer to check integrity).
  8. Finally a full mirror push is triggered by FTP masters, and the finished CD images are published.
  9. Announcements are sent by the publicity team to various places, and by the release team to the developers at large.
  10. Archive maintenance scripts are re-enabled.
  11. The release team take a break for a couple of weeks before getting back into the next cycle.

During the day much of the coordination happens in the #debian-release, #debian-ftp and #debian-cd IRC channels. You’re welcome to follow along if you’re interested in the process, although we ask that you are read-only while people are still concentrating (during the Squeeze release, a steady stream of people turned up to say “congratulations!” at the most critical junctures; it’s not particularly helpful while the process is still going on). The publicity team will be tweeting and denting progress as it happens, so that makes a good overview too.

If everything goes to plan, enjoy the parties!

(Disclaimer: inaccuracies are possible since so many people are involved and there’s a lot to happen in each step; all errors and omissions are entirely mine.)

06 July, 2019 07:19AM by Jon

July 05, 2019

hackergotchi for Mike Gabriel

Mike Gabriel

My Work on Debian LTS/ELTS (June 2019)

In June 2019, I unfortunately did not at all reach my goal of LTS/ELTS hours. (At this point, I could come up with a long story about our dog'ish family member and the infectious diseases he got, the vet visits we did and the daily care and attention he needed, but I won't...).

I worked on the Debian LTS project for 9.75 hours (of 17 hours planned) and on the Debian ELTS project for just 1 hour (of 12 hours planned) as a paid contributor.

LTS Work

  • LTS: Setup physical box running Debian jessie (for qemu testing)
  • LTS: Bug hunting mupdf regarding my CVE-2018-5686 patch backport
  • LTS: Upload to jessie-security: mupdf (DLA-1838-1), 3 CVEs [1]
  • LTS: Glib2.0: request CVE Id (CVE-2019-13012) + email communication with upstream [2] (minor issue for Glib2.0 << 2.60)
  • LTS: cfengine3: triage CVE-2019-9929, email communication with upstream (helping out security team) [3]

ELTS Work

  • Upload to wheezy-lts: expat (ELA 136-1), 1 CVE [4]

References

05 July, 2019 09:02PM by sunweaver

Thorsten Alteholz

My Debian Activities in June 2019

FTP master

As you might have noticed as well, this June had the highest average temperature of any June so far, so I spent more time in the lake than in NEW. I only accepted 12 packages and rejected 1 upload. The rest of the team probably did the same, because the overall number of packages that got accepted was only 22. Let’s see whether July will be the same …

Debian LTS

This was my sixtieth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 17h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1830-1] znc security update for one CVE
  • [DLA 1833-1] bzip2 security update for two CVEs
  • [DLA 1841-1] gpac security update for three CVEs

I also prepared bzip2 debdiffs for Buster and Stretch and sent them to the maintainer and security team.
Further, I created new packages for testing the patches for bind9 and wpa. I would be more confident uploading those if more people could give them a try. Especially after the python issue, I would really like some more people to do smoke tests …

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-1 of bzip2 for two CVEs
  • ELA-138-1 of ntfs-3g for one CVE

As with LTS, I am a bit hesitant to upload bind9.

I also did some days of frontdesk duties.

Other stuff

As already written above, I did not do much work in front of a computer, so there is nothing to report here.
Ok, maybe I can mention this email here instead of in the LTS section above. It is a script to obtain the correct build order of Go packages in case of security patches. As mentioned in the other paragraphs, I would like more people to have a look at it, but please be kind :-).

05 July, 2019 07:08PM by alteholz

Reproducible Builds

Reproducible Builds in June 2019

Welcome to the June 2019 report from the Reproducible Builds project! In our reports we outline the most important things that we have been up to over the past month.

In order that everyone knows what this is about, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

In June’s report, we will cover:

  • Media coverageLego bricks, pizza and… Reproducible Builds‽
  • Upstream newsIs Trusting Trust close to a ‘rebuttal’?
  • EventsWhat happened at MiniDebConf Hamburg, the OpenWrt Summit, etc.
  • Software developmentPatches patches patches, etc.
  • Misc newsFrom our mailing list…
  • Getting in touchand how to contribute.

Media coverage

  • The Prototype Fund, an initiative to “aid software developers, hackers and creatives in furthering their ideas from concept to demo” produced a video featuring Holger Levsen explaining Reproducible Builds… using Lego bricks and pizza!

One key motivation for reproducible builds is to enable peak efficiency for the build caches used in modern build systems.


Upstream news



Events

There were a number of events that included or incorporated members of the Reproducible Builds community this month. If you know of any others, please do get in touch. In addition, a number of members of the Reproducible Builds project will be at DebConf 2019 in Curitiba, Brazil and will present on the status of their work.

MiniDebConf Hamburg 2019

Holger Levsen, Jelle van der Waa, kpcyrd and Alexander Couzens attended MiniDebConf Hamburg 2019 and worked on Reproducible Builds. As part of this, Holger gave a status update on the Project with a talk entitled Reproducible Builds aiming for bullseye, referring to the next Debian release name:


Jelle van der Waa kindly gifted Holger with a Reproducible Builds display:

In addition, Lukas Puehringer gave a talk titled Building reproducible builds into apt with in-toto:

As part of various hacking sessions:

  • Jelle van der Waa:

    • Improved the reproducible_json.py script to generate distribution-specific JSON, leading to the availability of an ArchLinux JSON file.
    • Investigated why the Arch Linux kernel package is not reproducible, finding out that KBUILD_BUILD_HOST and KBUILD_BUILD_TIMESTAMP should be set. The enabling of CONFIG_MODULE_SIG_ALL causes the kernel modules to be signed with a (non-deterministic) build-time key if none is provided, leading to unreproducibility.
    • keyutils was fixed with respect to it embedding the build date in its binary. []
    • nspr was made reproducible in Arch Linux. []
  • kpcyrd:
    • Created various Jenkins jobs to generate Alpine build chroots, schedule new packages and to ultimately build them. [][][]
    • Created an Alpine reproducible testing overview page.
    • Provided a proof of concept SOURCE_DATE_EPOCH patch for abuild to fix timestamp issues in Alpine packages. []
  • Alexander Couzens:
    • Rewrote the database interaction routines for OpenWrt.
    • Migrated the OpenWrt package parser to use Python 3.x as Python 2.x will be reaching end-of-life at the end of this year.
    • Setup a test environment using a new README.development file.

Holger Levsen was on-hand to review and merge all the above commits, providing support and insight into the codebase. He additionally split out a README.development from the regular, more-generic README file.

OpenWrt summit

The OpenWrt project is a Linux operating system targeting embedded devices, particularly wireless network routers. In June, they hosted a summit that took place from 10th to 12th of the month.

Here, Holger participated in the discussions regarding .buildinfo build-attestation documents. As a result of this, Paul Spooren (aparcar) made a pull request to introduce/create a feeds.buildinfo (etc) for reproducibility in OpenWrt.


Software development

buildinfo.debian.net

Chris Lamb spent significant time working on buildinfo.debian.net, his experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them. This included:

  • Started making the move to Python 3.x (and Django 2.x) [][][][][][][] additionally performing a large number of adjacent cleanups including dropping the authentication framework [], fixing a number of flake8 warnings [], adding a setup.cfg to silence some warnings [], moving to __str__ and str.format(...) over %-style interpolation and u"Unicode" strings [], etc.

  • Added a number of (as-yet unreleased…) features, including caching the expensive landing page queries. []

  • Took the opportunity to start migrating the hosting from its current GitHub home to a more-centralised repository on salsa.debian.org, moving from the Travis to the GitLab continuous integration platform, updating the URL to the source in the footer [] and many other related changes [].

  • Applied the Black “uncompromising code formatter” to the codebase. []

Project website

There was a significant amount of effort on our website this month.

  • Chris Lamb:

    • Moved the remaining site to the newer website design. This was a long-outstanding task (#2) and required a huge number of changes, including moving all the event and documentation pages to the new design [] and migrating/merging the old _layouts/page.html into the new design [] too. This could then allow for many cleanups including moving/deleting files into cleaner directories, dropping a bunch of example layouts [] and dropping the old “home” layout. []

    • Added reports to the homepage. (#16)

    • Re-ordered and merged various top-level sections of the site to make the page easier to parse/navigate [][] and updated the documentation for SOURCE_DATE_EPOCH to clarify that the alternative -r call to date(1) is for compatibility with BSD variants of UNIX [] (a short example follows at the end of this section).

    • Made a large number of visual fixups, particularly to accommodate the principles of responsive web design. [][][][][]

    • Updated the lint functionality of the build system to check for URIs that are not using /foo/-style relative URLs. []

  • Jelle van der Waa updated the Events page to correct invalid Markdown [] and fixed a typo of “distribution” on a previous event page [].

  • Thomas Vincent added a huge number of videos and slides to the Resources page [][][][][][] etc. as well as added a button to link to subtitles [] and fixing a bug when displaying metadata links [].

In addition, Atharva Lele added the Buildroot embedded Linux project to the “Who’s involved” page. []
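
As a short aside on that date(1) point, this is what the portability issue looks like in practice (a sketch, not taken from the documentation patch itself):

# GNU date (coreutils): read seconds-since-epoch with -d @...
date -u -d "@$SOURCE_DATE_EPOCH"
# BSD date: the equivalent spelling uses -r
date -u -r "$SOURCE_DATE_EPOCH"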

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done in the last month:

  • Alexander Couzens (OpenWrt):
  • Holger Levsen:
    • Show Alpine-related jobs on the job health page. []
    • Alpine needs the jq command-line JSON processor for the new scheduler. []
    • Start a dedicated README.development file. []
    • Add support for some nodes running Debian buster already. []
  • Jelle van der Waa:
    • Change Arch Linux and Alpine BLACKLIST status to blacklist [] and GOOD to reproducible [] respectively.
    • Add a Jenkins job to generate Arch Linux HTML pages. []
    • Fix the Arch Linux suites in the reproducible.ini file. []
    • Add an Arch JSON export Jenkins job. []
    • Create per-distribution reproducible JSON files. []
  • kpcyrd (Alpine):

    • Start adding an Alpine theme. []
    • Add an Alpine website. [][][][]
    • Add #alpine-reproducible to the KGB chat bot. []
    • Use the apk version instead of vercmp. []
    • Install/configure various parts of the chroot including passing in Git options [], adding the abuild group onto more servers [][], installing GnuPG []
    • Build packages using its own scheduler. [] [][]
    • Misc maintenance and fixups. [][]
  • Mattia Rizzolo:
    • Adjust the setup_pbuilder script to use [check-valid-until=no] instead of Acquire::Check-Valid-Until (re. (#926242)). []

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Distribution work

In Debian, 39 reviews of packages were added, 3 were updated and 8 were removed this month, adding to our knowledge about identified issues.

Chris Lamb also did more work on testing the reproducibility status of Debian Installer images. In particular, he was working around and patching an issue stemming from us testing builds far into the “future” (#926242).

In addition, following discussions at MiniDebConf Hamburg, Ivo De Decker reviewed the situation around Debian bug #869184 again (“dpkg: source uploads including _amd64.buildinfo cause problems”) and updated the bug with some recommendations for the next Debian release cycle.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution.

Other tools

In diffoscope (our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues) Chris Lamb documented that run_diffoscope should not be considered a stable API [] and adjusted the configuration to build our Docker image from the current Git checkout, not the Debian archive [].

Lastly, Chris Lamb added support for the clamping of tIME chunks in .png files [] to strip-nondeterminism, our tool to remove specific non-deterministic results from a completed build.
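
For instance, a hypothetical invocation might look like this (the file name and timestamp source are illustrative; --timestamp takes seconds since the epoch):

strip-nondeterminism --timestamp "$SOURCE_DATE_EPOCH" build/logo.png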


Misc news

On our mailing list this month Lars Wirzenius continued conversation regarding various questions about reproducible builds and their bearing on building a distributed continuous integration system which received many replies (thread index for May & June). In addition, Sebastian Huber asked whether anyone has attempted a reproducible build of a GCC compiler itself.


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:



This month’s report was written by Alexander Borkowski, Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen, Jelle van der Waa, kpcyrd & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 July, 2019 01:58PM

Patrick Matthäi

Maintainance of GeoIP legacy databases

For 9 months now, Maxmind has not been providing the CSV sources for their legacy database format, only for their new GeoLite2 database. That is legitimate in my opinion, because the API is quite old and software projects should move to the new format, but almost all (IMHO) important software projects still only support the old API... :-(

So I have decided to again put some more work into my geoip and geoip-database packages, and I can say that after the Buster release I will upload a new geoip source package, which also provides the converter I took from here:
https://github.com/mschmitt/GeoLite2xtables/

Using this converter (and some more magic etc.) I am now able to build the country v4+v6 legacy edition from the GeoLite2 CSV database source :-)

Testing will be welcome, and if everything is fine, buster and stretch will get backports of this work in the future.

But I have now had to drop the geoip-database-extra package, which also includes the AS and City (v4) databases. I didn’t find a way to convert the sources, and IMO they are not so important.

05 July, 2019 01:19PM by the-me

hackergotchi for Bits from Debian

Bits from Debian

Upcoming Debian 10 "buster"!

Alt Buster is coming on 2019-07-06

The Debian Release Team in coordination with several other teams are preparing the last bits needed for releasing Debian 10 "buster" on Saturday 6 July 2019. Please, be patient! Lots of steps are involved and some of them take some time, such as building the images, propagating the release through the mirror network, and rebuilding the Debian website so that "stable" points to Debian 10.

If you are considering creating some artwork on the occasion of the buster release, feel free to send links to your creations to the (publicly archived) debian-publicity mailing list, so that we can disseminate them throughout our community.

Follow the live coverage of the release on https://micronews.debian.org or the @debian profile in your favorite social network! We'll spread the word about what's new in Debian 10, how the release process is progressing during the weekend, and facts about Debian and the wide community of volunteer contributors that makes it possible.

If you want to celebrate the release of Debian 10 buster, join one of the many release parties or consider organizing one in your city! Celebration will also happen online on the Debian Party Line.

05 July, 2019 06:00AM by Laura Arjona Reina, Jean-Pierre Giraud and Thomas Vincent

July 04, 2019

Petter Reinholdtsen

Teach kids to protect their privacy - the EDRi way

Children need to learn how to guard their privacy too. To help them, European Digital Rights (EDRi) created a colorful booklet providing information on several privacy-related topics, and tips on how to protect one's privacy in the digital age.

The 24 page booklet titled Digital Defenders is available in several languages. Thanks to the valuable contributions from members of the Electronic Foundation Norway (EFN) and others, it is also available in Norwegian Bokmål. If you would like to have it available in your language too, contribute via Weblate and get in touch.

But a funny, well-written and good-looking PDF does not have much impact unless it is read by the right audience. To increase the chance of kids reading it, I am currently assisting EFN in getting copies printed on paper to distribute on the street and in classrooms. Printing the booklet was made possible thanks to a small set of great sponsors. Thank you very much to each and every one of them! I hope to have the printed booklet ready to hand out on Tuesday, when the Norwegian Unix Users Group is organizing its yearly barbecue for geeks and free software zealots in the Oslo area. If you are nearby, feel free to come by and check out the party and the booklet.

If the booklet proves to be a success, it would be great to get more sponsorship and distribute it to every kid in the country. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

04 July, 2019 05:10PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.20

This morning, digest version 0.6.20 went to CRAN, and I will send a package to Debian shortly as well.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects.

This version contains only internal changes with a switch to the (excellent) tinytest package. This now allows you, dear user of the package, to run tinytest::test_package("digest") at any point post-installation to reassure yourself that all standard assertions and tests are still met in your installation. No other changes were made.
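
For example, that post-installation check can be run straight from the command line (a trivial sketch — it assumes digest and tinytest are installed):

Rscript -e 'tinytest::test_package("digest")'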

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 July, 2019 04:42PM

hackergotchi for Charles Plessy

Charles Plessy

Inbox zero

I accidentally erased all the emails in my inbox. This is very easy to do with mutt. I have some experience recovering files, but the last time I did it, it was not very useful in the end. So please send me a reminder if you were expecting an answer from me!

04 July, 2019 12:52PM

hackergotchi for Eddy Petri&#537;or

Eddy Petrișor

HOWTO: Rustup: Overriding the rustc compiler version just for some directory

If you need to use a specific version of the rustc compiler instead of the default, the rustup documentation tells you how to do that.


First install the desired version, e.g. nightly-2018-01-09

$ rustup install nightly-2018-01-09
info: syncing channel updates for 'nightly-2018-01-09-x86_64-pc-windows-msvc'
info: latest update on 2018-01-09, rust version 1.25.0-nightly (b5392f545 2018-01-08)
info: downloading component 'rustc'
info: downloading component 'rust-std'
info: downloading component 'cargo'
info: downloading component 'rust-docs'
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'

  nightly-2018-01-09-x86_64-pc-windows-msvc installed - rustc 1.25.0-nightly (b5392f545 2018-01-08)

info: checking for self-updates

Then override the default compiler with the desired one in the top directory of your choice:

$ rustup override set nightly-2018-01-09
info: using existing install for 'nightly-2018-01-09-x86_64-pc-windows-msvc'
info: override toolchain for 'C:\usr\src\rust\sbenitez-cs140e' set to 'nightly-2018-01-09-x86_64-pc-windows-msvc'

  nightly-2018-01-09-x86_64-pc-windows-msvc unchanged - rustc 1.25.0-nightly (b5392f545 2018-01-08)
That's it.
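
If you later want that directory to go back to the default toolchain, the override can be removed again (assuming a recent rustup):

$ rustup override unset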

04 July, 2019 10:02AM by eddyp (noreply@blogger.com)

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

WKD for debian.org


You can now fetch the OpenPGP certificate for any Debian developer who uses an @debian.org e-mail address using Web Key Directory (WKD).

How?

With modern GnuPG, if you're interested in the OpenPGP certificate for dkg just do:

gpg --locate-keys dkg@debian.org

By default, this will show you any matching certificate that you already have in your GnuPG local keyring. But if you don't have a matching certificate already, it will fall back to using WKD.

These certificates are extracted from the debian keyring and published at https://openpgpkey.debian.org/.well-known/openpgpkey/debian.org/, as defined in the WKD spec. We intend to keep them up-to-date when ever the keyring-maint team publishes a new batch of certificates. Our tooling uses some repeated invocations of gpg to extract and build the published tree of files.
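
As an aside, if you are curious how an e-mail address maps to the hashed file names in that tree, recent GnuPG can print the WKD identifier for you (a quick illustration; to the best of my knowledge this works in GnuPG 2.1.14 and later):

gpg --with-wkd-hash --fingerprint dkg@debian.org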

Debian is currently not implementing the Web Key Directory Update Protocol (and we have no plans to do so). If you are a Debian developer and you want your OpenPGP certificate updated in WKD, please follow the normal procedures for Debian keyring maintenance like you always have.

What about other domains?

Our update here works great for e-mail addresses in the @debian.org domain, but it has no direct effect for other e-mail addresses.

However, if you have an e-mail address in a domain you control, you can publish your own WKD. If you would rather use an e-mail service in a domain managed by other people, you might also be interested in GnuPG's list of e-mail service providers that offer WKD.

Why?

The SKS keyserver network has been vulnerable to abuse for years. The recent certificate flooding attacks make fetching an OpenPGP certificate from that pool a risky operation: potentially causing a denial of service against GnuPG. In particular, anyone can flood any certificate in SKS (or other common keyservers that are not resistant to abuse).

WKD avoids the problem of certificate flooding by arbitrary third parties. It's not a guaranteed defense against flooding though: the domain controller (and whoever they authorize to update the WKD) is still capable of offering a flooded certificate via WKD. On the plus side, at least some WKD clients do aggressive filtering on certificates found via WKD, which should limit the ability of an adversary to flood a certificate in your local keyring.

Thanks

Setting this up would not have been possible without help from weasel and jcristau from the Debian System Administration team, and Noodles from the keyring-maint team.

WKD was designed and implemented by Werner Koch and the GnuPG team, in anticipation of this specific need.

Thanks to all of these people for making it possible.

What next?

There's some talk about publishing similar OpenPGP certificates in the DNS as well, using RFC 7929 (OPENPGPKEY) records, but we haven't set that up yet.

04 July, 2019 04:00AM by Daniel Kahn Gillmor (dkg)

July 03, 2019

hackergotchi for Junichi Uekawa

Junichi Uekawa

I practice French more recently.

I have been practicing French more recently. I don't usually carry a laptop when I travel, which seems to be a change. I thought that would give me better focus, but maybe that's not really happening.

03 July, 2019 12:02PM by Junichi Uekawa

Enrico Zini

live-wrapper fork

I might have accidentally forked live-wrapper.

I sometimes need to build Debian live iso images for work, and some time ago got into an inconvenient situation in which live-wrapper required software not available in Debian anymore, and there was no obvious replacement for it, so I forked it and tried to forward-port things and fill the gaps.

Over time this kind of grew: I ported it to python3, removed difficult dependencies, added several new features that I needed, and removed several that I didn't need.

I recently had a chance to document the result, which makes it good enough to be announced, so here it is. The README has an introduction and links to documentation, recipes and examples.

I'm not actively maintaining this except when work requires, so if there's anything extra you need for it, the best way to get it is via a merge request.

I'm not sure how much of live-wrapper is still left in the fork. If anyone starts using it, we should probably look into a new name.

03 July, 2019 10:26AM

July 02, 2019

hackergotchi for Joey Hess

Joey Hess

custom type checker errors for propellor

Since propellor is configured by writing Haskell, type errors are an important part of its interface. As more type level machinery has been added to propellor, it's become more common for type errors to refer to hard to understand constraints. And sometimes simple mistakes in a propellor config result in the type checker getting confused and spewing an error that is thousands of lines of gobbledygook.

Yesterday's release of the new type-errors library got me excited to improve propellor's type errors.

Most of the early wins came from using ghc's TypeError class, not the new library. I wanted custom type errors that were able to talk about problems with Property targets, like these:

    • ensureProperty inner Property is missing support for: 
    FreeBSD

    • This use of tightenTargets would widen, not narrow, adding: 
        ArchLinux + FreeBSD

    • Cannot combine properties:
        Property FreeBSD
        Property HasInfo + Debian + Buntish + ArchLinux

So I wrote a type-level pretty-printer for propellor's MetaType lists. One interesting thing about it is that it rewrites types such as Targeting OSDebian back to the Debian type alias that the user expects to see.

To generate the first error message above, I used the pretty-printer like this:

(TypeError
    ('Text "ensureProperty inner Property is missing support for: "
        ':$$: PrettyPrintMetaTypes (Difference (Targets outer) (Targets inner))
    )
)

Often a property constructor in propellor gets a new argument added to it. A propellor config that has not been updated to include the new argument used to result in this kind of enormous and useless error message:

    • Couldn't match type ‘Propellor.Types.MetaTypes.CheckCombinable
                             (Propellor.Types.MetaTypes.Concat
                                (Propellor.Types.MetaTypes.NonTargets y0)
                                (Data.Type.Bool.If
                                   (Propellor.Types.MetaTypes.Elem
                                      ('Propellor.Types.MetaTypes.Targeting 'OSDebian)
                                      (Propellor.Types.MetaTypes.Targets y0))
                                   ('Propellor.Types.MetaTypes.Targeting 'OSDebian
                                      : Data.Type.Bool.If
                                          (Propellor.Types.MetaTypes.Elem
                                             ('Propellor.Types.MetaTypes.Targeting 'OSBuntish)
    -- many, many lines elided
    • In the first argument of ‘(&)’, namely
        ‘props & osDebian Unstable’

The type-errors library was a big help. It's able to detect when the type checker gets "stuck" reducing a type function, and is going to dump it all out to the user. And you can replace that with a custom type error, like this one:

    • Cannot combine properties:
        Property <unknown>
        Property HasInfo + Debian + Buntish + ArchLinux + FreeBSD
        (Property <unknown> is often caused by applying a Property constructor to the wrong number of arguments.)
    • In the first argument of ‘(&)’, namely
        ‘props & osDebian Unstable’

Detecting when the type checker is "stuck" also let me add some custom type errors to handle cases where type inference has failed:

    • ensureProperty outer Property type is not able to be inferred here.
      Consider adding a type annotation.
    • When checking the inferred type
        writeConfig :: forall (outer :: [Propellor.Types.MetaTypes.MetaType]) t.

    • Unable to infer desired Property type in this use of tightenTargets.
      Consider adding a type annotation.

Unfortunately, the use of TypeError caused one problem. When too many arguments are passed to a property constructor that's being combined with other properties, ghc used to give its usual error message about too many arguments, but now it gives the custom "Cannot combine properties" type error, which is not as useful.

Seems likely that's a ghc bug but I need a better test case to make progress on that front. Anyway, I decided I can live with this problem for now, to get all the other nice custom type errors.

The only other known problem with propellor's type errors is that, when there is a long list of properties being combined together, a single problem can result in a cascade of many errors. Sometimes that also causes ghc to use a lot of memory. While custom error messages don't help with this, at least the error cascade is nicer and individual messages are not as long.

Propellor 5.9.0 has all the custom type error messages discussed here. If you see a hard to understand error message when using it, get in touch and let's see if we can make it better.


This was sponsored by Jake Vosloo and Trenton Cronholm on Patreon.

02 July, 2019 08:53PM

hackergotchi for Thomas Lange

Thomas Lange

Xterm fonts problems after Buster installation

I've installed a desktop with buster to see if my FAI configuration was working. When I opened an xterm window, I saw that the font I use looked different than on stretch: not as clean, and a little bolder.

I've checked if the correct fonts were used using "xterm -report-fonts" which correctly showed

xos4-terminus-medium-r-normal--16-----*-iso10646-1

I'm setting this in my ~/.Xdefaults

Xft.hintstyle: hintfull

but on Buster the hintstyle (reported by xterm -report-fonts) was now 1 instead of 3. I found out that the package fontconfig-config now has a new debconf question:

fontconfig-config fontconfig/hinting_style

I've set this to hintfull, but still no change. Then I found a very detailed description on FreeType Subpixel Hinting and Debian bug #867657.

The solution was to also set the variable

export FREETYPE_PROPERTIES=truetype:interpreter-version=35
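
To make this persistent, the variable has to be set before X clients start. As a sketch of one way to do it (the right file depends on your display manager and session setup; ~/.xsessionrc is a Debian-specific hook sourced for X sessions):

echo 'export FREETYPE_PROPERTIES=truetype:interpreter-version=35' >> ~/.xsessionrc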


02 July, 2019 03:33PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf19 Cheese and Wine Party

In less than one month we will be in Curitiba to start DebCamp and DebConf19 \o/

This C&W is the 15th official DebConf Cheese and Wine party. The first C&W was improvised in Helsinki during DebConf 5, in the so-called "French" room. Cheese and Wine parties are now a tradition for DebConf.

The event is very simple: bring good edible stuff from your country. We like cheese and wine, but we love the surprising stuff that people bring from all around the world or regions of Brazil. So, you can bring non-alcoholic drinks or a typical food that you would like to share as well. Even if you don't bring anything, feel free to participate: our priorities are our attendees and free cheese.

We have to organize a great party, and an important part of that is planning – we want to know what you are bringing, in order to prepare the labels and organize other things.

So, please go to our wiki page and add what you will bring!

If you don't have time to buy before traveling, we list some places where you can buy cheese and wine in Curitiba. There is more information about C&W, what you can bring, vegan cheese, Brazil customs regulations and non-alcoholic drinks at our site.

C&W will happen on July 22nd, 2019 (Monday) after 19h30min.

We are looking forward to seeing you all here!

DebConf19 logo

02 July, 2019 12:30PM by Adriana Cássia da Costa

July 01, 2019

hackergotchi for Keith Packard

Keith Packard


Joining SiFive

I've accepted an offer for a full-time position with SiFive. I'll be starting on July 15th, 2019 and will be working on free software for RISC-V-based processors, among other tasks.

I really enjoyed my time at Hewlett Packard Labs and wish all the best for my colleagues there.

01 July, 2019 10:12PM

hackergotchi for Alessio Treglia

Alessio Treglia

Cosmos Hub and Reproducible Builds

Open source software allows us to build trust in a distributed, collaborative software development process, to know that the software behaves as expected and is reasonably secure. But the benefits of open source are strongest for those who directly interact with the source code. These people can use a computer which they trust to compile the source code into an operational version for themselves. Distributing binaries of open source software breaks this trust model, and reproducible builds restores it.

Tendermint Inc is taking the first steps towards a trustworthy binary distribution process. Our investment in reproducible builds makes doing binary distributions of the gaia software a possibility. We envision that the Cosmos Hub community will be our partners in building trust in this process. The governance features of the Cosmos Hub will enable a novel collaboration between Tendermint and that validator community to release only binaries that can be trusted by anyone.

Here is our game plan.

The release of cosmoshub-3 will support our new reproducible build process. Tendermint developers will make a governance proposal with the hashes of all supported binaries. We will ask ATOM holders to reproduce the builds on computers they control and vote YES if the hashes match.

If the proposal passes, we will make the binaries available here via Github.

The benefits of reproducible builds

Gaia reproducible binaries then bring many significant advantages to developers and end users:

  • Build sanity — the guarantee that the gaia suite can always be built from sources.
  • Enable third-parties to independently verify executables to ensure that no vulnerabilities were introduced at build time.
  • Large body of independent builders can eventually come to consensus on the correct reproducible binary output and protect themselves from targeted attacks.

How to verify that gaia binaries correspond to a repository snapshot

The gaia repository comes with the required tooling to build both server and client applications deterministically. First you need to clone https://github.com/cosmos/gaia and check out the release branch or the commit you want to produce the binaries from. For instance, if you intend to build and sign reproducible binaries for all supported platforms of gaia’s master branch, you may want to do the following:

git clone https://github.com/cosmos/gaia && cd gaia
chmod +x contrib/gitian-build.sh
./contrib/gitian-build.sh -s email@example.com all

Append the -c flag to the above command if you want to upload your signature to the http://github.com/gaia/gaia.sigs repository as well.

If you want to build the binaries only without signing the build result, just type:

./contrib/gitian-build.sh all

Further information can be found here: github.com/cosmos/gaia/…/docs/reproducible-builds.md

References

Credits

Co-authored with Zaki Manian

01 July, 2019 04:11PM by Alessio Treglia

John Goerzen

Treasuring Moments

“Treasure the moments you have. Savor them for as long as you can, for they will never come back again.”

– J. Michael Straczynski

This quote sits on a post-it note on my desk. Here are some moments of our fast-changing little girl that I’m remembering today — she’s almost 2!

Brothers & Sister

Martha loves to play with her siblings. She has names for them — Jacob is “beedoh” and Oliver is “ah-wah”. When she sees them come home, she gets a huge smile and will screech with excitement. Then she will ask them to play with her.

She loves to go down the slide with Jacob. “Beedoh sigh?” (Jacob slide) — that’s her request. He helps her up, then they go down together. She likes to swing side-by-side with Oliver. “Ahwah sing” (Oliver swing) when she wants him to get on the swing next to her. The boys enjoy teaching her new words and games.

[Video: Martha and Jacob on the slide]

Music

Martha loves music! To her, “sing” is a generic word for music. If we’re near a blue speaker, she’ll say “boo sing” (blue sing) and ask for it to play music.

But her favorite request is “daddy sing.” It doesn’t mean she wants me to sing. No, she wants me to play my xaphoon (a sax-like instrument). She’ll start jumping, clapping, and bopping her head to the music. Her favorite spot to do this is a set of soft climbing steps by the piano.

But that’s not enough — next she pulls out our hymnbooks and music books and pretends to sing along. “Wawawawawawa the end!”

If I decide to stop playing, that is most definitely not allowed. “Daddy sing!” And if I don’t comply, she gets louder and more insistent: “DADDY SING.”

[Videos: Martha singing and reading from hymn books, singing her ABCs]

Airplanes

Martha loves airplanes. She started to be able to say “airplane” — first “peen”, then “airpeen”, and now “airpane!” When we’re outside and she hears any kind of buzzing that might possibly be a plane, I’m supposed to instantly pick her up and carry her past our trees so we can look for it. “AIRPANE! AIRPANE! Ho me?” (hold me) Then when we actually see a plane, it’s “Airpane! Hi airpane!” And as it flies off, “Bye-bye airpane. Bye-bye. [sadly] Airpane all done.”

One day, Martha was trying to see airplanes, but it was cloudy. I bundled her up and we went to our local GA airport and stood in the grass watching planes. Now that was a hit! Now anytime Martha sees warehouse-type buildings, she thinks they are hangars, and begs to go to the airport. She loves to touch the airplane, climb inside it, and look at the airport beacon — even if we won’t be flying that day.

[Video: Hi big plane!]

Martha getting ready for a flight

This year, for Mother’s Day, we were going to fly to a nearby airport with a restaurant on the field. I took a photo of our family by the plane before we left. All were excited!

Mother’s Day photo

Mornings

We generally don’t let Martha watch TV, but make a few exceptions for watching a few videos and looking at family pictures. Awhile back, Martha made asked to play with me while I was getting ready for the day. “Martha, I have to get dressed first. Then I’ll play with you.” “OK,” she said.

She ran off into the closet, and came back with what she could reach of my clothing – a dirty shirt, and handed it up to me to wear. I now make sure to give her the chance to bring me socks, shirts, etc. And especially shoes. She really likes to bring me shoes.

Then we go downstairs. Sometimes she sits on my lap in the office and we watch Youtube videos of owls or fish. Or sometimes we go downstairs and start watching One Six Right, a wonderful aviation documentary. She and I jabber about what we see — she can identify the beacon (“bee”), big hangar door (“bih doh”), airplanes of different colors (“yellow one”), etc. She loves to see a little Piper Cub fly over some cows, and her favorite shot is a plane that flies behind the control tower at sunset. She’ll lean over and look for it as if it’s going around a corner.

Sometimes we look at family pictures and videos. Her favorite is a video of herself in a plane, jabbering and smiling. She’ll ask to watch it again and again.

Bedtime

Part of our bedtime routine is that I read a story to Martha. For a long time, I read her The Very Hungry Caterpillar by Eric Carle. She loved that book, and one night said “geecko” for pickle. She noticed I clapped for it, and so after that she always got excited for the geeckos and would clap for them.

Lately, though, she wants the “airpane book” – Clair Bear’s First Solo. We read through that book, she looks at the airplanes that fly, and always has an eye out for the “yellow one” and “boo one” (blue plane). At the end, she requests “more pane? More pane?”

After that, I wave goodnight to her. She used to wave back, but now she says “Goodnight, daddy!” and heads on up the stairs.

01 July, 2019 02:35PM by John Goerzen

Jonas Meurer

debian lts report 2019.06

Debian LTS report for June 2019

This month I was allocated 17 hours. I also had 1.75 hours left over from May, which makes a total of 18.75 hours. I spent 16.75h of them on the following issues, which means I again carry over 2h to the next month.

01 July, 2019 12:59PM