May 21, 2019

Molly de Blanc

remuneration

I am a leader in free software. As evidence for this claim, I like to point out that I once finagled an invitation to the Google OSCON luminaries dinner, and was once invited to a Facebook party for open source luminaries.

In spite of my humor, I am a leader and have taken on leadership roles for a number of years. I was in charge of guests of honor (and then some) at Penguicon for several years at the start of my involvement in FOSS. I’m a delegate on the Debian Outreach team. My participation in Debian A-H is a leadership role as well. I’m president of the OSI Board of Directors. I’ve given keynote presentations on two continents, and talks on four. And that’s not even getting into my paid professional life. My compensated labor has been nearly exclusively for nonprofits.

Listing my credentials in such concentration feels a bit distasteful, but sometimes I think it’s important. Right now, I want to convey that I know a thing or two about free/open source leadership. I’ve even given talks on that.

Other than my full-time job, my leadership positions come without material remuneration — that is to say, I don’t get paid for any of them — though I’ve accepted many a free meal and have had travel compensated on a number of occasions. I am not interested in getting paid for my leadership work, though I have come to believe that more leadership positions should be paid.

One of my criticisms of unpaid project/org leadership positions is that they are so time consuming that the only people who can do the jobs are:

  • students
  • contractors
  • unemployed
  • those with few to no other responsibilities
  • those with very supportive partners
  • those with very supportive employers
  • those who don’t need much sleep
  • those with other forms of financial privilege

I have few responsibilities beyond some finicky plants and Bash (my cat). I also have extremely helpful roommates and modern technology (e.g. automatic feeders) that assist with these things while traveling. I can spend my evenings and weekends holed up in my office plugging away on my free software work. I have a lot of freedom and flexibility — economic, social, professional — that affords me this opportunity. Very few of us do.

This is a problem! One solution is to pay more leadership positions; another is to have these projects hire someone in an executive director-like capacity and turn their leadership roles into advisory roles; a third is to replace the positions with committees (the problem with the latter being that most committees still have/need a leader).

Diversity is good.

The time requirements for leadership roles severely limit the pool of potential participants. This limits the perspectives and experiences brought to the positions — and diversity in experience is widely considered to be good. People from underrepresented backgrounds generally overlap with marginalized communities — including ethnic, geographic, gender, race, and socio-economic minorities.

Volunteer work is not “more pure.”

One of the arguments for not paying people for these positions is that their motives will be more pure if they are doing it as a volunteer — because they aren’t “in it for the money.” I would argue that your motives can be less pure if you aren’t being paid for your labor.

In mission-driven nonprofits, you want as much of your funding as possible to come from individual or community donors rather than corporate sponsors. You want the number of individual and community donors and members to be greater than that of your sponsors. You want to ensure you have enough money that should a corporate sponsor drop you (or you drop them), you are still in a sustainable position. You want to do this so that you are not beholden to any of your corporate or government sponsors. Freeing yourself from corporate influence allows you to focus on the mission of your work.

When searching for a volunteer leader, you need to look at them the same way you would look at a mission-driven nonprofit. Ask: What are their conflicts of interest? What happens if their employers pull away their support? What sort of financial threats are they susceptible to?

In a capitalist system, when someone is being paid for their labor, they are able to prioritize that labor. Adequate compensation enables a person to invest more fully in their work. When your responsibilities as the leader of a free software project, for which you are unpaid, come into direct conflict with the interests of your employer, who is going to win?

Note, however, that it’s important to make sure the funding to pay your leadership does not come with strings attached so that your work isn’t contingent upon any particular sponsor or set of sponsors getting what they want.

It’s a lot of work. Like, a lot of work.

By turning a leadership role into a job (even a part-time one), the associated labor can be prioritized over other labor. Many volunteer leadership positions require the same commitment as a part-time job, and some can be close to if not actually full-time jobs.

Someone’s full-time employer needs to be supportive of their volunteer leadership activities. I have some flexibility in the schedule for my day job, so I can plan meetings with people who are doing their day jobs, or in different time zones, that will work for them. Not everyone has this flexibility when they have a full-time job that isn’t their leadership role. Many people in leadership roles — I know past presidents of the OSI and previous Debian Project Leaders who will attest to this — are only able to do so because their employer allows them to shift their work schedule in order to do their volunteer work. Even when you’re “just” attending meetings, you’re doing so either with your employer giving you the time off, or using your PTO to do so.

A few final thoughts.

Many of us live in capitalist societies. One of the ways you show respect for someone’s labor is by paying them for it. This isn’t to say I think all FOSS contributions should be paid (though some argue they ought to be!), but that certain things require levels of dedication that go significantly above and beyond that which is reasonable. Our free software leaders are incredible, and we need to change how we recognize that.

(Please note that I don’t feel as though I should be paid for any of my leadership roles and, in fact, have reasons why I believe they should be unpaid.)

21 May, 2019 09:08PM by mollydb

Jonathan Wiltshire

RC candidate of the day (1)

Sometimes the list of release-critical bugs is overwhelming, and it’s hard to find something to tackle.

So I invite you to have a go at #928040, which may only be a case of reviewing and uploading the included patch.

21 May, 2019 06:39PM by Jon

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, April 2019

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 204 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 4 hours (out of 14 hours allocated, thus carrying over 10 hours to May).
  • Adrian Bunk did 8 hours (out of 8 hours allocated).
  • Ben Hutchings did 31.25 hours (out of 17.25 hours allocated plus 14 extra hours from March).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17.25 hours allocated, thus carrying over 0.25h to May).
  • Emilio Pozuelo Monfort did 8 hours (out of 17.25 hours allocated + 6 extra hours from March, thus carrying over 15.25h to May).
  • Hugo Lefeuvre did 17.25 hours.
  • Jonas Meurer did 14 hours (out of 14 hours allocated).
  • Markus Koschany did 17.25 hours.
  • Mike Gabriel did 11.5 hours (out of 17.25 hours allocated, thus carrying over 5.75h to May).
  • Ola Lundqvist did 5.5 hours (out of 8 hours allocated + 1.5 extra hours from last month, thus carrying over 4h to May).
  • Roberto C. Sanchez did 1.75 hours (out of 12 hours allocated, thus carrying over 10.25h to May).
  • Sylvain Beucler did 17.25 hours.
  • Thorsten Alteholz did 17.25 hours.

Evolution of the situation

During this month, and after a two-year break, Jonas Meurer became an active LTS contributor again. Still, we continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The number of sponsors did not change. There are 58 organizations sponsoring 215 work hours per month.

The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


21 May, 2019 02:11PM by Raphaël Hertzog

May 20, 2019

Neil Williams

New directions

It's been a difficult time, the last few months, but I've finally got some short updates.

First, in two short weeks I will be gainfully employed again at UltraSoC as Senior Software Tester, developing test framework solutions for SoC debugging, including on RISC-V. Despite vast numbers of discussions with a long list of recruitment agencies, success came from a face-to-face encounter at a local Job Fair. Many thanks to Cambridge Network for hosting the event.

Second, I've finally accepted that https://www.codehelp.co.uk was too old to retain and I'm simply redirecting the index page to this blog. The old codehelp site hasn't kept up with new technology and the CSS handles modern screen resolutions particularly badly. I don't expect that many people were finding the PHP and XML content useful, let alone the now redundant WML content. In time, I'll add redirects to the other codehelp.co.uk pages.

Third, my job hunting has shown that the centralisation of decentralised version control is still a thing. As far as recruitment is concerned, if the code isn't visible on GitHub, it doesn't exist. (It's not the recruitment agencies asking for GitHub links, it is the company HR departments themselves.) So I had to add a bunch of projects to GitHub and there's a link now in the blog.

Time to pick up some Debian work again, well after I pay a visit or two to the Cambridge Beer Festival 2019, of course.

20 May, 2019 02:15PM by Neil Williams

Dirk Eddelbuettel

digest 0.6.19

Overnight, digest version 0.6.19 arrived on CRAN. It will get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects.

This version contains two new functions adding new digest functionality. First, Dmitriy Selivanov added a fast and vectorized digest2int to convert (arbitrary) strings into 32 bit integers using one-at-a-time hashing. Second, Kendon Bell, over a series of PRs, put together a nice implementation of spookyhash as a first streaming hash algorithm in digest. So big thanks to both Dmitriy and Kendon.
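For flavour, a quick sketch of the two additions (usage assumed from the package documentation, not taken from the announcement itself):

library(digest)

# vectorised one-at-a-time hashing of strings to 32-bit integers
digest2int(c("apple", "banana"))

# spookyhash as the algorithm when hashing a (serialized) R object
digest(letters, algo = "spookyhash")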

No other changes were made.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 May, 2019 11:48AM

Candy Tsai

How I Got In The Outreachy 2019 March-August Internship – The Application Process

Blah: Introduction

Really excited to be accepted for the project “Debian Continuous Integration: user experience improvements” (referred to as debci in this post) of the 2019 March-August round of the Outreachy internship! A huge thanks to my company and my manager Frank for letting me do this, even though I mentioned it out of the blue. Thanks to the Women Techmakers community for letting me know this program exists.

There are already blog posts that introduce the program, such as:

  1. How I got into Mozilla’s Outreachy open source internship program
  2. My Pathway to Outreachy with Mozilla
  3. Outreachy: What? How? Why?

To me, the biggest difference between Outreachy and Google Summer of Code (GSoC) is that you don’t have to be a student in order to apply.

This post won’t go into the details of “What is Outreachy”, focusing instead on the process, where everyone will have a different story. This is my version, and I hope that you can find yours in the near future!

Goals: The Why

What I like about Outreachy’s application process is that it definitely lets you think about why you want to apply. For me, things were pretty simple and straightforward:

  • Experience what it is like to work remotely
  • Use my current knowledge to contribute to open source
  • Learn something different from my current job

Actually the most important reason that I kind of feel bad mentioning here is that I felt like leaving the male-dominated tech space for a bit. My colleagues are really nice and friendly, but… it’s hard to put into words.

Mindset: Start Right Away

The two main reasons I failed in the past:

  1. Hesitation
  2. Spent too much time browsing the project list

Hesitation

I had known about Outreachy since 2017, but because it requires you to make a few contributions in order to apply, any bit of hesitation results in a late start. It was a bit scary to approach the project mentors, and I thought my code had to be perfect in order to make a contribution. The truth is, without discussion, you might not know the details of the issue, hence you can’t even start coding. Almost every accepted applicant mentions the importance of starting early. To be precise, start on the day applications open.

Spent too much time browsing the project list

Another reason that kept me from starting right away was that I had been browsing the project list for too long. Since the project list on the first day is not complete, projects I might be more interested in can join the list as time passes. Past projects can be referenced to get a better picture of which organizations were involved, but it is never a 100% sure bet. Also, the organizations participating in the March-August round are different from those in the December-March round. To avoid starting too late, the strategy I used was to choose 2 projects to contribute to: one in the initial phase (the first week or so), and another during the following weeks.

Strategy: Choose 2 Projects

Choosing how many projects to work on really depends on the time you have available. The main idea of this strategy was to eliminate the cause of spending too much time browsing the project list. Since I already had a full-time job at the time, I really had to weigh my priorities. To be honest, I barely had time to work on the second project.

On the day the project list was announced, I quickly matched my skills against the projects available and decided to try applying for Mozilla. Yep, you heard me right: my first choice wasn’t Debian, because Mozilla seemed more familiar to me. Then I instantly realized that a flood of applicants were also applying for Mozilla. All of the newcomer issues were claimed, and it all happened in just a matter of days!

I started to look for other projects that were also in line with my goals, which led me to debci. I had never used Ruby in a project, nor the Debian system. On the other hand, I’m familiar with the other skills listed in the requirements, so some of my knowledge could still be utilized.

The second project was announced at a later stage and came from OpenStack. I had to admit it was a little too hard for me to set up the Ironic bare-metal environment, so I wasn’t able to put in much.

Plan: Learn About the Community

An important aspect of the application process was getting in touch with the community. Although debci and OpenStack Ironic both use IRC chat, the two feel very different. From a wiki search, it seems OpenStack is backed by a corporate entity while Debian is run by volunteers.

Despite the difference, both communities were pretty active, with OpenStack involving more members. As long as the community was active and friendly, it fit the vision I was looking for.

Execution: Communication

Looking back at the contribution process, it actually took more time than I initially imagined. The whole application process consists of three stages:

  1. Initial application: has to wait a few days for it to be approved
  2. Record Contributions: the main part
  3. Final application: final dash

Except for the initial application, which I could do by myself, the rest involved communicating with others. Communication differs a lot compared to an office environment. My first merge request (a.k.a. pull request) had a ton of comments, and I couldn’t understand what the comments were suggesting at first. Things started to clear up after some discussion, and I was really excited to have it merged. This was huge for me since it all happened online, with a bit of a time lag; in an office environment, a colleague would just come around for a face-to-face discussion.

TL;DR

I had no idea I had written so many words, so I guess I will stop for now. Up until now, I haven’t mentioned much about writing code, and that’s because you will feel for yourself whether you can get through during the process. So the TL;DR version of this post is:

  • Do not hesitate, just do it
  • Start as soon as applications are open
  • Do not lurk around the project list
  • Get in touch with the community and mentors
  • Communicate about the issues

Really excited to begin this Outreachy journey with debci and grateful for this opportunity. Stay tuned for more articles about the project itself!

20 May, 2019 09:57AM by Candy Tsai

Bits from Debian

Lenovo Platinum Sponsor of DebConf19

We are very pleased to announce that Lenovo has committed to supporting DebConf19 as a Platinum sponsor.

"Lenovo is proud to sponsor the 20th Annual Debian Conference." said Egbert Gracias, Senior Software Development Manager at Lenovo. "We’re excited to see, up close, the great work being done in the community and to meet the developers and volunteers that keep the Debian Project moving forward!”

Lenovo is a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office solutions and data center solutions.

With this commitment as Platinum Sponsor, Lenovo is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Lenovo, for your support of DebConf19!

Become a sponsor too!

DebConf19 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf19 website at https://debconf19.debconf.org.

20 May, 2019 08:00AM by Laura Arjona Reina

Keith Packard

itsybitsy-snek

ItsyBitsy Snek — snek on the Adafruit ItsyBitsy

I got an ItsyBitsy board from Adafruit a few days ago. This board is about as minimal an Arduino-compatible device as I can imagine. All it's got is an Atmel ATmega 32U4 SoC, one LED, and a few passive components.

I'd done a bit of work with the 32u4 under AltOS a few years ago when Bdale and I built a 'companion' board called TeleScience for TeleMetrum to try and measure rocket airframe temperatures in flight. So, I already had some basic drivers for some of the peripherals, including a USB driver.

USB Adventures

The 32u4 USB hardware is simple, and actually fairly easy to use. The AltOS driver used a separate thread to manage the setup messages on endpoint 0. I didn't imagine I'd have space for threading on this device, so I modified that USB driver to manage setup processing from the interrupt handler. I'd done that on a bunch of other USB parts, so while it took longer than I'd hoped, I did manage to get it working.

Then I spent a whole bunch of time reducing the code size of this driver. It started at about 2kB and is now almost down to 1kB. It's a bit less robust now; hosts sending odd setup messages may get unexpected results.

The last thing I did was to add a FIFO for OUT data. That's because we want to be able to see ^C keystrokes even while Snek is executing code.

Reset as longjmp

On the ATmega 328P, to reset Snek, I just reset the whole chip. Nice and clean. With integrated USB, I can't reset the chip without losing the USB connection, and that would be pretty annoying. Resetting Snek's state back to startup would take a pile of code, so instead, I gathered all of the snek-related .data and .bss variables by changing the linker script. Then, I wrote a reset function that does pretty much what the libc startup code does and then jumps back to main:

snek_poly_t
snek_builtin_reset(void)
{
    /* reset data */
    memcpy_P(&__snek_data_start__,
         (&__text_end__ + (&__snek_data_start__ - &__data_start__)),
          &__snek_data_end__ - &__snek_data_start__);

    /* reset bss */
    memset(&__snek_bss_start__, '\0', &__snek_bss_end__ - &__snek_bss_start__);

    /* and off we go! */
    longjmp(snek_reset_buf, 1);
    return SNEK_NULL;
}

I still need to write code to reset the GPIO pins.
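For context, the matching setjmp side looks roughly like this (a minimal sketch, not the actual snek sources; snek_interact is a hypothetical stand-in for whatever drives the interpreter loop):

#include <setjmp.h>

jmp_buf snek_reset_buf;

int
main(void)
{
    /* longjmp(snek_reset_buf, 1) in snek_builtin_reset lands here */
    setjmp(snek_reset_buf);
    for (;;)
        snek_interact();
}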

Development Environment

To flash firmware to the device, I stuck the board into a proto board and ran jumpers from my AVRISP cable to the board.

Next, I hooked up a FTDI USB to Serial converter to the 32u4 TX/RX pins. Serial is always easier than USB, and this was certainly the case here.

Finally, I dug out my trusty Beagle USB analyzer. This lets me see every USB packet going between the host and the device and is invaluable for debugging USB issues.

You can see all of these pieces in the picture above. They're sitting on top of a knitting colorwork pattern of snakes and pyramids, which I may have to make something out of.

Current Status

Code for this part is on the master branch, which is available on my home machine as well as github.

I think this is the last major task to finish before I release snek version 1.0. I really wanted to see if I could get snek running on this tiny target. It's nearly there; I want to squeeze a few more things onto this chip.

20 May, 2019 07:17AM

Petter Reinholdtsen

MIME type "text/vnd.sosi" for SOSI map data

As part of my involvement in the work to standardise a REST based API for Noark 5, the Norwegian archiving standard, I spent some time the last few months to try to register a MIME type and PRONOM code for the SOSI file format. The background is that there is a set of formats approved for long term storage and archiving in Norway, and among these formats, SOSI is the only format missing a MIME type and PRONOM code.

What is SOSI, you might ask? To quote Wikipedia: SOSI is short for Samordnet Opplegg for Stedfestet Informasjon (literally "Coordinated Approach for Spatial Information", but more commonly expanded in English to Systematic Organization of Spatial Information). It is a text based file format for geo-spatial vector information used in Norway. Information about the SOSI format can be found in English from Wikipedia. The specification is available in Norwegian from the Norwegian mapping authority. The SOSI standard, which originated in the beginning of the nineteen eighties, was the inspiration for and formed the basis of the XML based Geography Markup Language.

I have so far written a pattern matching rule for the file(1) unix tool to recognize SOSI files, submitted a request to the PRONOM project to have a PRONOM ID assigned to the format (reference TNA1555078202S60), and today sent a request to IANA to register the "text/vnd.sosi" MIME type for this format (reference IANA #1143144). If all goes well, in a few months, anyone implementing the Noark 5 Tjenestegrensesnitt API specification should be able to use an official MIME type and PRONOM code for SOSI files. In addition, anyone using SOSI files on Linux should be able to automatically recognise the format, and web sites handing out SOSI files can begin providing a more specific MIME type. So far, SOSI files have been handed out from web sites using the "application/octet-stream" MIME type, which is just a nice way of stating "I do not know". Soon, we will know. :)
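Such a file(1) magic rule could look roughly like this (a sketch only, assuming SOSI files begin with the ".HODE" header token, which may not hold for every variant):

0	string	.HODE	SOSI map data
!:mime	text/vnd.sosi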

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20 May, 2019 06:35AM

May 19, 2019

Louis-Philippe Véronneau

Am I Fomu ?

A few months ago at FOSDEM 2019 I got my hands on a pre-production version of the Fomu, a tiny open-hardware FPGA board that fits in your USB port. Building on the smash hit of the Tomu, the Fomu uses an ICE40UP5K FPGA instead of an ARM core.

I've never really been into hardware hacking, and much like hacking on the Linux kernel, messing with wires and soldering PCB boards always intimidated me. From my perspective, playing around with the Fomu looked like a nice way to test the water without drowning in it.

Since the bootloader wasn't written at the time, when I first got my Fomu hacker board there was no easy way to test if the board was working. Lucky for me, Giovanni Mascellani was around and flashed a test program on it using his Raspberry Pi and a bunch of hardware probes. I was really impressed by the feat, but it also seemed easy enough that I could do it.

My flashing jig

Back at home, I ordered a Raspberry Pi, bought some IC hooks and borrowed a soldering iron from my neighbour. It had been a while since I had soldered anything! Last time I did I was 14 years old and trying to save a buck making my own fencing mask and body cords...

My goal was to test foboot, the new DFU-compatible bootloader recently written by Sean Cross (xobs) to make flashing programs on the board more convenient. Replicating Giovanni's setup, I flashed the Fomu Raspbian image on my Pi and compiled the bootloader.

It took me a good 15 minutes to connect the IC hooks to the board, but I was successfully able to flash foboot on the Fomu! The board now greets me with:

[ 9751.556784] usb 8-2.4: new full-speed USB device number 31 using xhci_hcd
[ 9751.841038] usb 8-2.4: New USB device found, idVendor=1209, idProduct=70b1, bcdDevice= 1.01
[ 9751.841043] usb 8-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9751.841046] usb 8-2.4: Product: Fomu Bootloader (0) v1.4-2-g1913767
[ 9751.841049] usb 8-2.4: Manufacturer: Kosagi

I don't have a use case for the Fomu yet, but I am sure by the time the production version ships out, people will have written interesting programs I can flash on it. In the meantime, it'll blink slowly in my laptop's USB port.

19 May, 2019 09:30PM by Louis-Philippe Véronneau

Joey Hess

80 percent

I added dh to debhelper a decade ago, and now Debian is considering making use of dh mandatory. Not being part of Debian anymore, I'm in the position of needing to point out something important about it anyway. So this post is less about pointing in a specific direction than about giving a different angle to think about things.

debhelper was intentionally designed as a 100% solution for simplifying building Debian packages. Any package it's used with gets simplified and streamlined and made less a bother to maintain. The way debhelper succeeds at 100% is not by doing everything, but by being usable in little pieces, that build up to a larger, more consistent whole, but that can just as well be used sparingly.

dh was intentionally not designed to be a 100% solution, because it is not a collection of little pieces, but a framework. I first built an 80% solution, which is the canned sequences of commands it runs plus things like dh_auto_build that guess at how to build any software. Then I iterated to get closer to 100%. The main iteration was override targets in the debian/rules file, to let commands be skipped or run out of order or with options. That closed dh's gap by a further 80%.
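For example, a debian/rules file using an override target looks like this (a generic sketch, not tied to any particular package; the configure flag is made up):

#!/usr/bin/make -f

%:
	dh $@

# Run dh_auto_configure with an extra option instead of the canned default.
override_dh_auto_configure:
	dh_auto_configure -- --enable-foo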

So, dh is probably somewhere around a 96% solution now. It may have crept closer still to 100%, but it seems likely there is still a gap, because it was never intended to completely close the gap.

Starting at 100% and incrementally approaching 100% are very different design choices. The end results can look very similar, since in both cases it can appear that nearly everyone has settled on doing things in the same way. I feel though, that the underlying difference is important.

PS: It's perhaps worth re-reading the original debhelper email and see how much my original problems with debstd would also apply to dh if its use were mandatory!

19 May, 2019 04:43PM

Dirk Eddelbuettel

RQuantLib 0.4.9: Another small updates

A new version 0.4.9 of RQuantLib reached CRAN and Debian. It completes the change of some internals of RQuantLib to follow suit to an upstream change in QuantLib. We can now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.
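In code, the point is that QuantLib::ext resolves to whichever shared_ptr implementation QuantLib was configured with, so downstream code stays agnostic (a minimal sketch, assuming the ql/shared_ptr.hpp header from recent QuantLib releases):

#include <ql/shared_ptr.hpp>

int main() {
    // QuantLib::ext::shared_ptr is boost::shared_ptr or std::shared_ptr,
    // depending on how QuantLib was built.
    QuantLib::ext::shared_ptr<double> p = QuantLib::ext::make_shared<double>(1.0);
    return *p > 0 ? 0 : 1;
}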

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.9 (2019-05-15)

  • Changes in RQuantLib code:

    • Completed switch to QuantLib::ext namespace wrappers for either shared_ptr use started in 0.4.8.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 May, 2019 01:28PM

Andrew Cater

systemd.unit=rescue.target

Just another quick one-liner: a Grub config argument which I had to dig for, but which is really useful when this sort of thing happens.

Faced with a server that was rebooting after an upgrade and dropping to the systemd emergency target:

Rebooting and adding

systemd.unit=rescue.target

to the end of the Linux command line in the Grub config as the machine booted, and then pressing F10, allowed me to drop to a full-featured rescue environment with read/write access to the disk and to sort out the partial upgrade mess.
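For example, the amended linux line in the Grub editor ends up looking something like this (kernel version and root device are illustrative):

linux /boot/vmlinuz-4.9.0-9-amd64 root=UUID=... ro quiet systemd.unit=rescue.target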

19 May, 2019 11:17AM by Andrew Cater (noreply@blogger.com)

David Kalnischkies

Newbie contributor: A decade later

Time flies. On this day, 10 years ago, a certain someone sent in his first contribution to Debian in Debbugs#433007: --dry-run can mark a package manually installed (in real life). What follows is me babbling randomly about what led to and happened after that first patch.

That wasn't my first contribution to open source: I implemented (more like copy-pasted) mercurial support in the VCS plugin of the editor I was using back in 2008: Geany – I am pretty sure my code has been completely replaced by now; I just remain named in THANKS, which is very nice considering I am not a user anymore. My contributions to apt were coded in vim(-nox) already.

It was the first time I put my patch under public scrutiny though – my contribution to geanyvc was by private mail to the plugin maintainer – and not by just anyone, but by the venerable masters operating in a mailing list called deity@.

I had started looking into apt code earlier and had even written some patches for myself without actually believing that I would go as far as handing them in. Some got in anyhow later, like the first commit with my name, dated May the 7th, allowing codenames to be used in pinning, which dates the manpage changes as being written on the 4th. Exactly when I really started with apt is lost to history by now, but today (a decade ago) I got serious: I joined IRC, the mailing list and commented on the bugreport mentioned above. I even pushed my branch of random things I had done to apt to launchpad (which back then was hosting the bzr repository).

The response was overwhelming. The bugreport has no indication of it, but Michael jumped at me. I realized only later that he was the only remaining active team member in the C++ parts. Julian was mostly busy with Python at the time and Christian turned out to be Mr. L18n with duties all around Debian. The old guard had left as well as the old-old guard before them.

I got quickly entangled in everything. Michael made sure I got invited by Canonical to UDS-L in November of 2009 – 6 months after saying hi. I still can't really believe that 21-year-old me made his first-ever flight across the ocean to Dallas, Texas (USA) because some people on the internet invited him over. So there I was, standing in front of the airport with the slow realisation that while I had been busy being scared about the flight, the week and everything, I had never really worried about how to get from the airport to the hotel. An inner monologue started: "You got this, you just need the name of the hotel and look for a taxi. You wrote the name down, right? No? Okay, you can remember the name anyhow, right? Just say it and … why are you so silent? Say it! … Goddammit, you are …" – "David?" interrupted my inner voice. Of all people in the world, I happened to meet Michael for the first time right in front of the airport. "Just as planned, you meany inner voice", I was kidding myself after getting in a taxi with a few more people.

I met so many people over the following days! It was kinda scary, very taxing for an introvert, but also 100% fun. I also met the project that would turn me from promising newbie contributor to APT developer via Google Summer of Code 2010: MultiArch. There was a session about it, and this time around it should really happen. I was sitting in the back, hiding but listening closely. Thankfully nobody called me out, as I was scared: I can't remember who it was, but someone said that in dpkg MultiArch could be added in two weeks. Nobody had to say it; for me it was clear that this meant APT would be the blocker, as that most definitely would not happen in two weeks. Not even months. More like years, if at all. What was I to do? Cut my losses and run? Na, sunk cost fallacy be damned. I hadn't lost anything; I had learned and enjoyed plenty of things granted to me by supercow, and that seemed like a good opportunity to give back.

But there was so much to do. The cache had to grow dynamically (remember "mmap ran out of room" and feel old), commandline interfaces needed to be adapted, the resolver… oh my god, the resolver! And to top it all off, APT had no tests to speak of. So after the UDS I started tackling them all: My weekly reports for GSoC2010 provide a glimpse into the abyss, but lots happened before and after as well. Many of the decisions I made back then are still powering APT. The shell scripting framework I wrote to be able to perform some automatic testing of apt (I had quickly gotten tired of manual testing) consists as of today of 255 scripts run not only by me but by many CI services, including autopkgtest. It probably prevented me from introducing thousands of regressions over the years. It grew into kind of a monster (2000+ lines of posix shellscript providing the test framework alone), can be a bit slow (it can take more than the default 30min on salsa; for me locally it is about 3 minutes) and has a strange function naming convention (all lowercase, no separator: e.g. insertinstalledpackage). Nobody said you can't make mistakes.

And I made them all: First bug caused by me. First regression with complaints hitting d-devel. First security bug. It was always scary. It still is, especially as simple probability kicks in and the numbers increase, combined with seemingly more hate generated on the internet: The last security bug had people identify me as purposefully malicious. All my contributions should be removed – reading that made me smile.

Lots and lots of things happened since my first patch. git tells me that 174+ people have contributed to APT over the years. The top 5 contributors of all time (as of today):

  • 2904 commits by Michael Vogt (active mostly as wizard)
  • 2647 commits by David Kalnischkies (active)
  • 1304 commits by Arch Librarian (all retired, see note)
  • 1008 commits by Julian Andres Klode (active)
  • 641 commits by Christian Perrier (retired)

Note that "Arch Librarian" isn't a person, but a conversion artefact: Development started in 1998 in CVS which was later converted to arch (which eventually turned into bzr) and this CVS→arch conversion preserved the names of the initial team as CVS call signs in the commit messages only. Many of them belong hence to Jason Gunthorpe (jgg). Christians commits meanwhile are often times imports of po files for others, but there is still lots of work involved with this so that spot is well earned even if nowadays with git we have the possibility of attributing the translator not only in the changelog but also as author in the commit.

There is a huge gap after the top 5, with runner-up Matt Zimmerman at 116 counted commits (but some Arch Librarian commits are his, too). And the gap for me to claim the throne isn't that small either, but I am working on it… 😉︎ I have also put enough distance between me and Julian that it will still take a while for him to catch up, even if he is trying hard at the moment.

The next decade will be interesting: Various changes are queuing up in the master branch for a major break in ABI and API and a bunch of new stuff is still in the pipeline or on the drawing board. Some of these things I patched in all these years ago never made it into apt so far: I intend to change that this decade – you are supposed to have read this in "to the moon" style and erupt in a mighty cheer now so that you can't hear the following – time permitting, as so far this is all talk on my part.

The last year(s) had me contributing less than I would have liked due to – pardon my french – crazy shit I will hopefully be able to leave behind this (or at least next) year. I hadn't thought it would show that drastically in the stats, but looking back it is kinda obvious:

  • In year 2009 David made 167 commits
  • In year 2010 David made 395 commits
  • In year 2011 David made 378 commits
  • In year 2012 David made 274 commits
  • In year 2013 David made 161 commits
  • In year 2014 David made 352 commits
  • In year 2015 David made 333 commits
  • In year 2016 David made 381 commits
  • In year 2017 David made 110 commits
  • In year 2018 David made 78 commits
  • In year 2019 David made 18 commits so far

Let's make that number great again this year, as I finally applied and got approved as DD in 2016 (I didn't want to apply earlier) and decreasing contributions (completely unrelated, but still) since then aren't a proper response! 😉︎

Also: I enjoyed the many UDSes, the DebConfs and other events I got to participate in in the last decade and hope there are many more yet to come!

tl;dr: Looking back at the last decade made me realize that a) I seem to have a high luck stat, b) too few people contribute to apt given that I remain the newest team member and c) I love working on apt for all the things which happened due to it. If only I could do that full-time like I did as part of summer of code…

P.S.: The series APT for … will return next week with a post I had promised months ago.

19 May, 2019 10:34AM

May 17, 2019

Debian XMPP Team

Debian XMPP Team Starts a Blog

The Debian XMPP Team, the people who package Dino, Gajim, Mcabber, Movim, Profanity, Prosody, Psi+, Salut à Toi, Taningia, and a couple of other packages related to XMPP a.k.a. Jabber for Debian, have this blog now. We will try to post interesting stuff here — when it's ready!

17 May, 2019 11:30PM by Martin

May 16, 2019

Matrix on Debian blog

Welcome to Matrix on Debian blog

This is the first post on the Matrix on Debian blog. The Debian Matrix team will be regularly posting updates here on the progress of our packaging work and the overall status of Matrix.org software in Debian.

Come chat with us in the Matrix room we created (#debian-matrix:matrix.org)!

16 May, 2019 01:02PM by Andrej Shadura

Jonathan Dowland

PhD Proposal

For my PhD, I'm currently working on my "1st" year Progression Report. The last formal deliverable I produced was my Project Proposal a (calendar) year ago. I've just realised I hadn't shared that here, so here we go, in the hope that this is interesting and/or useful to someone: PhD Proposal - Jon Dowland.pdf

When I started working on that, I cast around to find examples of other people's. I've attempted to do much the same for my Progression Report. In both cases I have been unable to find many examples of other people's proposals or reports. The exact format of these things is likely specific to your particular institution, or even your academic unit within your institution, and so a document produced for one institution's expectations might not be directly applicable to another. (I didn't want to directly apply such a thing, of course.) If you do find a sample, you don't have any idea whether it was judged to be a particularly good or bad one by those who received it (you can make your own judgements). This is true of my own Proposal too.

In a "normal", full-time PhD, you would likely produce a proposal within a few months of starting, and your first Progression Report towards the end of your first academic (not calendar) year: so, a mere 6 months or so later. Since I am doing things part-time, this is all stretched out: I submitted the proposal in March last year, and my Progression Report is due next month, in June. Looking back at the Proposal now (for the first time in a while, I must admit), it's remarkable to me how "far" the formulation of my goals from then is compared to now.

Once I've had my Progression report passed I hope to share it here, too.

16 May, 2019 10:51AM

RHEL8-based OpenShift OpenJDK containers

I'm pleased to announce that something I've been working on for the last 6 or so months is now public: Red Hat Enterprise Linux 8-based OpenJDK containers, for use with OpenShift. There are two flavours, one with OpenJDK 1.8 (8) and another for OpenJDK 11. These join the existing two RHEL7-based images.

If you have a Red Hat OpenShift subscription, follow the instructions in this Errata to update your image streams. The new images are named:

registry.redhat.io/openjdk/openjdk-8-rhel8
registry.redhat.io/openjdk/openjdk-11-rhel8

Last week Red Hat announced the Universal Base Image initiative: RHEL-based container images, intended to be a suitable base for other images to build upon, and available without a Red Hat subscription under a new EULA.

Our new OpenShift OpenJDK RHEL8 containers are built upon the UBI, as are (I believe) all RHEL8-based containers, but they are not currently available under the UBI EULA, as we incorporate content from the regular RHEL8 repositories that is not present in the UBI. If a UBI-based OpenJDK image, distributed under the UBI terms, would be interesting to you, please get in touch! What could it look like? Small, or kitchen-sink? Would you want builder content in it, or pure run-time? What environment would you want to use it in: OpenShift, or something else?

16 May, 2019 10:01AM

May 15, 2019

Markus Koschany

My Free Software Activities in April 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • This was a very quiet month compared to pre-freeze time. I reported three security vulnerabilities for Teeworlds (#927152) which were later fixed by Dylan Aïssi. Thank you.
  • I also reviewed and sponsored a new revision of OpenMW for Bret Curtis. I’m not sure why he didn’t ask the release team for an unblock but there may be a reason.

Debian Java

  • I fixed a security vulnerability in robocode (#926088) and asked for an unblock.
  • I corrected a mistake in solr-tomcat and learned that, if you want to override a service file of another package (tomcat9), the conf file has to be installed into
    /etc/systemd/system/tomcat9.service.d/

    instead of /etc/systemd/system/tomcat9.d. *sigh* (A sketch of such a drop-in follows below.)
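For illustration, a drop-in in that directory looks like this (a hypothetical example, not the actual solr-tomcat conf):

# /etc/systemd/system/tomcat9.service.d/solr.conf
[Service]
# Each setting here overrides or extends the packaged tomcat9.service.
ReadWritePaths=/var/lib/solr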

Misc

  • Last month I wrote about the challenges of the ublock-origin addon (#926586). We came to the conclusion that we can no longer provide one version for Firefox and Chromium, but that we don’t have to create two binary packages either. Now we use symlinks and two different directories, and hopefully this will solve all the troubles we had before. It is not a great solution, but hopefully we can maintain the addon without relying on patches. Thanks to Michael Meskes who implemented the changes. I will probably upload a new version to experimental in May, so that people can try it out and report back.

Debian LTS

This was my thirty-eighth month as a paid contributor and I have been paid to work 17.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 29.04.2019 until 05.05.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in rebar, filezilla, lucene-solr, librecad, apparmor, phpbb3, jakarta-jmeter, jetty8, jetty, php-imagick and node-tar.
  • DLA-1753-2. Issued a regression update for proftpd-dfsg because it became clear that neither version 1.3.5.e nor 1.3.6 was a way forward to address the memory leaks because those versions also introduced new bugs that affected sftp setups negatively (#926719). I resolved these problems by backporting the patches for the memory leaks and by reverting to version 1.3.5 again.
  • DLA-1773-1. Issued a security update for signing-party fixing 1 CVE.
  • DLA-1774-1. Issued a security update for otrs2 fixing 1 CVE.
  • DLA-1775-1. Issued a security update for phpbb3 fixing 1 CVE.
  • DLA-1776-1. Issued a security update for librecad fixing 1 CVE.
  • DLA-1785-1. Issued a security update for imagemagick together with Hugo Lefeuvre (3 CVE) fixing 50 CVE in total.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my eleventh month and I have been paid to work 14.5 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 15.04.2019 until 21.04.2019 and I triaged CVE in openjdk7, php5 and libvirt.
  • ELA-72-2. Issued a regression update for jasper which corrected the patch for CVE-2018-19542.
  • ELA-109-1. Issued a security update for jquery fixing 1 CVE.
  • ELA-111-1. Issued a security update for linux and linux-latest fixing 24 CVE.
  • ELA-117-1. Issued a security update for apache2 fixing 2 CVE and investigated four more CVE which I triaged as not-affected.

Thanks for reading and see you next time.

15 May, 2019 09:25PM by Apo

Jonathan McDowell

Go Baby Go

I’m starting a new job next month and their language of choice is Go. Which means I have a good reason to finally get around to learning it (far too many years after I saw Marga talk about it at DebConf). For that I find I need a project - it’s hard to find the time to just do programming exercises, whereas if I’m working towards something it’s a bit easier. Naturally I decided to do something home automation related. In particular I bought a couple of Xiaomi Mijia Temperature/Humidity sensors a while back which also report via Bluetooth. I had a set of shell scripts polling them every so often to get the details, but it turns out they broadcast the current status every 2 seconds. Passively listening for that is a better method as it reduces power consumption on the device - no need for a 2 way handshake like with a manual poll. So, the project: passively listen for BLE advertisements, make sure they’re from the Xiaomi device and publish them via MQTT every minute.

One thing that puts me off new languages is when they have a fast moving implementation - telling me I just need to fetch the latest nightly to get all the features I’m looking for is a sure fire way to make me hold off trying something. Go is well beyond that stage, so I grabbed the 1.11 package from Debian buster. That’s only one release behind current, so I felt reasonably confident I was using a good enough variant. For MQTT the obvious choice was the Eclipse Paho MQTT client. Bluetooth was a bit trickier - there were more options than I expected (including one by Paypal), but I settled on go-ble (sadly now in archived mode), primarily because it was the first one where I could easily figure out how to passively scan without needing to hack up any of the library code.
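To give a flavour of the moving parts, here's a stripped-down sketch of the MQTT half using the Paho client. The broker address, topic and hard-coded readings are all placeholders; the real tool feeds in values decoded from the BLE advertisements:

package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().AddBroker("tcp://broker.example:1883")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	// Placeholder readings; the real tool decodes these from the
	// Xiaomi BLE advertisement payload.
	temperature, humidity := 21.5, 48.0

	// Publish the latest values once a minute.
	for range time.Tick(time.Minute) {
		payload := fmt.Sprintf("%.1f %.1f", temperature, humidity)
		client.Publish("sensors/mijia", 0, false, payload)
	}
}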

With all those pieces it was fairly easy to throw together something that does the required steps in about 200 lines of code. That seems comparable to what I think it would have taken in Python, and to a large extent the process felt a lot closer to writing something in Python than in C.

Now, this wasn’t a big task in any way, but it was a real problem I wanted to solve and it brought together various pieces that helped provide me with an introduction to Go. I’ve a lot more to learn, but I figure I should write up my initial takeaways. There’s no mention of goroutines or channels or things like that - I’m aware of them, but I haven’t yet had a reason to use them so don’t have an informed opinion at this point.

I should point out I read Rob Pike’s Go at Google talk first, which helped understand the mindset behind Go a lot - it’s not trying to solve the same problem as Rust, for example, but very much tailored towards a set of the problems that Google see with large scale software development. Also I’m primarily coming from a background in C and C++ with a bit of Perl and Python thrown in.

The Ecosystem is richer than I expected

I was surprised at the variety of Bluetooth libraries available to me. For a while I wasn’t sure I was going to find one that could do what I needed without hackery, but most of the Python BLE modules have the same problem.

Static binaries are convenient

Go builds a mostly static binary - my tool only links against various libraries from libc, with the Bluetooth and MQTT Go modules statically linked into the executable file. With my distro-minded head on I object to this; it means I need a complete rebuild in case of any modification to the underlying modules. However the machine I'm running the tool on is different from the machine I developed on, and there's no doubt that being able to copy a single binary over, rather than having to worry about all the supporting bits as well, is a real time saver.

The binaries are huge

This is the flip side of static binaries, I guess. My tool is a 7.6MB binary file. That’s not a problem on my AMD64 server, but even though Go seems to have Linux/MIPS support I doubt I’ll be running things built using it on my OpenWRT router. Memory usage seems sane enough, but that size of file is a significant chunk of the available flash storage for small devices.

Module versioning isn’t as horrible as I expected

A few years back I attended a Go talk locally and asked a question about module versioning and the fact that by default modules were pulled directly from Git repositories, seemingly without any form of versioning. The speaker admitted that their example code had in fact failed to compile the previous day because of a change upstream that changed an API. These days things seem better; I was pointed at go mod and in particular setting GO111MODULE=on for my 1.11 compiler, and when I first built my code Go created a go.mod with a set of versioned dependencies. I’m still wary of build systems that automatically grab code from the internet, and the pinning of versions conflicts with an ability to be able to automatically rebuild and pick up module security fixes, but at least there seems to be some thought going into this these days.
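The generated go.mod ends up looking something like this (module path and version strings invented for illustration):

module example.com/mijia-mqtt

require (
	github.com/eclipse/paho.mqtt.golang v1.1.1
	github.com/go-ble/ble v0.0.0-20190509000000-abcdef123456
)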

I love maps

Really this is more a generic thing I miss when I write C. Perl hashes, Python dicts, Go maps. An ability to easily stash things by arbitrary reference without having to worry about reallocation of the holding structure. I haven’t delved into other features Go has over C particularly yet so I’m sure there’s more to take advantage of, but maps are a good start.
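For instance (a trivial sketch; the sensor MAC used as the key is made up):

package main

import "fmt"

func main() {
	// Keys can be added freely; the map grows as needed.
	readings := make(map[string]float64)
	readings["4c:65:a8:d0:12:34"] = 21.5

	if v, ok := readings["4c:65:a8:d0:12:34"]; ok {
		fmt.Println("temperature:", v)
	}
}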

The syntax is easy enough

The syntax for Go felt comfortable enough to me. I had to look a few bits and pieces up, but nothing grated. go fmt is a nice touch; I like the fact that modern languages are starting to have a well defined preferred style. It’s a long time since I wrote any Pascal, but as a C programmer things made sense.

I’m still not convinced about garbage collection

One of the issues I hit while developing my tool was that it would sit and spin and take more and more memory. This turned out to be a combination of some flaky Bluetooth hardware returning odd responses, and my failure to handle the returned error message. Ultimately this resulted in a resource leak causing the growing memory use. This would still have been possible without garbage collection, but I think not having to think about memory allocation/deallocation made me more complacent. Relying on the garbage collector to free up resources means you have to be sure nothing is holding a reference any more (even if it won’t use it). I think it will take further time with Go development to fully make my mind up, but for now I’m still wary.

Code, in the unlikely event it’s helpful to anyone, is on GitHub.

15 May, 2019 07:57PM

Steinar H. Gunderson

Bug fest

Yesterday:

It's a great time to be alive! Honorable mention goes to Alabama's new abortion laws. :-/

15 May, 2019 07:45AM

Keith Packard

snek-neopixel

Snek and Neopixels

(click on the picture to see the movie)

Adafruit sells a bunch of things using the Neopixel name that incorporate Worldsemi WS2812B full-color LEDs with built-in drivers. These devices use a 1-wire link to program a 24-bit rgb value and can be daisy-chained to connect as many devices as you like using only one GPIO.

Bit-banging Neopixels

The one-wire protocol used by Neopixels has three signals:

  • Short high followed by long low for a 0 bit
  • Long high followed by a short low for a 1 bit
  • Really long low for a reset code

Short pulses are about 400ns, long pulses are around 800ns. The reset pulse is anything over about 50us.

I'd like to use some nice clocked signal coming out of the part to generate these pulses. A SPI output would be ideal; set the bit rate to 400ns and then send three SPI bits for each LED bit, either 100 or 110. Alas, none of the boards I've got connect the Neopixels to a pin that can be used as SPI MOSI.
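Had a MOSI pin been available, expanding pixel data for that scheme would be straightforward (a sketch of the 3-SPI-bits-per-data-bit encoding described above; names are illustrative):

#include <stdint.h>

/* Expand one Neopixel data byte into 24 SPI bits at a 400ns bit clock:
 * each data bit becomes 100 (zero) or 110 (one).
 */
static void
neopixel_byte_to_spi(uint8_t byte, uint8_t spi[3])
{
    uint32_t out = 0;

    for (int bit = 7; bit >= 0; bit--) {
        out <<= 3;
        out |= (byte & (1 << bit)) ? 0x6 : 0x4;   /* 110 or 100 */
    }
    spi[0] = out >> 16;
    spi[1] = out >> 8;
    spi[2] = out;
}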

As a fallback, I tried using DMAC to toggle the GPIO outputs. Alas, on the SAMD21G part included in these boards, the DMAC controller can't actually write to the GPIO control registers. There's a missing connection inside the chip.

So, like all of the examples I found, I fell back to driving the GPIO registers directly with the processor, relying on a carefully written sequence of operations to get the timing within the tolerance required by the Neopixels. I have to disable interrupts during this process to avoid messing up the timing though.

Current Snek Neopixel API

I looked at the Circuit Python Neopixel API to see if there was anything I could adapt for Snek. That API uses 3-element tuples for the R,G,B values, and then places those in a list, one for each pixel in the chain. That seemed like a good idea. However, that API also has a lot of allocation churn, with new colors being created in newly allocated lists. Doing that with Snek would probably be too slow as Snek uses a garbage collector for allocation.

So, we'll allow mutable lists inside of a list or tuple, then Neopixel colors can be changed by modifying the value within the per-Neopixel lists.

Snek doesn't have objects, so we'll just create a function to send color data for a list of Neopixels out a pin. We'll use the existing Snek GPIO function, talkto, to select the pin. Finally, I'm using color values from 0-1 instead of 0-255 to make this API work more like the other analog interfaces.

> pixels = ([0.2, 0, 0],)
> talkto(NEOPIXEL)
> neopixel(pixels)

That makes the first Neopixel a not-quite-blinding red. Now we can turn it green with:

> pixels[0][0] = 0
> pixels[0][1] = 0.2
> neopixel(pixels)

You can, of course, use tuples like with Circuit Python:

> pixels = [(0.2, 0, 0)]
> talkto(NEOPIXEL)
> neopixel(pixels)
> pixels[0] = (0, 0.2, 0)
> neopixel(pixels)

This does allocate a new list though.

Snek on Circuit Playground Express

As you can see in the pictures above, Snek is running on the Adafruit Circuit Playground Express. This board has a bunch of built-in hardware. At this point, I've got the buttons, switches, lights and analog input sensors (temperature and light intensity) all working. I don't have the motion sensor or audio bits going. I'll probably leave those pieces until after Snek v1.0 has been released.

15 May, 2019 12:02AM

May 14, 2019

Louis-Philippe Véronneau

TLS SIP support on the Cisco SPA112 ATA

A few days ago, my SIP provider (the ever reliable VoIP.ms) rolled out TLS+SRTP support. As much as I like their service, it was about time.

Following their wiki, I was able to make my Android smartphone work with TLS. Sadly, the native Android SIP stack does not support TLS, so I had to migrate to Linphone. It's a small price to pay for greatly increased security, but the Linphone interface and its integration with the rest of the OS aren't as good.

I did have a lot of trouble getting my old Cisco SPA112 ATA working with TLS though. Although I set up the device correctly, I couldn't get it to register.

As always, the VoIP.ms support staff was incredibly helpful and reproduced the error I was getting in their lab[1]. Apparently, the trouble stems from the latest firmware (1.4.1 SR3). After downgrading to 1.4.1 SR1, I was able to have the device successfully register with TLS.

Note that since SRTP is mandatory with TLS on VoIP.ms's servers, you'll need to activate the Secure Call Serv option in the Line 1 menu and the Secure Call Setting in the User 1 menu, in addition to changing the protocol and the port.

If like me you had the device running a more recent firmware version and want to downgrade, you will have to disable the HTTPS web interface since the snakeoil certificate used interferes with the firmware upgrade process.

2019-05-14 update

One of the changes in 1.4.1 SR3 firmware is that the SPA112 now validates TLS certificates, as per issue CSCvm49157 in the release notes. The problem I had with being unable to register the device was being caused by a missing Let’s Encrypt root certificate in its certificate store.

Thanks to Michael Davie for pointing this out to me! It turns out VoIP.ms also did their job and updated their documentation to include a section on adding a new root CA cert to the device. Sadly, the link they provide on their wiki is a plain HTTP one. I'd recommend you use the LE Root CA directly: https://letsencrypt.org/certs/isrgrootx1.pem.txt

One last thing: if like me you wondered what the heck was the new beep beep sound during the call, it turns out it's the "Secure Call Indication Tone". You can turn it off by following these instructions.


  [1] Yes, you heard that right: they have a lab on hand with tons of devices so that they can help you debug your problems live.

14 May, 2019 11:41PM by Louis-Philippe Véronneau

Molly de Blanc

advice

Recently I was asked two very good questions about being involved in free/open source software: How do you balance your paid/volunteer activities? What sort of career advice do you have for people looking to get involved professionally?

I liked answering these, in part because I have very little to do with the software side, and also because, much like many technical volunteers, my volunteer and paid activities have been similar-to-identical over the years.

How do you balance paid/volunteer activities?

My answer at the time was, effectively: I set aside clearly defined time to work on my different activities, usually once a week — generally on Sundays. I check my email a few times a day, and respond to things that are immediate within a few hours, but I handle the bulk of my work at one time. The Anti-harassment team has a regularly scheduled meeting/work time during which we handle the bulk of our necessary labor. I’ve learned to say no, I’ve learned how to delegate, and I’ve learned how to say “I’m not going to be able to finish this thing I said I could do, how can we as a team make sure it gets completed.”

This works for me because 1) I’ve put a lot of work into developing my confidence and the skills needed for working collaboratively; and 2) my biggest responsibilities outside of my job (and free software, in general) are taking care of plants and having Bash. (Note: Bash is my cat.) I don’t have children or a partner. I have a band and climbing partners, but these things, much like my free software activities, are time constrained. My band meets for practice at the same time each week; I sneak in moments to play a song or run through scales during the rest of the week. I climb with the same people at the same times each week. With my fancy new job, I work remotely and am now even able to work at the climbing gym, and take little breaks to run through a few bouldering problems.

Because of all these factors — my limited and optional responsibilities towards others (I travel a lot for free software, and miss band practice and climbing sometimes, for example) — I have been able to take up leadership positions in Debian and the open source community at large. Because of my job, I was able to take on even more responsibility at the OSI. I’ve held leadership positions in my unpaid work for over ten years now, since I was a student and able to use my lack of responsibilities beyond my studies (and student job) to focus on helping to stack chairs for open source. (Note: “Stack chairs” is Molly for “perform often unseen labor, often for events.”)

As an aside, one of my criticisms about unpaid project/org leader positions is that it means that the people who can do the jobs are:

  • students
  • contractors
  • unemployed
  • those with few to no other responsibilities
  • those with very supportive partners
  • those with very supportive employers
  • those who don’t need much sleep

I’ve slowly been swayed into the belief that many (not all) leadership positions should be paid, grant funded, come with a stipend, or be led by committee. More on this in a future blog post.

In summary: learn to tell other people you can’t do things and work on those scheduling skillz.

What sort of career advice do you have for people looking to get involved professionally?

This question was asked in an evening of panels and one thing that really stood out to me was many of the panelists saying — in response to completely different questions — that they no longer cold apply for jobs, or that all of their jobs have come from social connections, or that they just don’t apply to jobs (and only go work for places where they have been given soft offers or are invited to go straight to an interview stage).

An acquaintance of mine once said to me: I don’t believe in luck, I believe in social connections.

Our social connections form complex causal graphs, which lead to many, if not all, of the good things that happen in our lives. I got my first job in free software not because of my cool skillz, but because I happened to hang out with a friend of someone who had a friend looking for an intern.

I actually have gotten (two) jobs where I cold applied — but in both cases the people were interested in me because of a certain social connection I had — whether they realized that or not. Even my job in college, at the school library, came because I had a friend who worked there.

Telling people to network really is general job advice that works for everyone in every field of endeavor.

If you’re an introvert (like myself!), one of the best ways to form social connections is through public speaking. When you give a talk at a conference, not only are you building up your personal brand and letting other people know about your skills, competency, and expertise, but you’re also giving people something to talk to you about — and they will talk to you. Giving a talk is like putting a sign on yourself saying “Come talk to me about X,” where X is something you’re actually passionate about. It’s great because you don’t have to put yourself out there to talk to strangers — strangers come to you!

Public speaking also increases your visibility in the community — this is good if you want a job. That way, when someone sees your CV/resume your name will stand out because they’ll remember seeing it before. They might not remember your talk, or maybe they didn’t even attend your talk, but they will remember seeing your name. Having a section on your CV that lists presentations you’ve given helps you stand out from everyone else because it shows you can share information well and are actually interested in what you do. Where you speak and have spoken is a shibboleth for where you see yourself in the community and what values you have: Seeing “Open Source Bridge” tells me that you’re interested in communities and building spaces where everyone is a welcome participant; OSCON and PyCon convey confidence because you know you’re opening yourself up to a potentially big audience; local meetups and conferences share a value of wanting to participate in and build up your local community; international events say that you really understand that we’re looking at a global scale here.

We also just learn to communicate better when we speak publicly. We learn better ways to share ideas, how to turn thoughts into a cohesive narrative, and how to appear confident even if we’re not.

Building off of that, learning to write is extremely, extremely important. There is an art to written communication, whether it’s a brief letter between colleagues, presentations, comments in code or other documentation, blog posts, cover letters, etc. Communicating well through writing will take you so far, especially as more jobs, especially in tech, become increasingly focused on using chat tools for collaboration.

All of the things that are true for public speaking are also true for writing well: it helps you become a recognized and valued member of your community. When I was a community manager, I loved the developers (and translators and doc writers and…) who were interested in writing blog posts, participating in community Q&A/round table sessions, etc., because they were the ones who made us an approachable project, who made us a great place that included people whether they were getting paid to work on the project or not.

Anyone can learn to be a passable developer (or fill in your specific role here), and anyone can learn to be a passable writer. Someone who chooses to do both is special, and is exactly who I want on my team.

In summary: Learn to talk to strangers, learn public speaking, learn to write.

The people asking me these questions were, I believe, developers or at least people with technical skill sets rather than administrative, community, organizational, social, etc skill sets. This advice holds true across the spectrum of paid labor.

One person came up to me later and explained that they had been working in generating content, but wanted to switch to a more organizational role managing the aggregation and sharing of content. They asked me how they could make that transition, and my advice was exactly the same: learn to talk to people so you can learn who has opportunities and learn to communicate well because it will help you stand out and also just make your life a lot easier.

I’d also like to briefly point out that ehash gave some great answers geared towards technical roles, which I hope they will share in some public forum.

14 May, 2019 04:05PM by mollydb

May 13, 2019

Benjamin Mako Hill

The Shifting Dynamics of Participation in an Online Programming Community

Informal online learning communities are one of the most exciting and successful ways to engage young people in technology. As the most successful example of the approach, over 40 million children from around the world have created accounts on the Scratch online community where they learn to code by creating interactive art, games, and stories. However, despite its enormous reach and its focus on inclusiveness, participation in Scratch is not as broad as one would hope. For example, reflecting a trend in the broader computing community, more boys have signed up on the Scratch website than girls.

In a recently published paper, I worked with several colleagues from the Community Data Science Collective to unpack the dynamics of unequal participation by gender in Scratch by looking at whether Scratch users choose to share the projects they create. Our analysis took advantage of the fact that less than a third of projects created in Scratch are ever shared publicly. By never sharing, creators never open themselves to the benefits associated with interaction, feedback, socialization, and learning—all things that research has shown participation in Scratch can support.

Overall, we found that boys on Scratch share their projects at a slightly higher rate than girls. Digging deeper, we found that this overall average hid an important dynamic that emerged over time. The graph below shows the proportion of Scratch projects shared for male and female Scratch users’ 1st created projects, 2nd created projects, 3rd created projects, and so on. It reflects the fact that although girls share less often initially, this trend flips over time. Experienced girls share much more often than boys!

Proportion of projects shared by gender across experience levels, measured as the number of projects created, for 1.1 million Scratch users. Projects created by girls are less likely to be shared than those by boys until about the 9th project is created. The relationship is subsequently reversed.

We unpacked this dynamic using a series of statistical models estimated using data from over 5 million projects by over a million Scratch users. This set of analyses echoed our earlier preliminary finding—while girls were less likely to share initially, more experienced girls shared projects at consistently higher rates than boys. We further found that initial differences in sharing between boys and girls could be explained by controlling for differences in project complexity and in the social connectedness of the project creator.

Another surprising finding is that users who had received more positive peer feedback, at least as measured by receipt of “love its” (similar to “likes” on Facebook), were less likely to share their subsequent projects than users who had received less. This relation was especially strong for boys and for more experienced Scratch users. We speculate that this could be due to a phenomenon known in the music industry as “sophomore album syndrome” or “second album syndrome”—a term used to describe a musician who has had a successful first album but struggles to produce a second because of increased pressure and expectations caused by their previous success.


This blog post (published first on the Community Data Science Collective blog) and the paper are collaborative work with Emilia Gan and Sayamindu Dasgupta. You can find more details about our methodology and results in the text of our paper, “Gender, Feedback, and Learners’ Decisions to Share Their Creative Computing Projects” which is freely available and published open access in the Proceedings of the ACM on Human-Computer Interaction 2 (CSCW): 54:1-54:23.

13 May, 2019 11:09PM by Benjamin Mako Hill

Holger Levsen

Some beds, some talk slots and many seats still available for the Mini-DebConf in Hamburg in June 2019

Moin!

We still have 14 affordable beds available for the MiniDebConf Hamburg 2019, which will take place in Hamburg (Germany) from June 5 to 9, with three days of DebCamp-style hacking, followed by two days of talks, workshops and more hacking. If you were unsure about coming because of accommodation, please reconsider and come around! (And please mail me directly if you would like to sleep in a bed on site.)

It's going to be awesome. You should all come! Register now!

Moar talks wanted

We also would like to receive more talk submissions at cfp@minidebconfhamburg.debian.net - please consider presenting your work. The DebConf videoteam will be present to preserve your presentation :)

Suggested topics include:

  • Packaging
  • Security
  • Debian usability
  • Cloud and containers
  • Automating with Debian
  • Debian social
  • New technologies & infrastructure (gitlab, autopkgtest, dgit, debomatic, etc)

This list is not exhaustive, and anything not listed here is welcome, as long as it's somehow related to Debian. If in doubt, propose a talk and we'll give feedback.

We will have talks on Saturday and Sunday, the exact slots are yet to be determined. We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Moar info

Is available on the event wiki page.

Looking forward to seeing you in Hamburg!

13 May, 2019 08:28PM

Joachim Breitner

Artsy desktop background

Friends of mine took part in a competition where they had to present an art project of theirs using a video. At some point we had the plan of creating a time lapse video of a drawing being created, and for that mounted a camera above the drawing desk.

With paper

In the end we did not actually use the video, but it turns out that the still from the beginning (with blank paper) and the end of the video (no paper) are pretty nice, too. So I am sharing them here, in case anyone wants to use them as a desktop background or what not.

Without paper

Feel free to re-use these photos under the terms of the Creative Commons Attribution 4.0 International License.

13 May, 2019 06:37AM by Joachim Breitner (mail@joachim-breitner.de)

Dirk Eddelbuettel

RcppAnnoy 0.0.12

A new release of RcppAnnoy is now on CRAN.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.
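
In case you haven't run into Annoy before, here is roughly what it does, sketched with Annoy's upstream Python bindings (not RcppAnnoy itself; the R interface is similar in spirit):

# Build an index of random 40-dimensional vectors and query it.
# Requires the "annoy" package from PyPI.
import random
from annoy import AnnoyIndex

f = 40                               # vector dimensionality
index = AnnoyIndex(f, "angular")     # angular, i.e. cosine-like, distance
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])
index.build(10)                      # 10 trees; more trees, better recall
print(index.get_nns_by_item(0, 5))   # 5 approximate neighbours of item 0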

This release brings several updates: seed settings follow up on changes in the previous release 0.0.11, and are now also documented in the vignette thanks to James Melville; more documentation was added thanks to Adam Spannbauer; unit tests now use the brand-new tinytest package; and vignette building was decoupled from package building. All the changes in this version are summarized with appropriate links below:

Changes in version 0.0.12 (2019-05-12)

  • Allow setting of seed (Dirk in #41 fixing #40).

  • Document setSeed (James Melville in #42 documenting #41).

  • Added documentation (Adam Spannbauer in #44 closing #43).

  • Switched unit testing to the new tinytest package (Dirk in #45).

  • The vignette is now pre-made and included as-is in a Sweave document, reducing the number of suggested packages.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 May, 2019 01:38AM

May 11, 2019

Bastian Venthur

Dotenv CLI

As a small weekend project I wrote my own dotenv CLI. Dotenv-CLI is a simple package that provides the dotenv command. It reads the .env file from the current directory, puts the contents in the environment, and executes the given command.

Example .env file:

BASIC=basic basic
export EXPORT=foo
EMPTY=
INNER_QUOTES=this 'is' a test
INNER_QUOTES2=this "is" a test
TRIM_WHITESPACE= foo
KEEP_WHITESPACE="  foo  "
MULTILINE="multi\nline"
# some comment

becomes:

$ dotenv env
BASIC=basic basic
EXPORT=foo
EMPTY=
INNER_QUOTES=this 'is' a test
INNER_QUOTES2=this "is" a test
TRIM_WHITESPACE=foo
KEEP_WHITESPACE=  foo
MULTILINE=multi
line

where env is a simple command that outputs the current environment variables. A dotenv CLI comes in handy if you follow the 12 factor app methodology or just need to run a program locally with specific environment variables set.
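
For the curious, the core logic can be sketched in a few lines of Python (an illustrative sketch only, not the actual dotenv-cli source):

import os
import sys

def parse_line(line):
    """Parse one KEY=value line, following the rules shown above."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None                         # skip blanks and comments
    if line.startswith("export "):          # tolerate shell-style export
        line = line[len("export "):]
    key, _, value = line.partition("=")
    value = value.strip()                   # trim unquoted whitespace
    if len(value) >= 2 and value[0] == value[-1] == '"':
        value = value[1:-1].replace("\\n", "\n")  # quoted: keep spaces
    return key.strip(), value

env = dict(os.environ)
with open(".env") as f:
    for raw in f:
        parsed = parse_line(raw)
        if parsed:
            env[parsed[0]] = parsed[1]
os.execvpe(sys.argv[1], sys.argv[1:], env)  # replace ourselves with the command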

While dotenv-cli is certainly not the first dotenv implementation, this one is written in Python and has no external dependencies except Python itself. It also provides bash completion, so you can prefix any command with dotenv while still being able to use completion:

$ dotenv make <TAB>
all      clean    docs     lint     release  test

Since there are also some other popular shells out there, and I already struggled writing the simple bash completion script, it would be very nice if some more experienced zsh, fish, etc. users could help me out by providing completions for them as well.

dotenv-cli is available on Debian and Ubuntu:

$ sudo apt-get install python3-dotenv-cli

and on PyPi as well:

$ pip install dotenv-cli

11 May, 2019 10:00PM by Bastian Venthur

Dirk Eddelbuettel

RcppArmadillo 0.9.400.3.0

The recent 0.9.400.2.0 release of RcppArmadillo required a bug fix release. Conrad followed up on Armadillo 9.400.2 with 9.400.3, which we packaged (and tested extensively as usual). It is now on CRAN and will get to Debian shortly.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 597 other packages on CRAN.

A brief discussion of possible issues under 0.9.400.2.0 is at this GitHub issue ticket. The list of changes in 0.9.400.3.0 is below:

Changes in RcppArmadillo version 0.9.400.3.0 (2019-05-09)

  • Upgraded to Armadillo release 9.400.3 (Surrogate Miscreant)

    • check for symmetric / hermitian matrices (used by decomposition functions) has been made more robust

    • linspace() and logspace() now honour requests for generation of vectors with zero elements

    • fix for vectorisation / flattening of complex sparse matrices

The previous changes in 0.9.400.2.0 were:

Changes in RcppArmadillo version 0.9.400.2.0 (2019-04-28)

  • Upgraded to Armadillo release 9.400.2 (Surrogate Miscreant)

    • faster cov() and cor()

    • added .as_col() and .as_row()

    • expanded .shed_rows() / .shed_cols() / .shed_slices() to remove rows/columns/slices specified in a vector

    • expanded vectorise() to handle sparse matrices

    • expanded element-wise versions of max() and min() to handle sparse matrices

    • optimised handling of sparse matrix expressions: sparse % (sparse +- scalar) and sparse / (sparse +- scalar)

    • expanded eig_sym(), chol(), expmat_sym(), logmat_sympd(), sqrtmat_sympd(), inv_sympd() to print a warning if the given matrix is not symmetric

    • more consistent detection of vector expressions

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 May, 2019 05:49PM

Bits from Debian

New Debian Developers and Maintainers (March and April 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Baptiste Favre (jbfavre)
  • Andrius Merkys (merkys)

The following contributors were added as Debian Maintainers in the last two months:

  • Christian Ehrhardt
  • Aniol Marti
  • Utkarsh Gupta
  • Nicolas Schier
  • Stewart Ferguson
  • Hilmar Preusse

Congratulations!

11 May, 2019 02:35PM by Jean-Pierre Giraud

Ian Jackson

Rust doubly-linked list

I have now released (and published on crates.io) my doubly-linked list library for Rust.

Of course in Rust you don't usually want a doubly-linked list. The VecDeque array-based double-ended queue is usually much better. I discuss this in detail in my module's documentation.

Why a new library


Weirdly, there is a doubly linked list in the Rust standard library but it is good for literally nothing at all. Its API is so limited that you can always do better with a VecDeque. There's a discussion (sorry, requires JS) about maybe deprecating it.

There's also another doubly-linked list available, but despite being an 'intrusive' list (in C terminology) it only supports one link per node, and insists on owning the items you put into it. I needed several links per node for my planar graph work, and I needed Rc-based ownership.

Indeed given my analysis of when a doubly-linked list is needed, rather than a VecDeque, I think it will nearly always involve something like Rc too.

My module


You can read the documentation online.

It provides the facilities I needed, including lists where each node can be on multiple lists with runtime selection of the list link within each node. It's not threadsafe (so Rust will stop you using it across multiple threads) and would be hard to make threadsafe, I think.

Notable wishlist items: entrypoints for splitting and joining lists, and good examples in the documentation. Both of these would be quite easy to add.

Further annoyance from Cargo


As I wrote earlier, because I am some kind of paranoid from the last century, I have hit cargo on the head so that it doesn't randomly download and run code from the internet.

This is done with stuff in my ~/.cargo/config. Of course this stops me actually accessing the real public repository (cargo automatically looks for .cargo/config in all parent directories, not just in $PWD and $HOME). No problem - I was expecting to have to override it.
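
(I'm guessing at the mechanism here, since the post doesn't show the file; cargo's source replacement is one way to do this, and a config along these lines, with a made-up "vendored" name and path, would stop cargo talking to the network:)

# ~/.cargo/config sketch; assumes source replacement is the mechanism used
[source.crates-io]
replace-with = "vendored"

[source.vendored]
# A plain directory of vendored crates; cargo will only look here.
directory = "/path/to/vendored-crates"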

However there is no way to sensibly override a config file!

So I have had to override it in a silly way: I made a separate area on my laptop which belongs to me but which is not underneath my home directory. Whenever I want to run cargo publish, I copy the crate to be published to that other area, which is not a direct or indirect subdirectory of anything containing my usual .cargo/config.

Cargo really is quite annoying: it has opinions about how everything is and how everything ought to be done. I wouldn't mind that, but unfortunately when it happens to be wrong it is often lacking a good way to tell it what should be done instead. This kind of thing is a serious problem in a build system tool.

Edited 2019-05-11: minor grammar fix.


11 May, 2019 11:14AM

Rust doubly-linked list, redux

I have declared rc-dlist-deque, my doubly-linked list library for Rust, to be 1.0.0. Little has changed, apart from the version number and some documentation updates.

In particular, I thought I would expand on my previous comments to the effect that you don't want a doubly linked list in Rust.

I've added a survey of the existing doubly linked list crates. (Please click through; I would prefer not to put a copy here in this blog which I would then also have to update if I update the table...)

Most of these crates, sadly, are not really useful. Perhaps people have been publishing their training exercises? (A doubly linked list makes a really bad Rust training exercise, too...)

11 May, 2019 12:04AM

May 10, 2019

Keith Packard

Snek and the Amusement Park

Here's an update to my previous post about Snek in a balloon. We also hooked up a Ferris wheel and controlled them both with the same Arduino Duemilanove compatible board. This one has sound so you can hear how quiet the new Circuit Cube motors are.

10 May, 2019 11:45PM

Jonathan Carter

#debian-meeting revival

Picture: Wasps participating in a BoF during DebConf15 in Heidelberg.

As part of my DPL campaign I suggested that we have more open community meetings, and also suggested that we have more generic open team meetings in a well-known public channel. Fortunately, that idea doesn’t really need a DPL to implement it, and on top of that our new DPL (Sam Hartman) supports the initiative. We do have a #debian-meeting IRC channel that’s been dormant for years, so we’re reviving that for these kind of meetings.

Today we had our first session; it was the first meeting on that channel since 2011 (almost 8 years!). The topic was “Meet the new DPL and ask him anything!”. It was announced on some of the Debian channels, most notably on Bits from Debian. I played it careful by not announcing it too widely, because we don’t yet have much in the way of moderation, and I think it would’ve been tough if we had to deal with many trolls. This was also really early for people in the Americas (6am East Coast), so future sessions will be staggered across different times and days of the week. The session was a bit quieter than I expected, but Sam gave really nice answers and I learned a few new things, so it all worked out OK. I would rather start small and build on it than have it be too chaotic and a mess.

In 2017 I started a community channel called #debian-til (TIL standing for “Today I Learned”). The idea is that people share interesting Debian-related things that they have learned. It started with a handful of people and took a year to grow to a hundred, but I’m very happy with how that worked out and how the culture of that channel has evolved. I’m hoping that #debian-meeting can also grow and evolve into something useful and fun for our community, instead of being only a channel to schedule meetings in.

You can view the full logs of the meeting stored by meetbot in html or plain text.

Upcoming meetings:

Date and time          Title
2019-06-03 12:00 UTC   Learn all about Debian-Edu
2019-06-17 19:00 UTC   Brainstorming for 100 paper cuts project kick-off

More details about both those meetings should be available soon. For the latest information, refer to the Debian Meeting wiki page. You can also subscribe to Bits from Debian (RSS/Atom) for the latest community news.

Anyone can schedule a meeting on the debian-meeting wiki page, so if you’re considering using it for a team meeting or any other kind of session idea, then please go ahead and do so!

10 May, 2019 02:40PM by jonathan

Molly de Blanc

OSI Update: May 2019

A brick building with a wooden sign that says "Come in we're open source!"

At the most recent Open Source Initiative face-to-face board meeting I was elected president of the board of directors. In the spirit of transparency, I wanted to share a bit about my goals and my vision for the organization over the next five years. These thoughts are my own, not reflecting official organization policy or plans. They do not speak to the intentions nor desires of other members of the board. I am representing my own thoughts, and where I’d like to see the future of the OSI go.

A little context on the OSI

You can read all about the history of “open source” and the OSI, so I will spare you the history lesson for now. I believe the following are the organization’s main activities:

There are lots of other things the OSI does, to support the above activities and in addition to them. As I mentioned in my 2019 election campaign, most of what we do vacillates between “niche interesting” and “onerous,” with “boring” and “tedious” also on that list. We table at events, give talks, write and approve budgets, answer questions, have meetings, maintain our own pet projects, read mailing lists, keep up with the FLOSS/tech news, tweet, host events, and do a number of other things I am inevitably forgetting.

The OSI, along with the affiliate and individual membership, defines the future of open source, through the above activities and then some.

Why I decided to run for president

I’ve been called an ideologue, an idealist, a true believer, a wonk, and a number of other things — flattering, embarrassing, and offensive — concerning my relationship to free and open source software. I recently said that “user freedom is the hill I will die on, and let the carrion birds feast on my remains.” While we are increasingly discussing the ethical considerations of technology, we also need to raise awareness of the ways user freedom and software freedom are entwined with the ethical considerations of computing. These philosophies need to be in the foundational design of all modern technologies in order for us to build technology that is ethical.

I have a vision for the way the OSI should fit into the future of technology, I think it’s a good vision, and I thought that being president would be a good way to help move that forward. It also gave me a very concrete and candid opportunity to share my hopes for the present and the future with my fellow board directors, to see where they agree and where they dissent, and to collaboratively build a cohesive organizational mission.

So, what is my vision?

I have two main goals for my presidency: 1) strategic growth of the organization while encouraging sustainability and 2) re-examining and revising the license approval process where necessary.

I have a five point list of things I would like to see be true for the OSI over the next five years:

  • Organizational relevance: The OSI should continue its important mission of stewarding the OSD, the license list, and the integrity of the term open source.
  • Provide expert guidance on open source: Have others approach us for opinions and advice, and be looked to as an authority on issues and questions.
  • Coordinate contact within the community: Have a role connecting people with others within the community in order to share expertise and become better open source citizens.
  • A clear, effective license approval process: Have a clear licensing process, comprised of experts in the field of licensing, with a range of opinions and points of view, in order to create and maintain a healthy list of open source licenses.
  • Support growing projects: Provide strategic assistance wherever OSI is best placed to do so. For example, providing fiscal sponsorship where we are uniquely qualified to help a project flourish.

An additional disclaimer

As I mentioned above, these are my thoughts and opinions, and do not represent plans for the organization nor the opinions of the rest of the board. They are some things I think would be nice to see. After all, according to the bylaws my actual privileges and responsibilities as president are to “preside over all board meetings,” accept resignations, and call “special meetings.”

10 May, 2019 02:11PM by mollydb

Jonathan Dowland

Debian Buster and Wayland

The next release of Debian OS (codename "Buster") is due very soon. It's currently in deep freeze, with no new package updates permitted unless they fix Release Critical (RC) bugs. The RC bug count is at 123 at the time of writing: this is towards the low end of the scale, consistent with being at a late stage of the freeze.

As things currently stand, the default graphical desktop in Buster will be GNOME, using the Wayland desktop technology. This will be the first time that Debian has defaulted to Wayland, rather than Xorg.

For major technology switches like this, Debian has traditionally taken a very conservative approach to adoption, with a lot of reasoned debate by lots of developers. The switch to systemd by default is an example of this (and here's one good example of LWN coverage of the process we went through for that decision).

Switching to Wayland, however, has not gone through a process like this. In fact it's happened as a result of two entirely separate decisions:

  1. The decision that the default desktop environment for Debian should be GNOME (here's some notes on this decision being re-evaluated for Jessie, demonstrating how rigorous this was)

  2. The GNOME team's decision that the default GNOME session should be Wayland, not Xorg, consistent with upstream GNOME.

In isolation, decision #2 can be justified in a number of ways: within the limited scope of the GNOME desktop environment, Wayland works well; the GNOME stack has been thoroughly tested; and it's now the default upstream.

But in a wider context than just the GNOME community, there are still problems to be worked out. This all came to my attention because for a while the popular Synaptic package manager was to be ejected from Debian for not working under Wayland. That bug has now been worked around to prevent removal (although it's still not functional in a Wayland environment). Tilda was also at risk of removal under the same rationale, and there may be more such packages that I am not aware of.

In the last couple of weeks I switched my desktop over to Wayland in order to get a better idea of how well it worked. It's been a mostly pleasant experience: things are generally very good, and I'm quite excited about some of the innovative things that are available in the Wayland ecosystem, such as the Sway compositor/window manager and interesting experiments like a re-implementation of Plan 9's rio called wio. However, in this short time I have hit a few fairly serious bugs, including #928030 (desktop and session manager lock up immediately if the root disk fills) and #928002 (Drag and Drop from Firefox to the file manager locks up all X-based desktop applications) that have led me to believe that things are not well integrated enough — yet — to be the default desktop technology in Debian. I believe that a key feature of Debian is that we incorporate tools and technologies from a very wide set of communities, and you can expect to mix and match GNOME apps with KDE ones or esoteric X-based applications, old or new, or terminal-based apps, etc., to get things done. That's at least how I work, and one of the major attractions of Debian as a desktop distribution. I argue this case in #927667.

I think we should default to GNOME/Xorg for Buster, and push to default to Wayland for the next release. If we are clear that this is a release goal, hopefully we can get wider project engagement and testing and ensure that the whole Debian ecosystem is more tightly integrated and a solid experience.

If you are running a Buster-based desktop now, please consider trying GNOME/Wayland and seeing whether the things you care about work well in that environment. If you find any problems, please file bugs, so we can improve the experience, no matter the outcome for Buster.

10 May, 2019 10:19AM

Ricardo Mones

Did you know...

...that cowbuilder raises a Spanish flag when it fails?
cowbuilder create error

;-)

10 May, 2019 09:21AM by mones

May 09, 2019

Thomas Goirand

OpenStack-cluster-installer in Buster

I’ve been working on this for more than a year, and finally, I am achieving my goal. I wrote an OpenStack cluster installer that is fully in Debian, and it is running in production for Infomaniak.

Note: I originally wrote this blog post a few weeks ago, though it was pending validation from my company (to make sure I wouldn’t disclose company business information).

What is it?

As per the package description and the package name, OCI (OpenStack Cluster Installer) is a software to provision an OpenStack cluster automatically, with a “push button” interface. The OCI package depends on a DHCP server, a PXE (tftp-hpa) boot server, a web server, and a puppet-master.

Once computers in the cluster boot for the first time over the network (PXE boot), a Debian live squashfs image is served by OCI (via Apache) to act as a discovery image. This live system then reports the hardware features of the booted machine back to OCI (CPU, memory, HDDs, network interfaces, etc.). The computers can then be installed with Debian from that live system. During this process, a puppet-agent is configured so that it will connect to the puppet-master of OCI. Upon first boot, OpenStack services are then installed and configured, depending on the server’s role in the cluster.

OCI is fully packaged in Debian, including all of the Puppet modules and so on. Just doing “apt-get install openstack-cluster-installer” is enough to bring in absolutely all dependencies; no other artifacts are needed. This is very important: one only needs a local Debian mirror to install an OpenStack cluster, and no external components need to be downloaded from the internet.

OCI setting-up a Swift cluster

At the beginning of OCI’s life, we first used it at Infomaniak (my employer) to set up a Swift cluster. Swift is the object server of OpenStack. It is a perfect solution for a (very) large backup system.

Think of a massive highly available cluster, with a capacity reaching petabytes, storing millions of objects/files 3 times (for redundancy). Swift can virtually scale to infinity as long as you size your ring correctly.

The Infomaniak setup is also redundant at the data center level, as our cluster spans 2 data centers, with at least one copy of everything stored in each data center (the location of the 3rd copy depends on many things, and explaining it is not in the scope of this post).

If one wishes to use Swift, it’s OK to start with 7 machines: 3 machines for the controllers (holding the Keystone authentication, and a bit more), at least 1 swift-proxy machine, and 3 storage nodes. Though for redundancy purposes, it is IMO not good enough to start with only 3 storage nodes: if one fails, the proxy server will fall into timeouts waiting for the 3rd storage node, so 6 storage nodes feels like a better minimum. They don’t have to be top-notch servers, though: a cluster made of refurbished old hardware with only a few disks can do it, if you don’t need to store too much data.

Setting-up an OpenStack compute cluster

Though Swift was the first thing OCI did for us, it can now do way more than just Swift. Indeed, it can also set up a full OpenStack cluster with Nova (compute), Neutron (networking) and Cinder (network block devices). We also started using all of that, set up by OCI, at Infomaniak. Here’s the list of services currently supported:

  • Keystone (identity)
  • Heat (orchestration)
  • Aodh (alarming)
  • Barbican (key/secret manager)
  • Nova (compute)
  • Glance (VM images)
  • Swift (object store)
  • Panko (event)
  • Ceilometer (resource monitoring)
  • Neutron (networking)
  • Cinder (network block device)

On the backend, OCI can use LVM or Ceph for Cinder, local storage or Ceph for Nova instances.

Full HA redundancy

The nice thing is, absolutely every component set up by OCI is done in a high-availability way. Each machine of the OpenStack control plane is set up with an instance of the components: all OpenStack controller components, a MariaDB server that is part of the Galera cluster, etc.

HAProxy is also set up on all controllers, in front of all of the REST API servers of OpenStack. And finally, the web address where final clients will connect is in fact a virtual IP that can move from one server to another, thanks to corosync. Routing to that VIP can be done either over L2 (ie: a static address on a local network), or over BGP (useful if you need multi-data-center redundancy). So if one of the controllers is down, it’s not such a big deal: HAProxy will detect this within seconds, and if it was the server that had the virtual IP (matching the API endpoint), then this IP will move to one of the other servers.

Full SSL transport

One of the things that OCI does when installing Debian is set up a PKI (ie: SSL certificates signed by a local root CA) so that everything in the cluster is transported over SSL. HAProxy, of course, does the SSL, but it also connects to the different API servers over SSL too. All connections to the RabbitMQ servers are also performed over SSL. If one wishes, it’s possible to replace the self-signed SSL certificates before the cluster is deployed, so that the OpenStack API endpoint can be exposed on a public address.

OCI as a quite modular system

If one decides to use Ceph for storage, then for every compute node of the cluster, it is possible to choose either Ceph for the storage of /var/lib/nova/instances, or local storage. In the latter case, using RAID is strongly advised, to avoid any possible loss of data. It is possible to mix both types of compute node storage in a single cluster, and to create server aggregates so it is later possible to decide which type of compute server to run a workload on.

If a Ceph cluster is part of the setup, then the cinder-volume and cinder-backup services will be provisioned on every compute node. They are used to control the Cinder volumes of the Ceph cluster. Even though the network block storage itself will not run on the compute machines, this makes sense: the number of these processes needs to scale at the same time as the number of compute nodes. Also, on compute servers, the Ceph secret is already set up using libvirt, so it was convenient to re-use this.

As for Glance, if you have Ceph, it will use that as its backend. If not, it will use Swift. And if you don’t have a Swift cluster either, it will fall back to the normal file backend, with a simple rsync from the first controller to the others. In such a setup, only the first controller is used for glance-api. The other controllers also run glance-api, but HAProxy doesn’t use them, as we really want the images to be stored on the first controller so they can be rsynced to the others. In practice, this is not such a big deal, because the images are in the cache of the compute servers anyway when in use.

If one sets up Cinder volume nodes, then cinder-volume and cinder-backup will be installed there, and the system will automatically know that there’s Cinder with an LVM backend. Both Cinder over LVM and over Ceph can be set up on the same cluster (I never really tried this, though I don’t see why it wouldn’t work; normally both backends will simply be available).

OCI in Buster vs current development

Lots of new features are being added to OCI. These, unfortunately, won’t make it to Buster, though the Buster release has just enough to be able to provision a working OpenStack cluster.

Future features

What I envision for OCI is to make it able to provision a cluster ready to serve as a public cloud. This means having all of the resource accounting set up, as well as CloudKitty (OpenStack’s resource rating engine). I’ve already played a bit with this, and it should be out fast. Then the only missing bit before going public will be billing of the rated resources, which, obviously, has to be done in-house, and doesn’t need to live within the OpenStack cluster itself.

The other thing I am planning to do is add more and more services. Currently, even though OCI can set up a fully working OpenStack, it is still a basic one. I do want to add advanced features like Octavia (load balancer as a service), Magnum (Kubernetes cluster as a service), Designate (DNS), Manila (shared filesystems) and much more if possible. The number of available projects is really big, so it will probably keep me busy for a very long time.

At this point, what OCI also misses is a custom Debian installer ISO image that would include absolutely everything. It shouldn’t be hard to write, though I lack the basic knowledge of how to do this. Maybe I will work on it at this summer’s DebConf. In the end, it could be a Debian pure blend (ie: a fully integrated distro-in-the-distro system, just like debian-edu or debian-med). It’d be nice if this ISO image could include all of the packages for the cluster, so that no external resources would be needed. Setting up an OpenStack cluster with no internet connectivity at all would then become possible. In fact, only the API endpoint on port 443 and the virtual machines need internet access; your management network shouldn’t be connected at all (it’s much safer this way).

No, there weren’t 80 engineers that burned out in the process of implementing OCI

One thing that makes me proud is that I wrote all of my OpenStack installer nearly alone (truth: it leverages all the work of puppet-openstack, and it wouldn’t have been possible without it…). That’s unique in the (small) OpenStack world. At companies like my previous employer, or the famous company working on RPM-based distros, this kind of product is the work of dozens of engineers. I heard that Red Hat has nearly 100 employees working on TripleO. This was possible because I tried to keep OCI in the spirit of “keep it simple stupid”. It does only what’s needed, implemented in the simplest way possible, so that it is easy to maintain.

For example, the hardware discovery agent is made of 63 lines of ISO shell script (that is: not even bash… but dash), while I’ve seen others using really over-engineered stuff, like heavy Ruby or Python modules. Ironic-inspector, for example, in the Rocky release, is made of 98 files, for a total of 17974 lines. I really wonder what they are doing with all of this (I didn’t dare to look). Of one thing I’m sure: what I did is really enough for OCI’s needs, and I don’t want to run a 250+ MB initrd as the discovery system. OCI’s live-build based discovery image, loaded over the web rather than PXE, is way smarter.

In the same spirit, the part that does the bare-metal provisioning is the same shell script that I wrote to create the official Debian OpenStack images. It was about 700 lines of shell script to install Debian on a .qcow2 image; it’s now about 1500 lines, and made of a single file. That’s the smallest footprint you’ll ever find, yet it still does all that’s needed, and probably even more.

In comparison, Fuel had a super-complicated scheduler, written in Ruby, used to provision a full cluster with a single click of a button. There’s no such thing in OCI, because I believe that’s a useless gadget. With OCI, a user simply needs to remember the order for setting up a cluster: Cephmon nodes need to be set up first, then CephOSD nodes, then controllers, then finally, in no particular order, the computes, swiftproxy, swiftstore and volume nodes. That’s really not a big deal to leave to the final user, as it is not expected that one will set up multiple OpenStack clusters every day. And even so, if you use the “ocicli” tool, it shouldn’t be hard to script these final bits of the automation. But I would consider this a useless gadget.

While every company has jumped into the microservices-in-containers thing, even now I continue to believe this is useless, and mostly driven by marketing people who need to sell features. Running OpenStack directly on bare metal is already hard, and the complexity added by running OpenStack services in Docker is useless: it doesn’t bring any feature. I’ve been told that it makes upgrades easier; I very much doubt it. Upgrades are complex for other reasons than just upgrading the running services themselves: one needs to upgrade the cluster components in a given order, and scheduling this isn’t easy.

So this is how I managed to write an OpenStack installer alone, in less than a year, without compromising on features: I wrote things simply, and avoided the over-engineering I saw at all levels in other products.

OpenStack Stein is coming

I’ve just pushed to Debian Experimental, and to https://buster-stein.debian.net/debian, the latest release of OpenStack (code name: Stein), which was released upstream on the 10th of April (yesterday, as I write these lines). I’ve been able to install Stein on top of Debian Buster, and I could start VMs on it: it’s all working as expected after a few changes in the puppet manifests of OCI. What’s needed now is testing upgrades from Stretch + Rocky to Buster + Stein. Normally, puppet-openstack can do that. Let’s see…

Want to know more?

Read on… the README.md is on https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer

Last words, last thanks

This concludes a bit more than a year of development. All of this wouldn’t have been possible without my employer, Infomaniak, giving me total freedom in the way I implement things for going into production. So a big thanks to them, and also for being a platinum sponsor for this year’s DebConf in Brazil.

Also a big thanks to the whole of the OpenStack project, including (but not limited to) the Infra team and the puppet-openstack team.

09 May, 2019 02:53PM by Goirand Thomas

Michal Čihař

Weblate blog moved

I've been publishing updates about Weblate on my blog for the past seven years. Now the project has grown up enough to deserve its own space to publish posts. Please update your bookmarks and RSS readers to the new location directly on the Weblate website.

The Weblate website will receive more updates in the upcoming weeks; I'm really looking forward to these.

New address for Weblate blog is https://weblate.org/news/.

New address for the RSS feed is https://weblate.org/feed/.

Filed under: Debian English SUSE Weblate

09 May, 2019 12:30PM

Norbert Preining

TeX Live 2019 in Debian

About one week ago we released upstream TeX Live 2019, and after a few iterations fixing packaging bugs, the Debian experimental suite now has TeX Live packages based on 2019.

All the changes listed in the upstream release blog apply also to the Debian packages, but we have rebuilt binaries from the sources in current svn, which means there are several fixes for dvipdfmx, and updates to the ptex family of engines.

The currently available versions in experimental are

There are also updates to texworks and texinfo available in experimental.

As soon as buster is released and sid is open for updates again, these packages will be uploaded to unstable/sid.

Enjoy.

09 May, 2019 01:08AM by Norbert Preining

May 08, 2019

Gunnar Wolf

Made with Creative Commons (Spanish translation): Copyedits done!

Uff!
Remember almost two years ago I announced on this same blog I would start coordinating a translation effort for the (excellent!) Made with Creative Commons book into Spanish? Having made the very wise decision to choose Weblate as our translation platform, and with the collaboration of people from all over Latin America, we amazingly reached 100% translated strings only four months later! Not only that — other languages were also started, and Norwegian (coordinated by Petter Reinholdtsen) also reached 100%.

But editing a book is not just a matter of translating it. In my case, as I publish via the National University, the translation had to undergo peer review, as any university-published book would, which took several months (!)
Once we got academic approval for the University to host the edition, resources were approved for our editors to do the style correction reading. And, of course, with us being so geographically diverse, our linguistic styles were really not coherent. Some ideological issues also appear in the resulting text, and become easily apparent. Plus, not all of us are in the habit of writing — and it also shows.

So, the copyediting process was long and painful for our readers and for me, who incorporated their comments into the source. Oh — Eat your own dogfood: Given we did our translation based on a nice and nifty gettext+DocBook environment... Well, gettext is meant for programming, not for whole texts. I basically did all the copyediting by opening the .po file as plain text. Surprisingly, I broke things very few times!

The process still has many stops on the horizon. But at least I already finished a huge chunk of the pending work. I am happy! ☺


08 May, 2019 11:07PM by gwolf

Mike Hommey

Announcing git-cinnabar 0.5.1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories using git.

Get it on github.
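If you haven't used it before, interaction goes through the hg:: remote-helper prefix; a quick sketch (the repository URL is only an example):

$ git clone hg::https://hg.mozilla.org/mozilla-unified
$ cd mozilla-unified
$ git pull    # fetches new mercurial changesets as git commits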

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0?

  • Updated git to 2.21.0 for the helper.
  • Experimental native mercurial support (used when mercurial libraries are not available) now has feature parity.
  • Try to read the git system config from the same place as git does. This fixes native HTTPS support with Git on Windows.
  • Avoid pushing more commits than necessary in some corner cases (see e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1529360).
  • Added an --abbrev argument for git cinnabar {git2hg,hg2git} to display shortened sha1s.
  • Can now pass multiple revisions to git cinnabar fetch.
  • Don’t require the requests python module for git cinnabar download.
  • Fixed git cinnabar fsck file checks to actually report errors.
  • Properly return an error code from git cinnabar rollback.
  • Track last fsck’ed metadata and allow git cinnabar rollback --fsck to go back to last known good metadata directly.
  • git cinnabar reclone can now be rolled back.
  • Added support for git bundles as cinnabarclone source.
  • Added alternate styles of remote refs.
  • More resilient to interruptions when HTTP Range requests are supported.
  • Fixed off-by-one when storing mercurial heads.
  • Better handling of mercurial branchmap tips.
  • Better support for end of parts in bundle v2.
  • Improved handling of URLs to local mercurial repositories.
  • Fixed compatibility with (very) old mercurial servers when using mercurial 5.0 libraries.
  • Converted Continuous Integration scripts to Python 3.

08 May, 2019 09:57PM by glandium

Molly de Blanc

Free software activities (April, 2019)

April was a very exciting month for my free software life. Namely, I switched jobs, sadly leaving the FSF and excitedly starting at the GNOME Foundation. No one was mean to me in April, which is exciting as always.

Pink tulips and two red poppies from above.

Personal activities

  • I started my second term on the Board of Directors of the Open Source Initiative. Three more years!
  • With Debian, we took further steps with GSoC and Outreachy.
  • I attended my second bug squashing party. Yay, Debian!
  • I spoke at FOSS North and Linux Fest Northwest.
  • There was an OSI board phone call.

Professional activities

  • Wrapped up work at the FSF!
  • Started at the GNOME Foundation as the Strategic Initiatives Manager!
  • Attended and tabled at Linux Fest Northwest on behalf of GNOME.
  • Failed to determine if I want to pronounce the “g” in GNOME.

08 May, 2019 04:40PM by mollydb

hackergotchi for Daniel Lange

Daniel Lange

OpenSSH taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?

Problem

Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05), and, two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or IO jitter to source the pseudo random number generator from.
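You can observe this from userspace; here is a minimal sketch using Python's os.getrandom() wrapper (available since Python 3.6), where the non-blocking flag turns the wait into an error instead:

import os

try:
    # Ask for 16 bytes without blocking; raises BlockingIOError
    # (the EAGAIN case) while the kernel's pool is not yet initialized.
    print(os.getrandom(16, os.GRND_NONBLOCK).hex())
except BlockingIOError:
    print("entropy pool not initialized yet")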

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issue #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [Archive.org mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].
Update: RDRAND doesn't return random data on pre-Ryzen AMD CPUs (AMD CPU family <23) as per systemd bug #11810. It will always be 0xFFFFFFFFFFFFFFFF (2^64 - 1). This has been a known issue since 2014; see kernel bug #85991.

Debian

Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.

Solutions

You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.
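Whichever route you take, you can watch the pool fill by checking how much entropy the kernel currently accounts for:

cat /proc/sys/kernel/random/entropy_avail

The pool size is 4096 bits on typical kernels (see /proc/sys/kernel/random/poolsize); a pool that stays near zero early at boot is what keeps getrandom() blocking.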

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.

VirtIO

For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
virtio_rng.0
# cat /sys/devices/virtual/misc/hw_random/rng_current
virtio_rng.0

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.

Chaoskey

The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.

Jitterentropy_RNG

Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.

Haveged

apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2018 2019 :-).

early-rng-init-tools

Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He provides packages at http://fish.mirbsd.org/~tg/Debs/dists/sid/wtf/Pkgs/early-rng-init-tools/ .

First he deserves kudos for naming a tool for what it does. This makes it much more easily discoverable than the trend to name things after girlfriends, pets or anime characters. The implementation hooks into early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal, and there has been quite extensive scrutiny, but none of it discovered serious issues. Early-rng-init-tools look like a good option for non-RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable platforms.

Updates

14.01.2019

Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member, stumbled over the systemd issue preventing Apache's libssl from initializing at boot in a Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans December 2018 and January 2019 and mostly reiterated what has been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons) and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916690
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases, albeit ones running a proprietary product as the hypervisor.

16.01.2019

Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al -- https://factorable.net/weakkeys12.extended.pdf)
I've just opened https://bugs.launchpad.net/qemu/+bug/1811758 to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.

12.03.2019

Stefan Fritsch revived the thread on debian-devel again and got a few more interesting titbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.

16.04.2019

The Debian installer for Buster is running into the same problem now as indicated in the release notes for RC1. Bug #923675 has details. Essentially choose-mirror waits several minutes for entropy when used with https mirrors.

08.05.2019

The RDRAND use introduced in systemd to bypass the kernel random number generator during boot falls foul of an AMD pre-Ryzen bug, as RDRAND on these systems doesn't return random data after a suspend / resume cycle. Added an update note to the systemd section above.


  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

08 May, 2019 06:58AM by Daniel Lange

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

x13binary 1.1.39-2

An updated x13binary package 1.1.39-2 of the X-13ARIMA-SEATS program by the US Census Bureau (with upstream release 1.1.39) is now on CRAN, pretty much exactly two years after the previous release 1.1.39-1.

The x13binary package takes the pain out of installing X-13ARIMA-SEATS by making it a fully resolved CRAN dependency. For example, if you install the excellent seasonal package by Christoph, then X-13ARIMA-SEATS will get pulled in via the x13binary package and things just work: depend on x13binary and, on all relevant OSs supported by R, you should have an X-13ARIMA-SEATS binary installed which will be called seamlessly by the higher-level packages such as seasonal or gunsales. With this, the full power of what is likely the world’s most sophisticated deseasonalization and forecasting package is now at your fingertips and the R prompt, just like any other of the 14100+ CRAN packages. You can read more about this (and the seasonal package) in the recent Journal of Statistical Software paper by Christoph and myself.

There is almost no change in this release – apart from having to force StagedInstall: no following the R 3.6.0 release as the macOS build is otherwise broken now.

Courtesy of CRANberries, there is also a diffstat report for this release showing changes to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 May, 2019 02:26AM

May 06, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

Debutsav Mumbai and itsfoss.com changes

Before starting today’s blog post I would like to share a video of food which will definitely make you hungry. So make sure you are either eating something or have eaten fully before watching the video below.

The video is from a place called Aaoji Khaaoji Restaurant which is on J.M. Road. It is a proper foodie place. They have an offer there: if you can eat their whole thali by yourself, they will either feed you for your whole life or give you double the money you paid. I don’t think anybody has even been able to finish half of that thali.

Debutsav Mumbai

A few members of Debian India and I have been trying to get a DebUtsav Mumbai happening, and now we have the dates for the event, as announced today on the mailing list. While there are definitely a lot of things that need to be done in order for a successful DebUtsav to happen, at least we have the dates, so other things can start moving.

Itsfoss.com

Another thing I am sharing is that itsfoss.com, where I write Debian-related articles, will be going through some interesting changes. While I’m not at liberty to share the changes, they are and will be interesting to say the least. The only hint or giveaway I can give is that the changes may be seen in the next few weeks, or at most a month. All the rest, till later 🙂

06 May, 2019 06:29PM by shirishag75

hackergotchi for Olivier Berger

Olivier Berger

Running networking labs over Kubernetes with Antidote

I’ve just come across Antidote, a recent project that aims at running networking-oriented labs over Kubernetes. It is developed by members of the Network Reliability Engineering community (Juniper-related, AFAIU), to power the NRE Labs platform.

It looks very similar to other platforms that allow you to run labs remotely in consoles opened on cloud machines, alongside lab instructions.

I find it interesting as the code is published under a FLOSS license (Apache), and seems to be runnable over any Kubernetes installation: you can test it with Minikube through the selfmedicate repo.

Antidote demo running virtual labs in Kubernetes with selfmedicate/minikube, running locally from Olivier Berger on Vimeo.

Internally, it uses Guacamole to provide the Web consoles connected via SSH to the hosts (or emulated devices) running on the k8s cluster. Each learner will get her own k8s namespace running the corresponding PODs.

In principle, it’s rather easy to package any app that can be used from the CLI to run it over Antidote.

The main drawback I’ve found so far, wrt our experiments with virtual labs, is the limitation to SSH access for a CLI: the Guacamole setup doesn’t provide access to VNC, AFAICS (yet).

Quite an interesting and promising project anyway.

06 May, 2019 02:42PM by Olivier Berger

hackergotchi for Julien Danjou

Julien Danjou

An Introduction to Functional Programming with Python


Many Python developers are unaware of the extent to which you can use functional programming in Python, which is a shame: with few exceptions, functional programming allows you to write more concise and efficient code. Moreover, Python’s support for functional programming is extensive.

Here I'd like to talk a bit about how you can actually have a functional approach to programming with our favorite language.

Pure Functions

When you write code using a functional style, your functions are designed to have no side effects: instead, they take an input and produce an output without keeping state or modifying anything not reflected in the return value. Functions that follow this ideal are referred to as purely functional.

Let’s start with an example of a regular, non-pure function that removes the last item in a list:

def remove_last_item(mylist):
    """Removes the last item from a list."""
    mylist.pop(-1)  # This modifies mylist

This function is not pure: it has a side effect as it modifies the argument it is given. Let's rewrite it as purely functional:

def butlast(mylist):
    """Like butlast in Lisp; returns the list without the last element."""
    return mylist[:-1]  # This returns a copy of mylist

We define a butlast() function (like butlast in Lisp) that returns the list without the last element, without modifying the original list. Instead, it returns a copy of the list with the modification applied, allowing us to keep the original. The practical advantages of using functional programming include the following:

  • Modularity. Writing with a functional style forces a certain degree of
    separation in solving your individual problems and makes sections of code
    easier to reuse in other contexts. Since the function does not depend on any
    external variable or state, calling it from a different piece of code is
    straightforward.
  • Brevity. Functional programming is often less verbose than other paradigms.
  • Concurrency. Purely functional functions are thread-safe and can run
    concurrently. Some functional languages do this automatically, which can be
    a big help if you ever need to scale your application, though this is not
    quite the case yet in Python.
  • Testability. Testing a functional program is incredibly easy: all you need
    is a set of inputs and an expected set of outputs. They are idempotent,
    meaning that calling the same function over and over with the same arguments
    will always return the same result.

Note that concepts such as list comprehensions in Python are already functional in their approach, as they are designed to avoid side effects. We'll see in what follows that some of the functional functions Python provides can actually be expressed as list comprehensions!

Python Functional Functions

You might repeatedly encounter the same set of problems when manipulating data using functional programming. To help you deal with this situation efficiently, Python includes a number of functions for functional programming. Here, we'll take a quick overview of some of these built-in functions, which allow you to build fully functional programs. Once you have an idea of what’s available, I encourage you to research further and try out functions where they might apply in your own code.

Applying Functions to Items with map

The map() function takes the form map(function, iterable) and applies function to each item in iterable to return an iterable map object:

>>> map(lambda x: x + "bzz!", ["I think", "I'm good"])
<map object at 0x7fe7101abdd0>
>>> list(map(lambda x: x + "bzz!", ["I think", "I'm good"]))
['I thinkbzz!', "I'm goodbzz!"]

You could also write an equivalent of map() using list comprehension, which
would look like this:

>>> (x + "bzz!" for x in ["I think", "I'm good"])
<generator object <genexpr> at 0x7f9a0d697dc0>
>>> [x + "bzz!" for x in ["I think", "I'm good"]]
['I thinkbzz!', "I'm goodbzz!"]

Filtering Lists with filter

The filter() function takes the form filter(function or None, iterable) and filters the items in iterable based on the result returned by function. This returns an iterable filter object:

>>> filter(lambda x: x.startswith("I "), ["I think", "I'm good"])
<filter object at 0x7f9a0d636dd0>
>>> list(filter(lambda x: x.startswith("I "), ["I think", "I'm good"]))
['I think']

You could also write an equivalent of filter() using list comprehension, like
so:

>>> (x for x in ["I think", "I'm good"] if x.startswith("I "))
<generator object <genexpr> at 0x7f9a0d697dc0>
>>> [x for x in ["I think", "I'm good"] if x.startswith("I ")]
['I think']

Getting Indexes with enumerate

The enumerate() function takes the form enumerate(iterable[, start]) and returns an iterable object that provides a sequence of tuples, each consisting of an integer index (starting with start, if provided) and the corresponding item in iterable. This function is useful when you need to write code that refers to array indexes. For example, instead of writing this:

i = 0
while i < len(mylist):
    print("Item %d: %s" % (i, mylist[i]))
    i += 1

You could accomplish the same thing more efficiently with enumerate(), like so:

for i, item in enumerate(mylist):
    print("Item %d: %s" % (i, item))

Sorting a List with sorted

The sorted() function takes the form sorted(iterable, key=None, reverse=False) and returns a sorted version of iterable. The key argument allows you to provide a function that returns the value to sort on:

>>> sorted([("a", 2), ("c", 1), ("d", 4)])
[('a', 2), ('c', 1), ('d', 4)]
>>> sorted([("a", 2), ("c", 1), ("d", 4)], key=lambda x: x[1])
[('c', 1), ('a', 2), ('d', 4)]

Finding Items That Satisfy Conditions with any and all

The any(iterable) and all(iterable) functions both return a Boolean depending on the values returned by iterable. These simple functions are equivalent to the following full Python code:

def all(iterable):
    for x in iterable:
        if not x:
            return False
    return True

def any(iterable):
    for x in iterable:
        if x:
            return True
    return False

These functions are useful for checking whether any or all of the values in an iterable satisfy a given condition. For example, the following checks a list for two conditions:

mylist = [0, 1, 3, -1]
if all(map(lambda x: x > 0, mylist)):
    print("All items are greater than 0")
if any(map(lambda x: x > 0, mylist)):
    print("At least one item is greater than 0")

The key difference here, as you can see, is that any() returns True when at least one element meets the condition, while all() returns True only if every element meets the condition. The all() function will also return True for an empty iterable, since none of the elements is False.
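The edge cases for an empty iterable follow directly from the definitions above:

>>> all([])
True
>>> any([])
False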

Combining Lists with zip

The zip() function takes the form zip(iter1 [,iter2 [...]]) and takes multiple sequences and combines them into tuples. This is useful when you need to combine a list of keys and a list of values into a dict. Like the other functions described here, zip() returns an iterable. Here we have a list of keys that we map to a list of values to create a dictionary:

>>> keys = ["foobar", "barzz", "ba!"]
>>> map(len, keys)
<map object at 0x7fc1686100d0>
>>> zip(keys, map(len, keys))
<zip object at 0x7fc16860d440>
>>> list(zip(keys, map(len, keys)))
[('foobar', 6), ('barzz', 5), ('ba!', 3)]
>>> dict(zip(keys, map(len, keys)))
{'foobar': 6, 'barzz': 5, 'ba!': 3}
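As with map() and filter() earlier, the same dictionary can also be built with a comprehension:

>>> {k: len(k) for k in keys}
{'foobar': 6, 'barzz': 5, 'ba!': 3}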

What's Next?

While Python is often advertised as being object oriented, it can be used in a very functional manner. A lot of its built-in concepts, such as generators and list comprehensions, are functionally oriented and don’t conflict with an object-oriented approach. Python provides a large set of built-in functions that can help you keep your code free of side effects. That also limits the reliance on a program’s global state, for your own good.

In the next blog post, we'll see how you can leverage Python's functools and itertools modules to enhance your functional adventure. Stay tuned!

06 May, 2019 08:58AM by Julien Danjou

Sergio Durigan Junior

Improve gcore and support dumping ELF headers

Back in 2016, when life was simpler, a Fedora GDB user reported a bug (or a feature request, depending on how you interpret it) saying that GDB's gcore command did not respect the COREFILTER_ELF_HEADERS flag, which instructs it to dump memory pages containing ELF headers. As you may or may not remember, I have already written about the broader topic of revamping GDB's internal corefile dump algorithm; it's an interesting read and I recommend it if you don't know how Linux (or GDB) decides which mappings to dump to a corefile.

Anyway, even though the bug was interesting and had to do with a work I'd done before, I couldn't really work on it at the time, so I decided to put it in the TODO list. Of course, the "TODO list" is actually a crack where most things fall through and are usually never seen again, so I was blissfully ignoring this request because I had other major priorities to deal with. That is, until a seemingly unrelated problem forced me to face this once and for all!

What? A regression? Since when?

As the Fedora GDB maintainer, I'm routinely preparing new releases for Fedora Rawhide distribution, and sometimes for the stable versions of the distro as well. And I try to be very careful when dealing with new releases, because a regression introduced now can come and bite us (i.e., the Red Hat GDB team) back many years in the future, when it's sometimes too late or too difficult to fix things. So, a mandatory part of every release preparation is to actually run a regression test against the previous release, and make sure that everything is working correctly.

One of these days, some weeks ago, I had finished running the regression check for the release I was preparing when I noticed something strange: a specific, Fedora-only corefile test was FAILing. That's a no-no, so I started investigating and found that the underlying reason was that, when the corefile was being generated, the build-id note from the executable was not being copied over. Fedora GDB has a local patch whose job is to, given a corefile with a build-id note, locate the corresponding binary that generated it. Without the build-id note, no binary was being located.

Coincidentally or not, at the same time I started noticing some users reporting very similar build-id issues on freenode's #gdb channel, and I thought that this bug had the potential to become a big headache for us if nothing was done to fix it right away.

I asked for some help from the team, and we managed to discover that the problem was also happening with upstream gcore, and that it was probably something that binutils was doing, and not GDB. Hmm...

Ah, so it's ld's fault. Or is it?

So there I went, trying to confirm that it was binutils's fault, and not GDB's. Of course, if I could confirm this, then I could also tell the binutils guys to fix it, which meant less work for us :-).

With a lot of help from Keith Seitz, I was able to bisect the problem and found that it started with the following commit:

commit f6aec96dce1ddbd8961a3aa8a2925db2021719bb
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Tue Feb 27 11:34:20 2018 -0800

    ld: Add --enable-separate-code

This is a commit that touches the linker, which is part of binutils. So that means this is not GDB's problem, right?!? Hmm. No, unfortunately not.

What the commit above does is simply enable the use of --enable-separate-code (or -z separate-code) by default when linking an ELF program on x86_64 (more on that later). At first glance, this change should not impact corefile generation, and indeed, if you tell the Linux kernel to generate a corefile (for example, by doing sleep 60 & and then hitting C-\), you will notice that the build-id note is included in it! So GDB was still a suspect here. The investigation needed to continue.

What's with -z separate-code?

The -z separate-code option makes the linker put the code segment of the ELF file in a completely separate segment from the data segment. This was done to increase the security of generated binaries. Before it, everything (code and data) was put together in the same memory region. What this means in practice is that, before, you would see something like this when you examined /proc/PID/smaps:

00400000-00401000 r-xp 00000000 fc:01 798593                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd ex mr mw me dw sd

And now, you will see two memory regions instead, like this:

00400000-00401000 r--p 00000000 fc:01 799548                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         4 kB
Private_Dirty:         0 kB
Referenced:            4 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd mr mw me dw sd
00401000-00402000 r-xp 00001000 fc:01 799548                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd ex mr mw me dw sd

A few minor things have changed, but the most important of them is the fact that, before, the whole memory region had anonymous data in it, which means that it was considered an anonymous private mapping (anonymous because of the non-zero Anonymous amount of data; private because of the p in the r-xp permission bits). After -z separate-code was made default, the first memory mapping does not have Anonymous contents anymore, which means that it is now considered to be a file-backed private mapping instead.

GDB, corefile, and coredump_filter

It is important to mention that, unlike the Linux kernel, GDB doesn't have all of the necessary information readily available to decide the exact type of a memory mapping, so when I revamped this code back in 2015 I had to create some heuristics to try and determine this information. If you're curious, take a look at the linux-tdep.c file on GDB's source tree, specifically at the functions dump_mapping_p and linux_find_memory_regions_full.

When GDB is deciding which memory regions should be dumped into the corefile, it respects the value found at the /proc/PID/coredump_filter file. The default value for this file is 0x33, which, according to core(5), means:

Dump memory pages that are either anonymous private, anonymous
shared, ELF headers or HugeTLB.
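The coredump_filter value is a bitmask over mapping types; as a small illustration, here is how the default 0x33 decodes in Python, with bit meanings as documented in core(5):

# Bit meanings as documented in core(5)
BITS = {
    0: "anonymous private",
    1: "anonymous shared",
    2: "file-backed private",
    3: "file-backed shared",
    4: "ELF headers",
    5: "private huge pages",
}

value = 0x33  # the default coredump_filter
print([name for bit, name in BITS.items() if value & (1 << bit)])
# -> ['anonymous private', 'anonymous shared', 'ELF headers', 'private huge pages']

Bit 4 is the ELF-headers bit that matters for the rest of this story.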

GDB had the support implemented to dump almost all of these pages, except for the ELF headers variety. And, as you can probably infer, this means that, before the -z separate-code change, the very first memory mapping of the executable was being dumped, because it was marked as anonymous private. However, after the change, the first mapping (which contains only data, no code) wasn't being dumped anymore, because it was now considered by GDB to be a file-backed private mapping!

Finally, that is the reason for the difference between corefiles generated by GDB and Linux, and also the reason why the build-id note was not being included in the corefile anymore! You see, the first memory mapping contains not only the program's data, but also its ELF headers, which in turn contain the build-id information.

gcore, meet ELF headers

The solution was "simple": I needed to improve the current heuristics and teach GDB how to determine if a mapping contains an ELF header or not. For that, I chose to follow the Linux kernel's algorithm, which basically checks the first 4 bytes of the mapping and compares them against \177ELF, which is ELF's magic number. If the comparison succeeds, then we just assume we're dealing with a mapping that contains an ELF header and dump it.

In all fairness, Linux just dumps the first page (4K) of the mapping, in order to save space. It would be possible to make GDB do the same, but I chose the faster way and just dumped the whole mapping, which, in most scenarios, shouldn't be a big problem.

It's also interesting to mention that GDB will just perform this check if:

  • The heuristic has decided not to dump the mapping so far, and;
  • The mapping is private, and;
  • The mapping's offset is zero, and;
  • There is a request to dump mappings with ELF headers (i.e., coredump_filter).

Linux also makes these checks, by the way.
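Putting these conditions together, the heuristic amounts to something like the following pseudo-Python sketch (GDB's actual implementation is C code in linux-tdep.c; the mapping object and read_memory helper here are hypothetical stand-ins):

ELF_MAGIC = b"\x7fELF"
COREFILTER_ELF_HEADERS = 1 << 4  # bit 4 of /proc/PID/coredump_filter

def should_dump_as_elf_header(mapping, filter_flags, read_memory):
    """Only consulted when the earlier heuristics decided NOT to dump."""
    # There must be a request to dump ELF-header pages...
    if not (filter_flags & COREFILTER_ELF_HEADERS):
        return False
    # ...the mapping must be private and start at offset zero...
    if not mapping.private or mapping.offset != 0:
        return False
    # ...and its first four bytes must be ELF's magic number,
    # the same check the Linux kernel performs.
    return read_memory(mapping.start, 4) == ELF_MAGIC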

The patch, finally

I submitted the patch to the mailing list, and it was approved fairly quickly (with a few minor nits).

The reason I'm writing this blog post is that I'm very happy with, and proud of, the whole process. It wasn't an easy task to investigate the underlying reason for the build-id failures, and it was interesting to come up with a solution that extended the work I did a few years ago. I was also able to close a few bug reports upstream, as well as the one reported against Fedora GDB.

The patch has been pushed, and is also present at the latest version of Fedora GDB for Rawhide. It wasn't possible to write a self-contained testcase for this problem, so I had to resort to using an external tool (eu-unstrip) in order to guarantee that the build-id note is correctly present in the corefile. But that's a small detail, of course.

Anyway, I hope this was an interesting (albeit large) read!

06 May, 2019 03:45AM by Sergio Durigan Junior

May 05, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

A tale of broken promises

Before I venture out I would like to share with all one of my old classic favorite songs –

It’s a beautiful soft song and there are many such songs that were composed in the 1960’s. This specific song is from the movie ‘Woh Kaun Thi’ . The original song was sung by Indian Nigtingale Mrs. Lata Mangeshkar. The newer sound that you hear though is of Shreya Ghosal.

At this time the election scenario has been where all promises have been broken. While in the last blog post I had promised I will share some of the issues lot of events have taken in-between and there also had been lot of memories of broken promises by the ruling party which has lead not just me but probably lot of Puneties unhappy and disappointed. I will try and share some of them.

2014 Water pipeline promises

Pune has been facing water shortages since the last decade or so. In fact the situation has turned from bad to worse. The previous government promises they will make lot of pipelines and were voted to power. There was water pipeline scam and the Government changed in 2014. From 2014 to 2019 the water crisis has worsened. In my own home, we get water for only 2-3 hours daily. We have not been able to entertain any families for more than a day because of water issues. The only fortunate part is that I have to walk at the most 20 meters to take water while in some places people have to go miles or do dangerous things such as one I’m sharing below.

As can be seen they are going 60 feet down a well without any support structure or even a rope which means they can fall, injure or even die at any moment, a false step. And this they do everyday in order to get water for their families. This place is around 100 – 120 kms. from Pune. More than 1/3rd of the state has been declared as drought hit

Now we have the present ruling Government in Central, State as well as Municipalities but still the water projects are at the same place since 2014. In fact last year they raised 200 crore rupees saying we are short of funds for the project. Luckily I didn’t participate and I will share why I say luckily. Instead of INR 200 crore, people gave between INR 400-500 crores and this was raised in 2-3 days when the municipal bonds were going to be there for 2 weeks. It was made to imply that the investments would be under sovereign guarantee of Government of India . After the money was collected, it was told that neither the Center nor the State takes any responsibility for the money that was collected. The last I heard on the issue is that apparently the money has been kept in a Fixed Deposit Account (no reason given) .

Apart from the monetary issue, why did it take them 4 years to know that Pune has a water problem when that was their political plank which they had sold to Puneties. Of the several projects that are supposed to happen, the Bhama Akshad Dam project has been hanging fire since 2010. In fact it has its own page on wikipedia so it’s one of the notable water projects. The less I say of the other projects, the better.

Pune Municipal Transport or PMT

One of the other electoral promises that the current Government had made was that there will be a huge improvement in lot number of areas of PMT’s functioning. Accountability, number of buses, number of routes, less breakdowns and less corruption. Neither of these promises have been implemented. PMPML which also a webpage on wikipedia clearly shows that number of breakdowns, while unoffically the number is probably one and half times than what has been reported. As can be seen the last stats. are of 2016-2017, there are no stats. either of 2017-18 or 2018-19. This shows at the very least negligence and lack of transparency on the present Government. This is the same Government which has put spent close to 5,200 crores on advertisements (probably even more) as shared by Rajvardhan Rathore in a Lok Sabha reply in July 2018. And the amount that was spent within July 2018 – May 2019 would probably be about around 2000 crores or something as they spent a lot in ads this year, so the final tax payer and black money must be around 7,200 crores. It is possible that some estimates could be made by PRS and Election Commission, although the less I say about the EC, the better.

Pune Metro Project

This is and was one of the projects I am and was looking forwards to. The Pune Metro was going to make it easier just like Delhi Metro makes it easier to travel from point A to point B without waiting for a bus which when it will come is not known. To keep updated on the project I followed the Pune Metro project twitter handle. The twitter handle is and was useful as it used to keep me updated about how the Pune project is doing. I am one of those who used to check the twitter page daily just to know if they had posted something new just because I have fascination about transit systems, rails, planes, buses you name it. From 18th March 2019, the twitter handle stopped sharing any new news. The first week or two went by and I thought it is possible that due to Election Comission rules they might not be sharing. But even after Pune voted i.e 26th March, 2019 there were still no updates. Somehow I felt something is fishy and on a hunch I tried to see if the Nagpur Metro Rail handle was also showing similar.

To my surprise, the Nagpur Metro Rail were sharing, giving updates even though there is and was a huge section of Nagpur who didn’t feel the need of a metro. Pune does. So I dug a little deeper and found out that HCC has been terminated out of Pune Metro . Just to share a bit of history, this route is the most crucial part of the project as it is the first phase. Just to put a bit of context here HCC used to be a Navratna status public-private company which used to be at the bourses around INR 40-44/- which has now come to INR 13/- and will probably slide to something like INR 10/- . While I do not really want to get into how they have destructed public limited companies and it would probably need a blog post or two to share all the public limited companies they have systematically destroyed let’s leave that for another day.

As far as Pune Metro is concerned, I don’t see anything now till whatever the New Government comes into power anything will happen. This may happen in a month or two from today. Let’s say whatever Government does comes to power, there is also no guarantee that the team that is/was driving the Pune Metro will be managing then. If there is a new team it will again take its own sweet time. Even if the old team is given responsibility it will take close to 6 months to a year for replacements to come by as they would have to re-issue a global tender all over again giving an updated picture of the work done, the work yet to be completed and so on and so forth. We know the delay is going to be so severe that an ardent Pune metro video blogger whom I know, Yogi Logic shared just how things are without saying much as there is nothing left to say 😦

While I could go on sharing lot more deficiencies on the local, Municipal and State level, forget the center but that will have to be for another day.

Update 06/05/2019 – Few friends asked me to aslso share the work that Paani foundation has been doing and the transparent way they have been doing it.

Paani Foundation

Paani Foundation is headed by Aamir Khan and Kiran Rao. You can see the Paani Foundation’s website as well as the places where they are working. Now my question is, if a non-profit can show where they are doing things, show videos and stuff, also be available to answer queries and one can go and investigate then why not the Government. One thing I need to point out is that they are doing these things very cheaply. While Aamir and his team provide the expertise, the villagers spend their own money, get the materials, do shramdhan i.e, donate an hour or two from your life to make the watershed dam. While they wanted to do this since 2014 but due to bureacratic hurdles and what not, they could only start in 2017. There was an interview in which he shared the above. He also shared he learnt it in WOTR, a pune-based organization.

There is another thing as well. I think I had shared adventures travelingr with a friend when he wanted to look up houses, while he is still on the hunt, lot of projects that we together have now got some sort of RWH (Rain Water Harvesting projects) but that also seems interesting although whether this is too little, too late, only time will tell. I did see this also a few days and was wondering if the situation has turned so desperate. Because it’s the onion dunno if one should believe it or it’s a hoax. Even in the true/fake news the white man should live while Africans should die 😦

05 May, 2019 08:50PM by shirishag75

Reproducible builds folks

Reproducible Builds in April 2019

Welcome to the April 2019 report from the Reproducible Builds project! In these now-monthly reports we will outline the most important things that we have been up to in and around the world of reproducible builds & secure toolchains.

As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users pre-compiled. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

In this month’s report, we will cover:

  • Media coverageCompromised toolchains, what makes a good digital product?, etc.
  • Upstream newsScala and Go working on reproducibility, etc.
  • Distribution workDistributing build certificates, an update from openSUSE, etc.
  • Software developmentNew features in diffoscope, yet more test framework development, etc
  • Misc newsFrom our mailing list, etc.
  • Getting in touchHow to contribute, etc

Media coverage

  • The SecureList website reported on Operation “ShadowHammer”, a high-profile supply chain attack involving the ASUS Live Update Utility. As their post describes in more detail tampering with binaries would usually break the digital signature, but in this case the digital signature itself appeared to have been compromised. (Read more)

Upstream news

The first non-trivial library written in the Scala programming language on the Java Virtual Machine was released with Arnout Engelen’s sbt-reproducible-builds plugin enabled during the build. This resulted in Akka 2.5.22 becoming reproducible, both for the artifacts built with version 2.12.8 and 2.13.0-RC1 of the Scala compiler. For 2.12.8, the original release was performed on a Mac and the validation was done on a Debian-based machine, so it appears the build is reproducible across diverse systems. (Mailing list thread)

Jeremiah “DTMB” Orians announced the 1.3.0 release of M2-Planet, a self-hosting C compiler written in a subset of the features it supports. It has been bootstrapped entirely from hexadecimal (!) with 100% reproducible output/binaries. This new release sports a self-hosting port for an additional architecture amongst other changes. Being “self-hosted” is an important property as it can provide a method of validating the legitimacy of the build toolchain.

The Go programming language has been making progress in making their builds reproducible. In 2016, Ximin Luo had created issue #16860 requesting that the compiler generate the same result regardless of the path in which the package is built. However, progress was recently made in Change #173344 (and adjacent), which will permit a -trimpath mode that generates binaries that do not contain any local path names, similar to -fpath-prefix-map.
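Assuming the change lands as described, producing a binary free of local paths would then presumably be as simple as:

$ go build -trimpath -o myprog .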

The fontconfig library for configuring and customising font access in a number of distributions announced they had merged patches to allow various cache files to be reproducible. This is after Chris Lamb posted a historical summary and a request for action to Fontconfig’s mailing list in January 2019.

Distribution work

In Debian, Chris Lamb added 90 reviews of Debian packages, adding to our knowledge about identified issues and 14 issues were automatically removed. Chris also added two issue types: build_date_in_egg_info_directory_name & randomness_in_perl6_precompiled_libraries.

Holger Levsen started a discussion regarding the distribution of .buildinfo files. These files record the environment that was used as part of a particular build so that — along with the source code — the aforementioned environment can be recreated at a later date to reproduce the exact binary. Distributing these files is important so that others can validate that a build is actually reproducible. In his post, Holger refers to two services that now exist, buildinfo.debian.net and buildinfos.debian.net.

In addition, Holger restarted a long-running discussion regarding the reproducibility status of Debian buster touching on questions of potentially performing mass rebuilds of all packages in order that they use updated toolchains.

There was yet more progress towards making the Debian Installer images reproducible. Following on from last month, Chris Lamb performed some further testing of the generated images. Cyril Brulebois then made an upload of the debian-installer package to Debian that included a number of Chris’ patches, and Vagrant Cascadian filed a patch to fix the reproducibility of “u-boot” images by using the -n argument to gzip(1).

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Bernhard also posted to our mailing list regarding enabling the normalisation of file modification times in Python .pyc files and opened issue #1133809 in the openSUSE bug tracker.


Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility, but rather provides helpful and human-readable guidance for packages that are not reproducible, rather than relying on essentially-useless diffs.

This month, Chris Lamb did a lot of development of diffoscope, including:

  • Updating the certificate of the try.diffoscope.org web-based version of the tool.

  • Uploaded version 114 to the Debian experimental distribution and made the corresponding upload to the PyPI package repository.

  • Added support for semantic comparison of GnuPG “keybox” (.kbx) files. (#871244)

  • Add the ability to treat missing tools as failures if a “magic” environment variable is detected in order to facilitate interpreting required tools on the Debian autopkgtests as actual test failures, rather than skipping them. The behaviour of the existing testsuite remains unchanged. (#905885)

  • Filed a “request for packaging” for the annocheck tool which can be used to “analyse an application’s compilation”. This is as part of an outstanding wishlist issue. (#926470)

  • Consolidated on a single alias as the exception value across the entire codebase. []

In addition, Vibhu Agrawal ensured that diffoscope fails more gracefully when running out of disk space, resolving Debian bug #874582, and Vagrant Cascadian updated to diffoscope 114 in GNU Guix. Thanks!

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Chris Lamb made the following improvements:

  • Workaround Archive::Zip’s incorrect handling of the localExtraField class member field by monkey-patching the accessor methods to always return normalised values. This fixes the normalisation of Unix ownership metadata within .zip and .epub files. (#858431)

  • Actually check the return status from Archive::Zip when writing file to disk. []

  • Catch an edge-case where we can’t parse the length of a particular field within .zip files. []

Chris then uploaded version 1.1.3-1 to the Debian experimental distribution.

Project website

Chris Lamb made a number of improvements to our project website this month, including:

  • Using an explicit “draft” boolean flag for posts. Jekyll in Debian stable silently (!) does not support the where_exp filter. []

  • Moving more pages away from the old design with HTML to Markdown formatting and the new design template. []

  • Adding a simple Makefile to implicitly document how to build the site [] and adding a simple .gitlab-ci.yml to test branches/builds [].

  • Adding a simple “lint” command so we can see how many pages are using the old style. []

  • Adding an explicit link to our “Who is involved?” page in the footer of the newer design [] and adding a link to the donation page [].

  • Moved various bits of infrastructure to support a monthly report structure. []

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done in the last month:

  • Holger Levsen (Debian-related changes):

    • Add new experimental buildinfos.debian.net service. [][][]
    • Allow pushing of .buildinfo files from coccia. []
    • Permit rsync to write into subdirectories. []
    • Include the meta “pool” job in the overall job health view. []
    • Add support for host-specific SSH authorized_keys files used on a particular build node. []
    • Show link to maintenance jobs for offline nodes. [][]
    • Increase the job timeout for some runners from 3 to 5 days. []
    • Don’t try to turn Jenkins or nodes offline too quickly. [][]
    • Fix pbuilder lock files if necessary. []
  • Mattia Rizzolo:

    • Special-case the debian-installer package when building to allow it access to the internet. []
    • Force installing the debootstrap from stretch backports and remove cdebootstrap. []
    • Install the python3-yaml package on nodes as it is needed by the deploy script. []
    • Add/update the new reproducible-builds.org MX records. [][]
    • Fix typo in comment; thanks to ijc for reporting! []

Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [] all performed a large amount of build node maintenance, system & Jenkins administration and Chris Lamb provided a patch to avoid double spaces in IRC notifications [].


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:



This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 May, 2019 05:08PM

hackergotchi for Colin Watson

Colin Watson

Buster upgrade

I upgraded my home server from Debian stretch to buster recently, which is something I normally do once we’re frozen: this is a system that was first installed in 1999 and has a lot of complicated stuff on it, and while I try to keep it as cleanly maintained as I can, it still often runs into some interesting problems. Things went largely OK this time round, although there were a few snags of varying severity, some of which weren’t Debian’s fault.

As ever, etckeeper made it much more comfortable to make non-trivial configuration file changes without fearing that I was going to lose information.

  • The first apt full-upgrade failed part-way through with “dependency problems prevent processing triggers for desktop-file-utils” for what didn’t seem like a particularly good reason; dpkg --configure -a sorted it out and I was able to resume the upgrade from there. I think I’ve seen a report of this somewhere recently as it rang a bell, though I haven’t yet found it.

  • I had a number of truly annoying configuration file resolutions to perform. There’s not much to be done about that except try to gradually move things to .d directories where available, and other such strategies to minimise the local differences I’m maintaining.

  • I had an old backup disk that had failed some time ago but was still plugged in and occasionally generating ATA errors. These made some parts of the upgrade excruciatingly slow, so as soon as I got to a point where I had to reboot anyway I took the opportunity to open up the case and unplug it.

  • I hit #919621 “lvm2: Update unexpectedly activates system ID check, bypassing impossible”. Fortunately I noticed the problem before rebooting, thanks to warning messages from various things, and I adjusted my LVM configuration to set a system ID matching the one in my volume group (a sketch of that change appears after this list). Unfortunately I forgot to run update-initramfs -u after doing so, and so I ended up having to use break=premount on the kernel command line and fix things up in the same way in the initramfs until I could update it properly. I’m not sure what the right fix for this is, although it probably only affects some rather old VGs; I created mine in 2004.

  • I ran into #924881 “postgresql: buster upgrade breaks older postgresql (9.6) and newer postgresql (11) is also inoperative” (in fact a bug in ssl-cert). It was correct to reject the snakeoil certificate, but the upgrade failure mode was pretty graceless and it would have been helpful for something to notice the situation and prompt me to regenerate the certificate.

  • My networking wasn’t happy after the upgrade: I ended up with some missing addresses, which I’m prepared to believe was the fault of my very old and badly-organised /etc/network/interfaces file. I rearranged it to follow what seems to be the modern best practice for handling multiple addresses on an interface: one iface stanza per address using the same interface name, rather than pre-up ip addr add lines or alias interfaces or anything like that. After that, the interface sometimes refused to come up at all, with “ADDRCONF(NETDEV_UP): eth0: link is not ready” messages. Some web-searching and grepping of the kernel source led me to the idea that listing inet6 stanzas before inet stanzas for a given interface name was likely to be helpful, and so it proved: I now have an /etc/network/interfaces that both works and is much easier to read (see the sketch after this list).

  • I had to do some manual steps to get Icinga Web 2 authentication working again: I followed the upstream directions to upgrade the database schema, and I had to run a2enmod php7.3 manually since the previous enablement of php7.0 wasn’t carried over. (I’m not completely sure if the first step was required, but the second certainly was.)
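
Since a couple of those changes are easier to show than to describe, here is roughly what they look like. First, the rearranged interfaces file; this is a sketch with invented addresses rather than my real configuration:

    # /etc/network/interfaces (sketch)
    auto eth0

    # inet6 stanza listed first so that the interface comes up reliably
    iface eth0 inet6 static
        address 2001:db8::2/64
        gateway 2001:db8::1

    iface eth0 inet static
        address 192.0.2.2/24
        gateway 192.0.2.1

    # an additional address is just another stanza with the same name
    iface eth0 inet static
        address 192.0.2.3/24

And the LVM system ID adjustment amounts to something like the following (again only a sketch, following lvmsystemid(7); substitute the system ID your volume group actually carries, as reported by vgs -o+systemid):

    # /etc/lvm/lvm.conf (sketch): read this host's system ID from lvmlocal.conf
    global {
        system_id_source = "lvmlocal"
    }

    # /etc/lvm/lvmlocal.conf (sketch)
    local {
        system_id = "myhost"
    }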

Other than that, everything seems to be working well now.

05 May, 2019 12:10AM by Colin Watson

May 04, 2019

Thorsten Alteholz

My Debian Activities in April 2019

FTP master

This was again a quiet month: I accepted only 70 packages and rejected 11 uploads. The overall number of packages that got accepted was 102. As with every release, people still upload new versions of packages to unstable during the full freeze; I always wonder why they do this.

Debian LTS

This was my fifty-eighth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 17.25h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1760-1] wget security update for one CVE
  • [DLA 1763-1] putty security update for three CVEs
  • [DLA 1765-1] gpac security update for two CVEs
  • [DLA 1767-1] monit security update for two CVEs
  • [DLA 1769-1] gst-plugins-base0.10 security update for one CVE
  • [DLA 1770-1] gst-plugins-base1.0 security update for one CVE

I also started to work on CVEs for bind.

Last but not least I did some days of frontdesk duties and tried to add my DLAs to the Debian Website.

Debian ELTS

This month was the eleventh ELTS month.

During my allocated time I uploaded:

  • ELA-99-2 of libssh2 for an upstream regression in the fix for CVE-2019-3859
  • ELA-112-1 of wget for one CVE
  • ELA-113-1 of monit for two CVEs
  • ELA-114-1 of ruby1.9.1 for four CVEs

As in LTS, I also did some days of frontdesk duties.

Other stuff

I uploaded a new upstream version of …

For my grafana challenge I uploaded golang-github-apparentlymart-go-cidr, golang-github-apparentlymart-go-rundeck-api, golang-github-corpix-uarand, golang-github-cyberdelia-heroku-go, golang-github-facebookgo-inject, golang-github-hmrc-vmware-govcd, golang-github-icrowley-fake, golang-github-michaeltjones-walk and golang-github-willf-bloom. There is still more to come. Thank you ever so much, Chris, for marking all those for ACCEPT.

I also sponsored the following packages for other members of the Go team: easygen, golang-github-anmitsu-go-shlex, golang-github-emirpasic-gods, golang-github-fzambia-sentinel, golang-github-gliderlabs-ssh, golang-github-hashicorp-go-safetemp, golang-github-jesseduffield-gocui, golang-github-jesseduffield-termbox-go, golang-github-jesseduffield-pty, golang-github-kevinburke-ssh-config, golang-github-mgutz-str, golang-github-mgutz-to, golang-github-nozzle-throttler, golang-github-src-d-gcfg, golang-github-stvp-roll, golang-gopkg-src-d-go-billy.v4

04 May, 2019 12:08PM by alteholz

May 03, 2019

hackergotchi for Keith Packard

Keith Packard

snek-mega

Snek on the Arduino Mega 2560 Rev3

The Arduino Mega 2560 Rev3 is larger in almost every way than the ATmega328P-based Arduino boards. Built around the ATmega2560 SoC, the Mega has 256K of flash, 8K of RAM and 4K of EEPROM. The processor and peripherals are compatible with those of the ATmega328P, which made supporting this board in Snek pretty easy.

ATmega328P to ATmega2560 changes

All that I needed to do to get Snek to compile for the Mega was to adjust the serial port code to use the Mega register names: gcc-avr prefixes the USART definitions with ‘USART0’ for the 2560 instead of plain ‘USART’. With that change, Snek came right up on the board.
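
The renaming is roughly of this shape. This is an illustrative sketch rather than Snek’s actual source (the SNEK_UART_* names are invented); it shows how avr-libc’s USART0_* interrupt vector names on the 2560 line up with the plain USART_* names on the 328P:

    #include <avr/interrupt.h>

    #ifdef __AVR_ATmega2560__
    #define SNEK_UART_RX_vect   USART0_RX_vect
    #define SNEK_UART_UDRE_vect USART0_UDRE_vect
    #else
    #define SNEK_UART_RX_vect   USART_RX_vect
    #define SNEK_UART_UDRE_vect USART_UDRE_vect
    #endif

    /* the serial code then uses the SNEK_UART_* names everywhere */
    ISR(SNEK_UART_RX_vect)
    {
        /* read UDR0 and queue the received byte ... */
    }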

GPIO Changes

To get the Arduino Mega pins all working, I had to add definitions for all 70 of them. That's a lot of pins! I took the definitions from the Arduino sources and matched up all of the PWM outputs as well.

USB Serial Adventures

The Arduino Duemilanove uses an FTDI USB-to-serial converter chip, while the Arduino Mega uses an ATmega16u2 SoC. The FTDI chip exposes a custom USB device, while the ATmega16u2 implements a standard CDC ACM device.

The custom USB device provides full serial control, including support for selecting XON/XOFF flow control. The CDC ACM standard only exposes configuration for RTS/CTS flow control, but doesn't provide any way to ask for XON/XOFF flow control.

The Arduino programming protocol requires a transparent 8-bit data path; because the CDC ACM standard doesn't provide a way to turn XON/XOFF on and off, the ATmega 16u2 never does XON/XOFF.

Snek needs XON/XOFF flow control to upload and download code over the serial link.

I was hoping to leave the ATmega16u2 code and the ATmega2560 boot loader alone. This would let people use Snek on the Arduino Mega without needing a programming puck. And, in fact, Snek works just fine. But you can’t use Snekde with the Mega, because getting and putting code to the device ends up with corrupted data.

So, I changed the ATmega16u2 code to enable XON/XOFF whenever the requested baud rate is below 57600, and left Snek running at 38400 baud while the boot loader uses 115200 baud. The result is that when Snek runs, there is XON/XOFF flow control, and when the boot loader runs, there is not.
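
The heuristic is about as simple as it sounds. Here is a sketch with invented names (not the actual ATmega16u2 firmware), keyed off the baud rate the host requests through the CDC ACM “set line coding” request:

    #include <stdbool.h>
    #include <stdint.h>

    /* below this rate, assume Snek is talking rather than the boot loader */
    #define XONOFF_MAX_BAUD 57600UL

    static bool xonoff_enabled;

    /* hypothetical handler, called when the host sets the line coding */
    void line_coding_changed(uint32_t baud)
    {
        /* Snek runs at 38400 baud: enable XON/XOFF so that getting and
         * putting code doesn't overrun the serial link.  The boot loader
         * runs at 115200 baud: keep the 8-bit path fully transparent so
         * programming still works. */
        xonoff_enabled = (baud < XONOFF_MAX_BAUD);
    }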

Results

With the extra ROM, I was able to include all of the math functions. With the extra RAM, the heap can be 6kB. So much space!

03 May, 2019 03:45AM

May 02, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

India – No country for women?

While I had been thinking about, and had also done quite a bit of work on, some of the issues we are already facing and which the new Government will face, e.g. in the areas of –

1. Employment
2. Agriculture
3. Petrol prices
4. Women’s safety
5. Women’s empowerment
6. Cow vigilantism
7. Telecommunications
8. Indian Railways
9. IT industry
10. Make in India
11. Startup India
12. Toilets
13. Aadhaar privacy
14. Crop loans
15. State of the economy
The list is by no means exhaustive, but it is the only way to make some sense of things.

Instead of going through the list I had prepared in the order I had wanted, I have to jump to items 4 and 5 because of a viral video which has been doing the rounds lately –

A few friends had shared this video yesterday. I first saw a partial clip without audio, so I couldn’t get any context for what was being shown. Then some other friends shared a couple of articles, one from The Wire and the other from Newslaundry, which offered slightly different takes on the same story. Then I saw the video itself and was truly shocked; just to make sure I hadn’t misheard anything, I watched it a few times, since it was entirely possible that I had misheard or misunderstood something. To recap what happened, in case the video is taken off the web or otherwise unavailable: it shows a middle-aged woman being jostled and questioned by young women over a statement she made. The video is 14:40 minutes long, and I will start at the beginning so it’s easier to figure out what happened.

Apparently the middle-aged lady, a Mrs. Soma Chakravarthy, a teacher from Mumbai, went to Delhi, and in a restaurant saw two young women wearing short dresses and claimed that it is because of women like them that women get raped. Apparently she did not stop there, but went to some men who were in the restaurant and told/asked them to rape the women. Some other women in the restaurant overheard the conversation, took the two young women, along with the elderly lady, to a nearby mall, and asked her to repeat what she had said a few minutes earlier. After trying to dodge for a few minutes, she again said that girls who wear short dresses, or those who are rebels, should be raped. When a woman asked the elderly lady to apologize to the two young women, she didn’t relent. In fact, when a young woman asked the elderly lady why two-year-old babies are raped (a case in Unnao some months ago was in the national news), and 80-year-old women too, the elderly woman replied that it is probably because they wear nighties instead of sarees. As if sarees had some hidden protective quality. If that were the case, then no woman who wears sarees should ever have been raped, but this is not the case.

I do have to point out that after the video went viral, the elderly lady, nicknamed ‘Aunty’, has also been harassed online, which shows how complex the issue is at some level.

The episode, however, also throws quite a few uncomfortable questions into the air –

a. The woman is a teacher at an educational institution in Bombay/Mumbai. Doesn’t that raise questions about what kind of values she may be imparting to impressionable minds there?

b. While I’m perhaps one of the most fashion-illiterate people on the globe, even I know that Bombay/Mumbai is the fashion capital of India. All of India’s top fashion designers have their boutiques there. Even Parisian fashionistas have designer boutiques in Mumbai. If you see a collection on a ramp walk in Paris, you are sure to get the same or a similar thing in one of the top boutiques soon. If you are on a lower budget and a design or two becomes popular, you may have to wait a little longer, but you will find it at roadside stalls within a few weeks, at probably 1/10th to 1/100th of the boutique price. In such a scenario, does she not see what people are wearing in Bombay/Mumbai?

c. The third thing: by this logic, women tourists from outside India should not be allowed in, or should be told, “sorry, we cannot protect you”. We should also ban all the beaches, and ban swimming too, because women will come in one-piece or two-piece swimsuits, which could also be “alluring”.

Apart from breaching a person’s fundamental rights, this also contravenes another idea that was being floated in light of the Sri Lanka attacks:

Ban the Burqa

An Indian right-wing party had called for a ‘Ban the Burqa’, the burqa being the head covering worn by some Muslim women, though it later backpedalled a bit, calling the demand a ‘personal opinion’. The contradiction was neatly illustrated by a Twitter user who goes by the name SardarPatelbannedRSS –

Message to Women of India by men in politics:
1. Cover yourself up if you don’t want to be raped (‘coz we suck at stopping the men from raping).
2. But expose yourself to prevent terrorism (‘coz we really suck at stopping terrorism).

In fact, there are and were many threads on Twitter which try to say how great Hinduism is while whitewashing how the Hindu Code Bill came about. While I didn’t go into all the threads, in at least one I shared that it wasn’t as easy as was being claimed. It took Dr. Ambedkar and Pandit Jawaharlal Nehru 15 years, and the issue came to such a head that Dr. Ambedkar resigned from his post. In fact, there is an Oxford scholarship book which examines the whole thing in much detail; I don’t have much information on it apart from the URL.

The CJI Sexual Harassment Case

In the last few days this has also been shaking the judiciary. From what I could understand of the case filed, a junior employee apparently filed a sexual harassment complaint against the CJI. Instead of using the Vishaka guidelines, which were framed after the Bhanwari Devi gang rape case became a media sensation, and which have since been superseded by the POSH Act, the highest judiciary in the land chose not to apply them at all. Due to this, the complainant was forced to take back her complaint. She was supported in her decision to withdraw it by around 300 women who are part of the legal fraternity.

In the end, one is pressed to ask the question: is India no country for women? 😦

02 May, 2019 09:51PM by shirishag75

hackergotchi for Romain Perier

Romain Perier

My work on Debian (April 2019)

Hi,

This is a summary of what I have done in April 2019 in Debian.

Changes

  • I have uploaded raspi3-firmware 1.20190215-2 to sid
  • I have bumped raspi3-firmware to 1.20190401-1
  • I have bumped the preempt_rt kernel to 4.19.37-rt19 in sid
  • I have bumped the experimental kernel to 5.0.7
  • I have bumped the experimental preempt_rt kernel to 5.0.7-rt5
  • I have bumped the experimental kernel to 5.0.8
  • I have bumped the experimental kernel to 5.0.9
  • I have bumped the experimental kernel to 5.0.10

Issues

  • I have removed extra binary files from the orig tarball in raspi3-firmware; this closes #924315
  • I have enabled support for the coreboot memconsole in kernel 5.0.x. It has been backported into sid and should be part of buster. This closes bug #872069

02 May, 2019 06:21PM by Romain Perier (noreply@blogger.com)

hackergotchi for Joachim Breitner

Joachim Breitner

Drawing foldl and foldr

Often, when someone wants to explain the difference between a left-fold and a right-fold, i.e. foldl and foldr in Haskell, you see a picture like the following:

foldl and foldr

This is taken from the recently published and very nice “foldilocks” tutorial by Ayman Nadeem, but I have seen similar pictures before.

I always thought that something was not quite right about them, in particular the foldr one. I mean, they are correct, and while the foldl picture clearly conveys the right intuition, the foldr one doesn’t quite: it looks as if the computer would fast-forward to the end of the list and only then start processing it. But that does not capture the essence of foldr, which also starts at the beginning of the list, applying its combining function lazily.

And therefore, this is how I would draw this graph:

foldl and foldr

This way (at least to people from a left-to-right, top-to-bottom culture), it becomes more intuitive that with foldr you are first looking at an application of the combinator to the first element, and then possibly at more.
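
The laziness point can also be made directly in code. Here is a small, self-contained example (my own, not taken from the tutorial): because foldr applies its combining function lazily, starting from the front of the list, it can stop early, even on an infinite list, while the same fold written with foldl would have to reach the end of the list before producing anything:

    -- keep the positive prefix of a list; the combinator is lazy in its
    -- second argument, so foldr never looks past the first non-positive
    -- element
    takeWhilePos :: [Int] -> [Int]
    takeWhilePos = foldr (\x rest -> if x > 0 then x : rest else []) []

    main :: IO ()
    main = print (takeWhilePos ([5, 3, 1, -2] ++ [1 ..]))  -- prints [5,3,1]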

02 May, 2019 06:10PM by Joachim Breitner (mail@joachim-breitner.de)