March 19, 2017

Petter Reinholdtsen

Free software archive system Nikita now able to store documents

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, my first personal use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment: it finds existing fonds, series and files, and asks the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.
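
To give a feel for the upload step, here is a rough sketch of it in Python. Everything here is illustrative: the endpoint, header and helper names are invented for the example and are not the actual Noark 5 interface (the real code is in the API tester repository), and PyPDF2 is just one way to read the PDF metadata:

from PyPDF2 import PdfFileReader
import requests

def pdf_title(path):
    # archive-pdf style: take the title from the PDF's own metadata
    with open(path, "rb") as pdf:
        return PdfFileReader(pdf).getDocumentInfo().title

def upload(document_url, path):
    # POST the PDF bytes to the selected file (mappe); document_url
    # would come from the HATEOAS links returned while selecting it
    with open(path, "rb") as pdf:
        response = requests.post(
            document_url, data=pdf,
            headers={"Content-Type": "application/pdf"})
    response.raise_for_status()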

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.
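
The traversal itself is conceptually simple. A minimal sketch of the idea in Python, assuming each resource lists its relations under a "_links" member (the actual member names and layout are defined by the Noark 5 web interface specification):

import requests

def traverse(url, seen=None):
    # follow every HATEOAS link reachable from the service root,
    # visiting each resource exactly once
    if seen is None:
        seen = set()
    if url in seen:
        return seen
    seen.add(url)
    resource = requests.get(url).json()
    for relation in resource.get("_links", {}).values():
        traverse(relation["href"], seen)
    return seen

# the real tester additionally verifies each visited resource
# against the specification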

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, in the absence of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those who want to test the current code base. The API tester is implemented in Python.

19 March, 2017 07:00AM

Clint Adams

Measure once, devein twice

Ophira lived in a wee house in University Square, Tampa. It had one floor, three bedrooms, two baths, a handful of family members, a couple pets, some plants, and an occasional staring contest.

Mauricio lived in Lowry Park North, but Ophira wasn’t allowed to go there because Mauricio was afraid that someone would tell his girlfriend. Ophira didn’t like Mauricio’s girlfriend and Mauricio’s girlfriend did not like Ophira.

Mauricio did not bring his girlfriend along when he and Ophira went to St. Pete Beach. They frolicked in the ocean water, and attempted to have sex. Mauricio and Ophira were big fans of science, so somewhat quickly they concluded that it is impossible to have sex underwater, and absconded to Ophira’s car to have sex therein.

“I hate Mauricio’s girlfriend,” Ophira told Amit on the telephone. “She’s not even pretty.”

“Hey, listen,” said Amit. “I’m going to a wedding on Captiva.”

“Oh, my family used to go to Captiva every year. There’s bioluminescent algae and little crabs and stuff.”

“Yeah? Do you want to come along? You could pick me up at the airport.”

“Why would I want to go to a wedding?”

“Well, it’s on the beach and they’re going to have a bouncy castle.”

“A bouncy castle‽ Are you serious?”

“Yes.”

“Well, okay.”

Amit prepared to go to the wedding and Ophira became terse then unresponsive. After he landed at RSW, he called Ophira, but instead of answering the phone she startled and fell out of her chair. Amit arranged for other transportation toward the Sanibel Causeway. Ophira bit her nails for a few hours, then went to her car and drove to Cape Coral.

Ophira cruised around Cape Coral for a while, until she spotted a teenager cleaning a minivan. She parked her car and approached him.

“Whatcha doing?” asked Ophira, pretending to chew on imaginary gum.

The youth slid the minivan door open. “I’m cleaning,” he said hesitantly.

“Didn’t your parents teach you not to talk to strangers? I could do all kinds of horrible things to you.”

They conversed for a bit. She recounted a story of her personal hero, a twelve-year-old girl who seduced and manipulated older men into ruin. She rehashed the mysteries of Mauricio’s girlfriend. She waxed poetic on her love of bouncy castles. The youth listened, hypnotized.

“What’s your name, kid?” Ophira yawned.

“Arjun,” he replied.

“How old are you?”

Arjun thought about it. “15,” he said.

“Hmm,” Ophira stroked her chin. “Can you sneak me into your room so that your parents never find out about it?”

Arjun’s eyes went wide.

MEANWHILE, on Captiva Island, Amit had learned that even though the Tenderly had multiple indoor jacuzzis, General Fitzpatrick and Mrs. Fitzpatrick had decided it prudent to have sex in the hot tub on the deck; that the execution of this plan had somehow necessitated a lengthy cleaning process before the hot tub could be used again; that that’s why workmen were cleaning the hot tub; and that the Fitzpatrick children had gotten General Fitzpatrick and Mrs. Fitzpatrick to agree to not do that again, with an added suggestion that they not be seen doing anything else naked in public.

A girl walked up to Amit. “Hey, I heard you lost your plus-one. Are you here alone? What a loser!” she giggled nervously, then stared.

“Leave me alone, Darlene,” sighed Amit.

Darlene’s face reddened as she spun on her heels and stormed over to Lisette. “Oh my god, did you see that? I practically threw myself at him and he was abusive toward me. He probably has all the classic signs of being an abuser. Did you hear about that girl he dated in Ohio? I bet I know why that ended.”

“Oh really?” said Lisette distractedly, looking Amit up and down. “So he’s single now?”

Darlene glared at Lisette as Amit wandered back outside to stare at the hot tub.

“Hey kid,” said Ophira, “bring me some snacks.”

“I don’t bring food into my room,” said Arjun. “It attracts pests.”

“Is that what your parents told you?” scoffed Ophira. “Don’t be such a wuss.”

Three minutes later, Ophira was finishing a bag of paprika puffs. “These are great, Arjun! Where do you get these?”

“My cousin sends them from Europe,” he explained.

“Now get me a diet soda.”

Amit strolled along the beach, then yelped. “What’s biting my legs?” he cried out.

“Those are sand fleas,” said Nessarose.

“What are sand fleas?” asked Amit incredulously.

Nessarose rolled her eyes. “Stop being a baby and have a drink.”

After the sun went down, Amit began to notice the crabs, and this made him drink more.

When everyone was soused, General Fitzpatrick announced that they were going for a swim in the Gulf, in direct contravention of safety guidelines. Most of the guests were wise enough to refuse, but an eightsome swam out, occasionally stopping to slap the algae, but continuing until they reached the sandbar that General Fitzpatrick correctly claimed was there.

Then screams echoed through the night as all the jellyfish attacked everyone invading their sandbar.

The crestfallen swimming party eventually made it back to shore.

“Pee on the jellyfish sting,” commanded Nessarose. “It’s the best cure.”

“No!” shouted General Fitzpatrick’s daughter. “Urine makes it worse.”

Things quickly escalated from Nessarose and General Fitzpatrick’s daughter screaming at each other to the beach dividing into three factions: those siding with Nessarose, those siding with General Fitzpatrick’s daughter, and those who had no idea what was going on. General Fitzpatrick had no interest in any of this, and went straight to bed.

“It’s getting late, kid,” said Ophira. “I’m taking your bed.”

“What?” squeaked Arjun.

“Look,” said Ophira, “your bed is small and there isn’t room for both of us. You may sleep on the floor if you’re quiet and don’t bother me.”

“What?” squeaked Arjun.

“Are you deaf, kid?” Ophira grunted and then went to bed.

Arjun blinked in confusion, then tried to fall asleep on the floor, without much success.

Ophira got up in the morning and said, “Before I go, I want to teach you a valuable lesson.”

“What?” groaned Arjun, getting to his feet.

“You should be careful talking to strangers. Now, I told you that I could do horrible things to you, so this is not my fault; it’s yours,” she announced, then sucker-punched him in the gut.

Ophira climbed out the window as Arjun doubled over.

As the ceremony began, only a small minority of the wedding party was visibly suffering from jellyfish stings, which may or may not have helped with ignoring the sand fleas.

The ceremony ended shortly thereafter, and now that marriage had been accomplished, everyone turned their attention to food and drink and swimming less irresponsibly than the night before. Guests that needed to return home sooner departed in waves and Amit started to appreciate the more peaceful environment.

He heard the deck door slide open behind him and turned his attention away from the hot tub.

“Hey, mofo,” Ophira shouted as she strode stylishly out onto the deck. “Where’s this bouncy castle?”

Amit blinked in surprise. “That was yesterday. You missed it.”

“Oh,” she frowned. “So I met this South Slav guy with a really sexy forehead, and I need some advice. I don’t know if I should call him or wait.”

Amit pointed to the hot tub and told her the story of General Fitzpatrick and Mrs. Fitzpatrick and the hot tub.

“What?” said Ophira. “How could they have sex underwater?”

“What do you mean?” asked Amit.

“Well, it’s impossible,” she replied.

Posted on 2017-03-19
Tags: mintings

19 March, 2017 04:38AM

March 18, 2017

Vincent Sanders

A rose by any other name would smell as sweet

Often I end up dealing with code that works but might not be of the highest quality. While quality is subjective, I like to use the idea of "code smell" to convey what I mean: a list of indicators that, taken together, help to identify code that might benefit from some improvement.

Such smells may include:
  • Complex code lacking comments on intended operation
  • Code lacking API documentation comments especially for interfaces used outside the local module
  • Not following style guide
  • Inconsistent style
  • Inconsistent indentation
  • Poorly structured code
  • Overly long functions
  • Excessive use of pre-processor
  • Many nested loops and control flow clauses
  • Excessive numbers of parameters
I am most certainly not alone in using this approach, and Fowler et al. have covered this subject in the literature much better than I can here. One point I will raise, though, is that some programmers dismiss code that exhibits these traits as "legacy" and immediately suggest a fresh implementation. There are varying opinions on when a rewrite is the appropriate solution, ranging from never to always, but in my experience making the old working code smell nice is almost always less effort and risk than a rewrite.

Tests

When I come across smelly code, and I decide it is worthwhile improving it, I often discover that the biggest smell is lack of test coverage. Do remember this is just one code smell and on its own might not be indicative; in my experience smelly code seldom has effective test coverage, while fresh code often does.

Test coverage is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality. Both Fowler and Marick have written about this, but suffice it to say that for a developer test coverage is a useful tool which should not be misapplied.

Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that the interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.

Because of this I have changed my approach to such refactoring: I start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target, but it is useful and may be extended later if desired.

It is widely known, and equally widely ignored, that for maximum effectiveness unit tests must be run frequently and developers must take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources, both to implement and maintain, which might be better spent elsewhere.

For projects I contribute to frequently, I try to ensure that the CI system is running the coverage target, and hence the unit tests, which automatically ensures any test-breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to the developers to aid in spotting areas with inadequate tests.

Example

A short example will help illustrate my point. When a web browser receives an object over HTTP, the server can supply a MIME type in a content-type header that helps the browser interpret the resource. However, this meta-data is often problematic (sorry, that should read "a misleading lie"), so the actual content must be examined to get a better answer for the user. This is known as MIME sniffing, and of course there is a living specification.

The source code that provides this API (linked to rather than included, for brevity) has a few smells:
  • Very few comments of any type
  • The API is not fully documented in its header
  • A lot of global context
  • Local static strings which should be in the global string table
  • Pre-processor use
  • Several long functions
  • Exposed API has many parameters
  • Exposed API uses complex objects
  • The git log shows the code has not been significantly updated since its implementation in 2011, while the spec has been.
  • No test coverage
While some of these are obvious, the non-use of the global string table and the API complexity needed detailed knowledge of the codebase to spot, which just highlights how subjective the sniff test can be. There is also one huge air freshener in all of this, which definitely comes from experience, and that is the module's author. Their name at the top of this would ordinarily be cause for me to move on, but I needed an example!

The first thing to check is the API use:

$ git grep -i -e mimesniff_compute_effective_type --or -e mimesniff_init --or -e mimesniff_fini
content/hlcache.c: error = mimesniff_compute_effective_type(handle, NULL, 0,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/hlcache.c: error = mimesniff_compute_effective_type(handle,
content/mimesniff.c:nserror mimesniff_init(void)
content/mimesniff.c:void mimesniff_fini(void)
content/mimesniff.c:nserror mimesniff_compute_effective_type(llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_compute_effective_type(struct llcache_handle *handle,
content/mimesniff.h:nserror mimesniff_init(void);
content/mimesniff.h:void mimesniff_fini(void);
desktop/netsurf.c: ret = mimesniff_init();
desktop/netsurf.c: mimesniff_fini();

This immediately shows me that this API is used in only a very small area. This is often not the case, but the general approach still applies.

After a little investigation, the usage is effectively that the mimesniff_init API must be called before the mimesniff_compute_effective_type API, and that mimesniff_fini releases the initialised resources.
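
Expressed as code, the contract looks like the following. This is a sketch using plain assert() rather than the project's actual test harness, and it assumes the usual NSERROR_OK success value; the computation call is only hinted at, since its full parameter list is not shown above:

#include <assert.h>

#include "content/mimesniff.h"

int main(void)
{
	/* initialisation must succeed before any computation */
	assert(mimesniff_init() == NSERROR_OK);

	/* ... calls to mimesniff_compute_effective_type() on
	 * well-behaved inputs would go here ... */

	/* release the initialised resources */
	mimesniff_fini();
	return 0;
}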

A simple test case was added to cover the API; it exercised the behaviour both when init was called before the computation and when it was not, along with some simple tests for a limited number of well-behaved inputs.

By switching to the global string table, the initialisation and finalisation APIs can be removed altogether, along with a large amount of global context and pre-processor macros. This single change removes a lot of smell from the module and raises test coverage, both because the global string table already has good coverage and because there are now many fewer lines and conditionals to check in the mimesniff module.

I stopped the refactor at this point, but were this more than an example I probably would have:
  • made the compute_effective_type interface simpler, with fewer and simpler parameters
  • ensured a solid set of test inputs
  • examined using a fuzzer to get a better test corpus
  • added documentation comments
  • updated the implementation to the 2017 specification

Conclusion

The approach examined here reduces the smell of code in an incremental, testable way to improve the codebase going forward. This is mainly necessary on larger, complex codebases, where technical debt and bit-rot are real issues that can quickly overwhelm a project if not kept in check.

This technique is subjective, but it helps a programmer to quantify and examine a piece of code in a structured fashion. However, it is only a tool and should not be over-applied nor used as a metric to proxy for code quality.

18 March, 2017 01:01PM by Vincent Sanders (noreply@blogger.com)

March 17, 2017

Shirish Agarwal

Science Day at GMRT, Khodad 2017

The whole team posing at the end of day 2

The above picture is a blend of the two communities, the FOSS community and Mozilla India. And unless you were there, you wouldn’t know who is from which community, which is what FOSS is all about. But as always, I’m getting a bit ahead of myself.

Akshat, who works at NCRA as a programmer (the standing guy on the left), shared with me in January this year that this year too we should have two stalls, the FOSS community and Mozilla India stalls, next to each other. While we had the banners, we were missing stickers and flyers. Funds were and are always an issue, and this year too it would have been emptier if we hadn’t had some money saved from last year’s MiniDebConf 2016 in Mumbai. Our major expenses included printing stickers, stationery and flyers, which came to around INR 5000/-, and a couple of LCD TV monitors, which came to around INR 2k/- as rent. All the labour was voluntary in nature, with both me and Akshat easily spending up to 100 hours before the event. Next year, we want to raise around INR 10-15k so we can buy 1 or 2 LCD monitors and not have to think about funds for the next couple of years. How we will do that, I have no idea at the moment.

Printing leaflets

Me and Akshat did all the printing and stationery runs, and hence I had not been using my lappy for about 3-4 days.

Come the evening before the event, and the laptop would not start. Coincidentally or not, a few months back, and even at last year’s DebConf, people had commented on IBM/Lenovo’s obsession with proprietary power cords and adaptors. I hadn’t given it much thought, but when I got no power even after putting it on AC power for 3-4 hours, I looked it up on the web and saw that the power cords and power adaptors are all different, even on the T440 and even among existing models. In fact I couldn’t find mine, hence sharing it via the pictures below.

thinkpad power cord male

thinkpad power adaptor female

I knew/suspected that ThinkPads would be rare where I was going; it would be rarer still to find the exact power cord, and I was unsure whether it was the power cord at fault, or the adaptor, or whatever goes for the SMPS in a laptop, or the memory or motherboard/CPU itself. I did look up the documentation at support.lenovo.com and was surprised at the extensive documentation Lenovo has for remote troubleshooting.

I did the usual: take out the battery, put it back in, twiddle with the little hole in the bottom of the laptop, try to switch on without the battery on AC mains, try to switch on with battery power only, but nothing worked. A couple of hours had gone by, and with a resigned thought I went to bed, convincing myself that anyway it’s good I am not taking the lappy, as it is extra-dusty there and who needs a dead laptop anyway.

Update – After the event was over, I did contact Lenovo support, and within a week, with one visit from a service engineer, he was able to identify that a faulty cable was to blame and not the other things I was afraid of. Another week went by and Lenovo replaced the cable. Going by the service standards I have seen from other companies, Lenovo deserves a gold star here for the prompt service they provided. I will probably end up subscribing to their extended 2-year warranty service when my existing 3-year warranty is about to be over.

The next day, I woke up early in the morning; two students from the COEP hostel were volunteering, and we made our way to NCRA, Pune University Campus. Ironically, though we were under the impression that we would be the late arrivals, it turned out we were the early birds. 5-10 minutes passed by, and soon enough we were joined by Aniket. We hadn’t met each other for a while, so it was good to catch up. Then slowly other people started coming in, and around 07:10-07:15 we started for GMRT, Khodad.

Now I had been curious, as I had been hearing for years that the Pune-Nashik NH-50 highway would be concreted and widened to six lanes, but the experience was below par. I came back and realized the proposal has now been pushed back to 2020.

From the Mozilla team, only Aniket was with us; the rest of the group was coming straight from Nashik. Interestingly, all six people who came, came on bikes, which depending upon how you look at it was either brave or stupid. Travelling on bikes on Indian highways, you either have to be brave or stupid or both; we have more than enough ‘accidents’ due to the quality of road construction, road design, lane-changing drivers and many other issues. This is probably not the place for it, hence I will use some other blog post to rant about that.

We reached around 10:00 hrs IST and hung around till lunch, as Akshat had all the marketing material, monitors etc. The only things we had were a couple of lappies and a couple of SBCs, an RPi 3 and a BBB.

Aarti Kashyap sharing something about SBC

Our find for the event was Aarti Kashyap, whom you can see above. She is a third-year student at COEP and one of the rare people who chose to interact with hardware rather than software. For the last several years, we have been trying, successfully and unsuccessfully, to get more Indian women and girls interested in technology. It is a vicious circle: until a girl/woman volunteers, we are unable to share our knowledge to the extent we can, which leads them to not have much interest in FOSS or even technology in general.

While there are groups like Django Girls, PyLadies and Rails Girls, and even Outreachy, which try to motivate girls to get into computing, it’s a long road ahead.

We are short of both funds and ideas as to how to motivate more girls to get into computing and then to get into playing with hardware. I don’t know where to start and end for whoever wants to play with hardware: from SBCs and routers to blade servers, the sky is the limit. Again, this probably isn’t the place for it; we can chew on it more in some other blog post.

This year, we had a lowish turnout due to the fact that the first paper of the 12th board exams was on the day we opened. So instead of 20-25k, we probably had 5-7k fewer people pass through. There were two or three things we were showing: Debian on one of the systems, and the output from the SBCs on the other monitor, but the glare kept hitting the monitors.

The organizers had done exemplary work over last year: they had taped the carpets to the ground so there was hardly any dust moving around. However, I wished the organizers had taken the pains to have two cloth roofs over our heads instead of just one; the other roof could be, say, 2 feet up. This would have done two things –

a. It probably would have cooled the place a bit more, and

b. We would have got diffused sunlight, which would have lessened the glare and reflection the LCDs kept throwing back. At times we also got people to come to our side, as can be seen in Aarti’s photo above.

If these improvements can be made for next year, everybody in our ‘pandal’ would benefit, not just us and Mozilla; that is around 10-15 organizations within the same temporary structure.

Of course, it depends very much on the budget they are able to have and the people who are executing; we can just advise.

The other thing which had been missing last year and this year is writing about Single Board Computers in Marathi. If we are to promote them as something to replace a computer, or as something for a younger brother/sister to learn computing on at a lower cost, we need leaflets written in the local language to be more effective. And this needs to be in the language and mannerisms that people in that region understand. India, as people might have experienced, is a dialect-prone country, which means that every 2-5 km the way the language is spoken differs from anywhere else. The Marathi spoken by somebody who has lived his whole life in Ravivar Peth and that of a person who has lived in, say, Kothrud are different. The same goes for any place, and this place, Khodad, Narayangaon, would have its own dialect, its own mini-codespeak.

Just to share, we did have one in English, but it would have been a vast improvement if we could do it in the local language. Maybe we can discuss this and ask for help from people.

Outside, Looking in

Mozillians helping FOSS community and vice-versa

What had been interesting about the whole journey were the new people who were bringing all their passion and creativity to the fore. From the Mozilla community, we had Akshay, who is supposed to be a wizard at graphics, animation and editing, anything to do with the visual medium. He shared some of the work he had done and also shared a bit about how Blender works with people who wanted to learn about it.

Mayur, whom you see in the picture, is pointing out something about FOSS, and this was the culture that we strove to have. I know and love and hate the browser, but I haven’t been able to fathom the recklessness Mozilla has shown over the last few years, which has just been one misadventure after another.

For instance, MozStumbler was an effort which I thought would go places. From what little I understood, it served/serves as a user-friendly interface for a potential user while still sharing all the data with OSM. Mozilla seems/seemed to have had a fatalistic take on it, as it provided initial funding but then never fully committed to the project.

Later at night, we had the whole ‘free software’ and ‘open-source’ sharing, where I tried to emphasize that without free software, the term ‘open-source’ would not have come into existence. We talked and talked, and somewhere around 02:00 I slept; the next day was an extension of the first day itself, where we ribbed each other good-naturedly and still shared whatever we could with each other.

I do hope that we continue this tradition for great many years to come and engage with more and more people every passing year.


Filed under: Miscellaneous Tagged: #budget, #COEP, #volunteering, #debian, #Events, #Expenses, #mozstumbler, #printing, #SBC's, #Science Day 2017, #thinkpad cable issue, FOSS, mozilla

17 March, 2017 07:19PM by shirishag75

Antonio Terceiro

Patterns for Testing Debian Packages

At the end of 2016 I had the pleasure of attending the 11th Latin American Conference on Pattern Languages of Programs, a.k.a. SugarLoaf PLoP. PLoP is a series of conferences on Patterns (as in “Design Patterns”), a subject that I appreciate a lot. Each of the PLoP conferences but the original main “big” conference has a funny name. SugarLoaf PLoP is called that way because its very first edition was held in Rio de Janeiro, so the organizers named it after a very famous mountain in Rio. The name stuck even though a long time has passed since it was last held in Rio. 2016 was actually the first time SugarLoaf PLoP was held outside of Brazil, finally justifying the “Latin American” part of its name.

I was presenting a paper I wrote on patterns for testing Debian packages. The Debian project funded my travel expenses through the generous donations of its supporters. PLoPs are very fun conferences with a relaxed atmosphere, and it is amazing how many smart (and interesting!) people gather together for them.

My paper is titled “Patterns for Writing As-Installed Tests for Debian Packages”, and has the following abstract:

Large software ecosystems, such as GNU/Linux distributions, demand a large amount of effort to make sure all of their components work correctly individually, and also integrate correctly with each other to form a coherent system. Automated Quality Assurance techniques can prevent issues from reaching end users. This paper presents a pattern language originated in the Debian project for automated software testing in production-like environments. Such environments are closer in similarity to the environment where software will be actually deployed and used, as opposed to the development environment under which developers and regular Continuous Integration mechanisms usually test software products. The pattern language covers the handling of issues arising from the difference between development and production-like environments, as well as solutions for writing new, exclusive tests for as-installed functional tests. Even though the patterns are documented here in the context of the Debian project, they can also be generalized to other contexts.

In practical terms, the paper documents a set of patterns I have noticed in the last few years, while I have been pushing the Debian Continuous Integration project. It should be an interesting read for people interested in the testing of Debian packages in their installed form, as done with autopkgtest. It should also be useful for people from other distributions interested in the subject, as the issues are not really Debian-specific.

I have recently finished the final version of the paper, which should be published in the ACM Digital Library at any point now. You can download a copy of the paper in PDF. The source is also available, if you are into Markdown, LaTeX, makefiles and that sort of thing.

If everything goes according to plan, I should be presenting a talk on this at the next Debconf in Montreal.

17 March, 2017 01:23AM

March 16, 2017

Thorsten Glaser

Updates to the last two posts

Someone from the FSF’s licensing department posted an official-looking thing saying they don’t believe GitHub’s new ToS to be problematic with copyleft. Well, my lawyer (not my personal one, nor The MirOS Project’s, but one related to another association, informally) does agree with my reading of the new ToS, and I can point out at least a clause in the GPLv1 (I really don’t have time right now) which says the contrary (but does this mean the FSF generally waives the restrictions of the GPL for anything on GitHub?). I’ll eMail GitHub Legal directly and will try to continue getting this fixed (as soon as I have enough time for it), as I’ll otherwise be forced to force GitHub to remove stuff from me (but with someone else as original author) under GPL, such as… tinyirc and e3.

My dbconfig-common Debian packaging example got a rather hefty upgrade, because dbconfig-common (unlike any other DB schema framework I know of) doesn’t apply the upgrades on a fresh install (and doesn’t automatically put the upgrades into a transaction either) but only upgrades between Debian package versions (which can be funny with backports, but AFAICT that part is handled correctly). I now append the upgrades to the initial-version-as-seen-in-the-source to generate the initial-version-as-shipped-in-the-binary-package (optionally, only if it’s named .in), removing all transaction stuff from the upgrade files and wrapping the whole shit in BEGIN; and COMMIT; after merging. (This should at least not break nōn-PostgreSQL databases and… well, database-like-ish things I cannot test, for obvious (SQLite is illegal, at least in Germany, but potentially worldwide, and then PostgreSQL is the only remaining Open Source database left ;) reasons.)
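
In shell terms, the merge step amounts to something like this (a sketch with made-up file names, not the actual packaging code):

# made-up names: install/pgsql.in is the initial version from the source,
# upgrade/pgsql/* are the per-version upgrade files
{
	echo 'BEGIN;'
	cat install/pgsql.in
	# strip the upgrades' own transaction statements before appending
	sed -e '/^BEGIN;$/d' -e '/^COMMIT;$/d' upgrade/pgsql/*
	echo 'COMMIT;'
} >install/pgsql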

Update: Yes, this does mean that maintainers of databases and webservers should send me patches to make this work with not-PostgreSQL (new install/name.in, upgrade files) and not-Apache-2.2/2.4 (new debian/*/*.conf snippets) to make this packaging example even more generally usable.

Natureshadow has already forked this and made a Python/Flask package from it, so I’ll prod him to provide a similarly versatile hello-python-world example package.

16 March, 2017 11:12PM by MirOS Developer tg (tg@mirbsd.org)

Joey Hess

end of an era

I'm at home downloading hundreds of megabytes of stuff. This is the first time I've been in the position of "at home" + "reasonably fast internet" since I moved here in 2012. It's weird!

Satellite internet dish with solar panels in foreground

While I was renting here, I didn't mind dialup much. In a way it helps to focus the mind and build interesting stuff. But since I bought the house, the prospect of being stuck with only dialup at home became more painful.

While I hope to eventually get on the fiber line that's only a few miles away, I have not convinced that ISP to build out to me yet. Not enough neighbors. So, satellite internet for now.

9.1 dB SNR

speedtest results: 15 megabit down / 4.5 up with significant variation

The dish seems well aligned; speed varies a lot, but is easily hundreds of times faster than dialup. Latency is 2x dialup.

The equipment uses more power than my laptop, so with the current solar panels, I anticipate using it only 6-9 months of the year. So I may be back to dialup most days come winter, until I get around to adding more PV capacity.

It seems very cool that my house can capture sunlight and use it to beam signals 20 thousand miles into space. Who knows, perhaps there will even be running water one day.

Satellite dish

16 March, 2017 10:14PM

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, February 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 154 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 3 hours (out of 13h allocated, thus keeping 10 extra hours for March).
  • Balint Reczey did 13 hours (out of 13 hours allocated + 1.25 hours remaining, thus keeping 1.25 hours for March).
  • Ben Hutchings did 19 hours (out of 13 hours allocated + 15.25 hours remaining, he gave back the remaining hours to the pool).
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 12.5 hours (out of 13 hours allocated, thus keeping 0.5 hour for March).
  • Guido Günther did 8 hours.
  • Hugo Lefeuvre did nothing and gave back his 13 hours to the pool.
  • Jonas Meurer did 14.75 hours (out of 5 hours allocated + 9.75 hours remaining).
  • Markus Koschany did 13 hours.
  • Ola Lundqvist did 4 hours (out of 13h allocated, thus keeping 9 hours for March).
  • Raphaël Hertzog did 3.75 hours (out of 10 hours allocated, thus keeping 6.25 hours for March).
  • Roberto C. Sanchez did 5.5 hours (out of 13 hours allocated + 0.25 hours remaining, thus keeping 7.75 hours for March).
  • Thorsten Alteholz did 13 hours.

Evolution of the situation

The number of sponsored hours increased slightly thanks to Bearstech and LiHAS joining us.

The security tracker currently lists 45 packages with a known CVE, and the dla-needed.txt file lists 39. The number of open issues continued its slight increase; this time it can be explained by the fact that many contributors did not spend all their allocated hours (for various reasons). There’s nothing worrisome at this point.

Thanks to our sponsors

New sponsors are in bold.

16 March, 2017 01:25PM by Raphaël Hertzog

Enrico Zini

Django signing signs, does not encrypt

As it says in the documentation, django.core.signing signs, and does not encrypt.

Even though signing.dumps creates obscure-looking tokens, they are not encrypted, and here's a proof:

>>> from django.core import signing
>>> a = signing.dumps({"action":"set-password", "username": "enrico", "password": "SECRET"})
>>> from django.utils.encoding import force_bytes
>>> print(signing.b64_decode(force_bytes(a.split(":",1)[0])))
b'{"action":"set-password","password":"SECRET","username":"enrico"}'

I'm writing it down so one day I won't be tempted to think otherwise.

16 March, 2017 11:01AM

Wouter Verhelst

Codes of Conduct

These days, most large FLOSS communities have a "Code of Conduct"; a document that outlines the acceptable (and possibly not acceptable) behaviour that contributors to the community should or should not exhibit. By writing such a document, a community can arm itself more strongly in the fight against trolls, harassment, and other forms of antisocial behaviour that is rampant on the anonymous medium that the Internet still is.

Writing a good code of conduct is no easy matter, however. I should know -- I've been involved in such a process twice; once for Debian, and once for FOSDEM. While I was the primary author for the Debian code of conduct, the same is not true for the FOSDEM one; I was involved, and I did comment on a few early drafts, but the core of FOSDEM's current code was written by another author. I had wanted to write a draft myself, but then this one arrived and I didn't feel like I could improve it, so it remained.

While it's not easy to come up with a Code of Conduct, there (luckily) are others who walked this path before you. On the "geek feminism" wiki, there is an interesting overview of existing Open Source community and conference codes of conduct, and reading one or more of them can provide one with some inspiration as to things to put in one's own code of conduct. That wiki page also contains a paragraph "Effective codes of conduct", which says (amongst other things) that a good code of conduct should include

Specific descriptions of common but unacceptable behaviour (sexist jokes, etc.)

The attentive reader will notice that such specific descriptions are noticeably absent from both the Debian and the FOSDEM codes of conduct. This is not because I hadn't seen the above recommendation (I had); it is because I disagree with it. I do not believe that adding a list of "don't"s to a code of conduct is a net positive to it.

Why, I hear you ask? Surely having a list of things that are not welcome behaviour is a good thing, which should be encouraged? Surely such a list clarifies the kind of things your community does not want to see? Having such a list will discourage that bad behaviour, right?

Well, no, I don't think so. And here's why.

Enumerating badness

A list of things not to do is like a virus scanner. For those not familiar with these: on some operating systems, there is a specific piece of software that everyone recommends you run, which checks if particular blobs of data appear in files on the disk. If they do, then these files are assumed to be bad, and are kicked out. If they do not, then these files are assumed to be not bad, and are left alone (for the most part).

This works if we know all the possible types of badness; but as soon as someone invents a new form of badness, suddenly your virus scanner is ineffective. Additionally, it also means you're bound to continually have to update your virus scanner (or, as the case may be, code of conduct) to a continually changing hostile world. For these (and other) reasons, enumerating badness is listed as number 2 in security expert Markus Ranum's "six dumbest ideas in computer security," which was written in 2005.

In short, a list of "things not to do" is bound to be incomplete; if the goal is to clarify the kind of behaviour that is not welcome in your community, it is usually much better to explain the behaviour that is wanted, so that people can infer (by their absence) the kind of behaviour that isn't welcome.

This neatly brings me to my next point...

Black vs White vs Gray.

The world isn't black-and-white. We could define a list of welcome behaviour -- let's call that the whitelist -- or a list of unwelcome behaviour -- the blacklist -- and assume that the work is done after doing so. However, that wouldn't be true. For every item on either the white or black list, there are going to be a number of things that fall somewhere in between. Let's say those things are on the "gray" list. They're not the kind of outstanding behaviour that we would like to see -- they'd be on the white list if they were -- but they're not really obvious CoC violations, either. You'd prefer it if people didn't do those things, but it'd be a stretch to say they're jerks if they do.

Let's clarify that with an example:

Is it a code of conduct violation if you post links to pornography websites on your community's main development mailinglist? What about jokes involving porn stars? Or jokes that denigrate women, or that explicitly involve some gender-specific part of the body? What about an earring joke? Or a remark about a user interacting with your software, where the women are depicted as not understanding things as well as men? Or a remark about users in general, that isn't written in a gender-neutral manner? What about a piece of self-deprecating humor? What about praising someone else for doing something outstanding?

I'm sure most people would agree that the first case in the above paragraph should be a code of conduct violation, whereas the last case should not be. Some of the items in the list in between are clearly on one or the other side of the argument, but for others the jury is out. Let's say those are in the gray zone. (Note: no, I did not mean to imply that the list is ordered in any way ;-)

If you write a list of things not to do, then by implication (because you didn't mention them), the things in the gray area are okay. This is especially problematic when it comes to things that are borderline blacklisted behaviour (or that should be blacklisted but aren't, because your list is incomplete -- see above). In such a situation, you're dealing with people who are jerks but can argue about it because your definition of jerk didn't cover their behaviour. Because they're jerks, you can be sure they'll do everything in their power to waste your time about it, rather than improving their behaviour.

In contrast, if you write a list of things that you want people to do, then by implication (because you didn't mention it), the things in the gray area are not okay. If someone slips and does something in that gray area anyway, then that probably means they're doing something borderline not-whitelisted, which would be mildly annoying but doesn't make them jerks. If you point that out to them, they might go "oh, right, didn't think of it that way, sorry, will aspire to be better next time". Additionally, the actual jerks and trolls will have been given fewer tools to argue about borderline violations (because the border of your code of conduct is far, far away from jerky behaviour), so less time is wasted for those of your community who have to police it (yay!).

In theory, the result of a whitelist is a community of people who aspire to be nice people, rather than a community of people who simply aspire to be "not jerks". I know which kind of community I prefer.

Giving the wrong impression

During one of the BOFs that were held while I was drafting the Debian code of conduct, it was pointed out to me that a list of things not to do may give people the impression that all of the things on that list actually happen in the community in question. If that is true, then a very long list may produce the impression that the given community is one with a lot of problems.

Instead, a whitelist-based code of conduct will provide the impression that you're dealing with a healthy community. Whether that is the case obviously depends on more factors than just the code of conduct itself, but it will put people in the right mindset for this to become something of a self-fulfilling prophecy.

Conclusion

Given all of the above, I think a whitelist-based code of conduct is a better idea than a blacklist-based one. Additionally, in the few years since the Debian code of conduct was accepted, it is my impression that the general atmosphere in the Debian project has improved, which would seem to confirm that the method works (but YMMV, of course).

At any rate, I'm not saying that blacklist-based codes of conduct are useless. However, I do think that whitelist-based ones are better; and hopefully, you now agree, too ;-)

16 March, 2017 08:02AM

Ben Hutchings

Debian LTS work, February 2017

I was assigned 13 hours of work by Freexian's Debian LTS initiative and carried over 15.25 from January. I worked 19 hours and have returned the remaining 9.25 hours to the general pool.

I prepared a security update for the Linux kernel and issued DLA-833-1. However, I spent most of my time catching up with a backlog of fixes for the Linux 3.2 longterm stable branch. I issued two stable updates (3.2.85, 3.2.86).

16 March, 2017 04:44AM

March 15, 2017

Dirk Eddelbuettel

RcppEigen 0.3.2.9.1

A new maintenance release, 0.3.2.9.1, of RcppEigen, still based on Eigen 3.2.9, is now on CRAN and will be going into Debian soon.

This update ensures that RcppEigen and the Matrix package agree on their #define statements for the CholMod / SuiteSparse library. Thanks to Martin Maechler for the pull request. I also added a file src/init.c as now suggested (soon: requested) by the R CMD check package validation.
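
For reference, the skeleton of such a file generally looks like the following; the actual src/init.c also lists the package's .Call entry points in a registration table, which is elided here:

#include <R.h>
#include <Rinternals.h>
#include <R_ext/Rdynload.h>

/* the package's .Call routines would be listed in an R_CallMethodDef
   table and passed as the third argument below; elided for brevity */

void R_init_RcppEigen(DllInfo *dll)
{
    R_registerRoutines(dll, NULL, NULL, NULL, NULL);
    R_useDynamicSymbols(dll, FALSE);
}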

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.2.9.1 (2017-03-14)

  • Synchronize CholMod header file with Matrix package to ensure binary compatibility on all platforms (Martin Maechler in #42)

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 March, 2017 11:40AM

Michal Čihař

Life of free software project

During the last week I've noticed several interesting posts about the challenges of being a free software maintainer. After being active in open source for 16 years, I share many of the feelings described in them, and I can also share how I deal with these things.

First of all let me link some of the other posts on the topic:

I guess everybody involved in some popular free software project knows it - there is much more work to be done than the people behind the project can handle. It really doesn't matter whether it's bug reports, support requests, new features or technical debt; it's simply too much. If you are the only one behind the project, it can feel even more pressing.

There are several approaches to dealing with that, but you have to choose what you prefer and what is going to work for you and your project. I've used all of the approaches mentioned below on some of my projects, but I don't think there is a silver bullet.

Finding more people

Obviously if you can not cope with the work, let's find more people to do it. Unfortunately it's not that easy. Sometimes people come by and contribute a few patches, but it's not easy to turn them into regular contributors. You should encourage them to stay and to care about the part of the project they have touched.

You can try to attract completely new contributors through programs such as Google Summer of Code (GSoC) or Outreachy, but that has its own challenges as well.

With phpMyAdmin we're participating regularly in GSoC (we've only missed last year, as we were not chosen by Google that year) and it indeed helps to bring new people on board. Many of them even stay around the project (currently 3 of 5 phpMyAdmin team members are former GSoC students). But I think this approach really works only for bigger organizations.

You can also motivate people with money. It's an approach not really much used in free software projects, partly because of lack of funding (I'll get to that later) and partly because it doesn't necessarily bring long-term contributors, just cash hunters. I've been using Bountysource for some of my projects (Weblate and Gammu) and so far it mostly works the other way around - if somebody posts a bounty on an issue, it means it's quite important for them to get it fixed, so I use that as an indication for myself. For attracting new developers it never really worked well, even when I tried posting bounties on some easy-to-fix issues where newbies could learn our code base and get paid for it. These issues stayed open for months, and in the end I fixed them myself because they annoyed me.

Don't care too much

I think this is the most important aspect - you simply can never fix all the problems. Let's face it and work accordingly. There can be various levels of not caring. I find it always better to try to encourage people to fix their own problems, but you can't expect a big success rate there, so you might find it not worth the time.

What I currently do:

  • I often ignore direct emails asking for fixing something. The project has a public issue tracker on purpose. Once you solve an issue there, others will have a chance to find it when they face a similar problem. Solving things privately in mail will probably make you look at the same problems again and again.
  • I try to batch process things. It is really easier to stay focused when you work on one project and do not switch contexts. This means people will have to wait until you get to their request, but it also means that you will be able to deal with them much more effectively. This is why free hosting requests for Hosted Weblate get processed once a month.
  • I don't care about the number of unread mails, notifications or whatever. Or rather, I try not to get many of these at all. This is really related to the above: I might do some things once a month (or even less often) and that's still okay. Maybe you're just getting notifications for things you really don't need to be notified about? Do you really need a notification for every new issue? Isn't it better to look at the issue tracker once in a while than to constantly feel the pressure of unread notifications?
  • I don't have to fix every problem. When it seems like something that could just as well be fixed by the reporter, I try to give them guidance on how to dig deeper into the issue. Obviously this can't work in all cases, but getting more people on board always helps.
  • I try to focus on things which can save time in the future. Many issues turn out to be just something unclear, and once you figure that out, spend a few more minutes improving your documentation to cover it. It's quite likely that this will save you time in the future.

If you still can't handle it, you should consider abandoning the project as well. Does it bring you anything other than the frustration of uncompleted work? I know it can be a hard decision; in the end it is your child, but sometimes it's the best thing you can do.

Get paid to do the work

Are you doing a fulltime job and then working on free software at night or on weekends? It can probably work for some time, but unless you find some way to make the two match, you will lack free time to relax and spend with friends or family. There are several options for making them work together.

You can find a job where doing free software is a natural part of it. This worked pretty well for me at SUSE, but I'm sure there are more companies where it will work. It can happen that the job will not cover all your free software activities, but it still helps.

You can also make your project become your employer. It can sometimes be challenging to have volunteers and paid contractors working on one project, but I think it can be handled. Such a setup currently works quite well for phpMyAdmin (we will announce a second contractor soon) and works quite well for me with Weblate as well.

Funding free software projects

Once your project is well funded, you can fix many problems with money. You can pay yourself to do the work, hire additional developers, get better infrastructure or travel to conferences to spread the word about the project. But the question is how to get to the point of being well funded.

There are several crowdfunding platforms which can help you with that (Liberapay, Bountysource Salt, Gratipay or Snowdrift, to mention some). You can also administer the funding yourself or use some legal entity such as Software Freedom Conservancy, which handles this for phpMyAdmin.

But the most important thing is to persuade people and companies to give back. You know there are a lot of companies relying on your project, but how do you make them fund it? I really don't know; I still struggle with this, as I don't want to be too pushy in asking for money, but I'd really like to see them give back.

What does kind of work is giving your sponsors logo/link placement on your website. If your website is well ranked, you can expect to get quite a lot of SEO sponsors, and the question is where to draw the line on what you still find acceptable. Obviously the companies most willing to pay will have nothing to do with what you do; they just want to get the link. The industries you can expect are porn, gambling, binary options and various MFA sites. You will get some legitimate sponsors related to your project as well. We felt we had gone too far with phpMyAdmin last year and we have tightened the rules recently, but the outcome is not yet visible on our website (as we've only limited new sponsors; existing contracts will be honored).

Another option is to monetize your project more directly. You can offer consulting services or provide it as a service (this is what I currently do with Weblate). It really depends on the product whether you can build a customer base on it or not, but certainly this is not something that would work well for all projects.

Thanks for reading this, and I hope it's not too chaotic; I've moved parts back and forth while writing, and I'm afraid it got too long in the end.

Filed under: Debian English Gammu phpMyAdmin SUSE Weblate | 0 comments

15 March, 2017 11:00AM

Bits from Debian

Build Android apps with Debian: apt install android-sdk

In Debian stretch, the upcoming new release, it is now possible to build Android apps using only packages from Debian. This will provide all of the tools needed to build an Android app targeting the "platform" android-23 using the SDK build-tools 24.0.0. Those two are the only versions of "platform" and "build-tools" currently in Debian, but it is possible to use the Google binaries by installing them into /usr/lib/android-sdk.

This doesn't yet cover all of the libraries that are used in apps, like the Android Support libraries, or all of the other myriad libraries that are usually fetched from jCenter or Maven Central. One big question for us is whether and how libraries should be included in Debian. All the Java libraries in Debian can be used in an Android app, but including something like Android Support in Debian would be strange, since it is only useful in an Android app, never for a Debian app.

Building apps with these packages

Here are the steps for building Android apps using Debian's Android SDK on Stretch.

  1. sudo apt install android-sdk android-sdk-platform-23
  2. export ANDROID_HOME=/usr/lib/android-sdk
  3. In build.gradle, set compileSdkVersion to 23 and buildToolsVersion to 24.0.0
  4. run gradle build

The Gradle Android Plugin is also packaged. Using the Debian package instead of the one from online Maven repositories requires a little configuration before running gradle. In the buildscript block (see the sketch after this list):

  • add maven { url 'file:///usr/share/maven-repo' } to repositories
  • use compile 'com.android.tools.build:gradle:debian' to load the plugin
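
Put together, a minimal build.gradle assembled from the steps and bullets above might look something like the following. This is only a sketch: depending on the Gradle version and project layout, the plugin line may need to be declared as classpath rather than compile in the buildscript dependencies.

buildscript {
    repositories {
        // use Debian's local Maven repository instead of jCenter/Maven Central
        maven { url 'file:///usr/share/maven-repo' }
    }
    dependencies {
        // the Debian-packaged plugin uses the literal version string "debian"
        compile 'com.android.tools.build:gradle:debian'
    }
}

apply plugin: 'com.android.application'

android {
    compileSdkVersion 23        // the only platform currently in Debian
    buildToolsVersion '24.0.0'  // the only build-tools currently in Debian

    // optional: disable the problematic :lint tasks mentioned below
    lintOptions {
        abortOnError false
    }
}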

Currently only the target platform of API Level 23 is packaged, so only apps targeting android-23 can be built purely from Debian packages. There are plans to add more API platform packages via backports. Only build-tools 24.0.0 is available, so build scripts need to be modified to use that version of the SDK. Beware that the Lint in this version of the Gradle Android Plugin is still problematic, so running the :lint tasks might not work; they can be turned off with lintOptions.abortOnError in build.gradle. Google binaries can be combined with the Debian packages, for example to use a different version of the platform or build-tools.

Why include the Android SDK in Debian?

While Android developers could develop and ship apps right now using these Debian packages, this is not very flexible, since only build-tools-24.0.0 and the android-23 platform are available. The Debian Android Tools Team is currently not aiming to cover the most common use cases: those are pretty well covered by Google's binaries (apart from their proprietary license), and would probably be the most work for the team to cover. The current focus is on use cases that are poorly covered by the Google binaries, for example where only specific parts of the whole SDK are used. Here are some examples:

  • tools for security researchers, forensics, reverse engineering, etc. which can then be included in live CDs and distros like Kali Linux
  • a hardened APK signing server using apksigner that uses a standard, audited, public configuration of all reproducibly built packages
  • Replicant is a 100% free software Android distribution, so of course they want to have a 100% free software SDK
  • high security apps need a build environment that matches their level of security, the Debian Android Tools packages are reproducibly built only from publicly available sources
  • support for architectures besides i386 and amd64; for example, the Linaro LAVA setup for testing ARM devices of all kinds uses the adb packages on ARM servers to keep their whole testing setup ARM-only
  • dead simple install with strong trust path with mirrors all over the world

In the long run, the Android Tools Team aims to cover more use cases well, and also to build the Android NDK. This will all happen more quickly if there are more contributors on the Android Tools Team! Android is the most popular mobile OS, and it can be 100% free software, like Debian. Debian and its derivatives are among the most popular platforms for Android development. This is an important combination that should only grow more integrated.

Last but not least, the Android Tools Team wants feedback on how this should all work, for example, ideas for how to nicely integrate Debian's Java libraries into the Android gradle workflow. And ideally, the Android Support libraries would also be reproducibly built and packaged somewhere that enforces only free software. Come find us on IRC and/or email! https://wiki.debian.org/AndroidTools#Communication_Channels

15 March, 2017 11:00AM by Hans-Christoph Steiner and Kai-Chung Yan (殷啟聰)

March 14, 2017

Keith Packard

Valve

Consulting for Valve in my spare time

Valve Software has asked me to help work on a couple of Linux graphics issues, so I'll be doing a bit of consulting for them in my spare time. It should be an interesting diversion from my day job working for Hewlett Packard Enterprise on Memory Driven Computing and other fun things.

First thing on my plate is helping support head-mounted displays better by getting the window system out of the way. I spent some time talking with Dave Airlie and Eric Anholt about how this might work and have started on the kernel side of that. A brief synopsis is that we'll split off some of the output resources from the window system and hand them to the HMD compositor to perform mode setting and page flips.

After that, I'll be working out how to improve frame timing reporting back to games from a composited desktop under X. Right now, a game running on X with a compositing manager can't tell when each frame was shown, nor accurately predict when a new frame will be shown. This makes smooth animation rather difficult.

14 March, 2017 07:10PM

John Goerzen

Parsing the GOP’s Health Insurance Statistics

There has been a lot of noise lately about the GOP health care plan (AHCA) and the differences to the current plan (ACA or Obamacare). A lot of statistics are being misinterpreted.

The New York Times has an excellent analysis of some of this. But to pick it apart, I want to highlight a few things:

Many Republicans are touting the CBO’s estimate that, some years out, premiums will be 10% lower under their plan than under the ACA. However, this carries with it a lot of misleading information.

First of all, many are spinning this as if costs would go down. That's not the case. The premiums would still rise — they would just have risen less by the end of the period than under the ACA. That also ignores the immediate premium spike, and the millions thrown out of the insurance marketplace altogether.

Now then, where does this 10% number come from? First of all, you have to understand that older people are substantially more expensive to the health system, and therefore more expensive to insure. The ACA limited the price differential from the youngest to the oldest people, which meant that, in effect, some young people were subsidizing older ones on the individual market. The GOP plan removes that limit. Combined with other changes in subsidies and tax credits, this dramatically increases the cost to older people. For instance, the New York Times article cites a CBO estimate that “the price an average 64-year-old earning $26,500 would need to pay after using a subsidy would increase from $1,700 under Obamacare to $14,600 under the Republican plan.”

They further conclude that these exceptionally high rates would be so unaffordable that older people would simply stop buying insurance on the individual market. This means that the overall risk pool of people in that market is healthier, and therefore the average price is lower.

So, to sum up: the reason that insurance premiums under the GOP plan will rise at a slightly slower rate long-term is that the higher-risk people will be unable to afford insurance in the first place, leaving only the cheaper people to buy in.

14 March, 2017 03:35PM by John Goerzen

Reproducible builds folks

Reproducible Builds: week 98 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday March 5 and Saturday March 11 2017:

Upcoming events

Reproducible Builds Hackathon Hamburg

The Reproducible Builds Hamburg Hackathon 2017, or RB-HH-2017 for short, is a 3-day hacking event taking place in the CCC Hamburg Hackerspace, located inside the Frappant, a collective art space in a historical monument in Hamburg, Germany.

The aim of the hackathon is to spend some days working on Reproducible Builds in every distribution and project. The event is open to anybody interested in working on Reproducible Builds issues in any distro or project, with or without prior experience!

Packages filed

Chris Lamb:

Toolchain development

  • Guillem Jover uploaded dpkg 1.18.23 to unstable, declaring .buildinfo format 1.0 as "stable".

  • James McCoy uploaded devscripts 2.17.2 to unstable, adding support for .buildinfo files to the debsign utility, via patches from Ximin Luo and Guillem Jover.

  • Hans-Christoph Steiner noted that the first reproducibility-related patch in the Android SDK was marked as confirmed.

Reviews of unreproducible packages

39 package reviews have been added, 7 have been updated and 9 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (2)

buildinfo.debian.net development

reproducible-website development

tests.reproducible-builds.org

  • Hans-Christoph Steiner gave a progress report on testing F-Droid: we now have a complete vagrant workflow working in nested KVM! So we can provision a new KVM guest, then package it using vagrant box, all inside of a KVM guest (which is a ProfitBricks build node). So we finally have a working setup on jenkins.debian.net. Next up is fixing bugs in our libvirt snapshotting support.
  • Then Hans-Christoph was also able to enable building of all F-Droid apps in our setup, though this is still work in progress…
  • Daniel Shahaf spotted a subtle error in our FreeBSD sudoers configuration, and as a result the FreeBSD reproducibility results are back.
  • Holger once again adjusted the Debian armhf scheduling frequency, to cope with the ever-increasing number of armhf builds.
  • Mattia spotted a refactoring error which resulted in no maintenance mails for a week.
  • Holger also spent some time on further improving IRC notifications, though there are still improvements to be made.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

14 March, 2017 06:41AM

March 13, 2017

Sean Whitton

Initial views of 5th edition DnD

I’ve been playing in a 5e campaign for around two months now. In the past ten days or so I’ve been reading various source books and Internet threads regarding the design of 5th edition. I’d like to draw some comparisons and contrasts between 5th edition, and the 3rd edition family of games (DnD 3.5e and Paizo’s Pathfinder, which may be thought of as 3.75e).

The first thing I’d like to discuss is that wizards and clerics are no longer Vancian spellcasters. In rules terms, this is the idea that individual spells are pieces of ammunition. Spellcasters have a list of individual spells stored in their heads, and as they cast spells from that list, they cross off each item. Barring special rules for clerics about spontaneously converting prepared spells to healing spells, the only way to add items back to the list is to take a night’s rest. Contrast this with spending points from a pool of energy in order to use an ability to cast a fireball. Then the limiting factor on using spells is having enough points in your mana pool, not having further castings of the spell waiting in memory.

One of the design goals of 5th edition was to reduce the dominance of spellcasters at higher levels of play. The article to which I linked in the previous paragraph argues that this rebalancing requires the removal of Vancian magic. The idea, to the extent that I’ve understood it, is that Vancian magic is not an effective restriction on spellcaster power levels, so it is to be replaced with other restrictions—adding new restrictions while retaining the restrictions inherent in Vancian magic would leave spellcasters crippled.

A further reason for removing Vancian magic was to defeat the so-called “five minute adventuring day”. The combat ability of a party that contains higher-level Vancian spellcasters drops significantly once they’ve fired off their most powerful combat spells. So adventuring groups would find themselves getting into a fight, and then immediately retreating to fully rest up in order to get their spells back. This removes interesting strategic and roleplaying possibilities involving the careful allocation of resources, and continuing to fight as hit points run low.

There are some other related changes. Spell components are no longer used up when casting a spell. So you can use one piece of bat guano for every fireball your character ever casts, instead of each casting requiring a new piece. Correspondingly, you can use a spell focus, such as a cool wand, instead of a pouch full of material components—since the pouch never runs out, there’s no mechanical change if a wizard uses an arcane focus instead. 0th level spells may now be cast at will (although Pathfinder had this too). And there are decent 0th level attack spells, so a spellcaster need not carry a crossbow or shortbow in order to have something to do on rounds when it would not be optimal to fire off one of their precious spells.

I am very much in favour of these design goals. The five minute adventuring day gets old fast, and I want it to be possible for the party to rely on the cool abilities of non-spellcasters to deal with the challenges they face. However, I am concerned about the flavour changes that result from the removal of Vancian magic. These affect wizards and clerics differently, so I’ll take each case in turn.

Firstly, consider wizards. In third edition, a wizard had to prepare and cast Read Magic (the only spell they could prepare without a spellbook), and then set about working through their spellbook. This involved casting the spells they wanted to prepare, up until the last few triggering words or gestures that would cause the effect of the spell to manifest. They would commit these final parts of the spell to memory. When it came to casting the spell, the wizard would say the final few words and make the required gestures, and bring out relevant material components from their component pouch. The completed spell would be ripped out of their mind, to manifest its effect in the world. We see that the casting of a spell is a highly mentally-draining activity—it rips the spell out of the caster’s memory!—not to be undertaken lightly. Thus it is natural that a wizard would learn to use a crossbow for basic damage-dealing. Magic is not something that comes very naturally to the wizard, to be deployed in combat as readily as the fighter swings their sword. They are not a superhero or video game character, “pew pew”ing their way to victory. This is a very cool starting point upon which to roleplay an academic spellcaster, not really available outside of tabletop games. I see it as a distinction between magical abilities and real magic.

Secondly, consider clerics. Most of the remarks in the previous paragraph apply, suitably reworked to be in terms of requesting certain abilities from the deity to whom the cleric is devoted. Additionally, there is the downgrading of the importance of the cleric’s healing magic in 5th edition. Characters can heal themselves by taking short and long rests. Previously, natural healing was very slow, so a cleric would need to convert all their remaining magic to healing spells at the end of the day, and hope that it was enough to bring the party up to fighting shape. Again, this made the party of adventurers seem less like superheroes or video game characters. Magic had a special, important and unique role, that couldn’t be replaced by the abilities of other classes.

There are some rules in the back of the DMG—“Slow Natural Healing”, “Healing Kit Dependency”, “Lingering Wounds”—which can be used to make healing magic more important. I’m not sure how well they would work without changes to the cleric class.

I would like to find ways to restore the feel and flavour of Vancian clerics and wizards to 5th edition, without sacrificing the improvements that have been made that let other party members do cool stuff too. I hope it is possible to keep magic cool and unique without making it dominate the game. It would be easy to forbid the use of arcane foci, and say that material component pouches run out if the party do not visit a suitable marketplace often enough. This would not have a significant mechanical effect, and could enhance roleplaying possibilities. I am not sure how I could deal with the other issues I’ve discussed without breaking the game.

The second thing I would like to discuss is bounded accuracy. Under this design principle, the modifiers to dice rolls grow much more slowly. The gain of hit points remains unbounded. Under third edition, it was mechanically impossible for a low-level monster to land a hit on a higher-level adventurer, rendering them totally useless even in overwhelming numbers. With bounded accuracy, it’s always possible for a low-level monster to hit a PC, even if they do insignificant damage. That means that multiple low-level monsters pose a threat.

This change opens up many roleplaying opportunities by keeping low-level character abilities relevant, as well as letting monster types remain involved in stories without giving them implausible new abilities so they don’t fall far behind the PCs. However, I’m a little worried that it might make high-level player characters feel a lot less powerful to play. I want to cease to be a fragile adventurer and become a world-changing hero at later levels, rather than forever remain vulnerable to the things that I was vulnerable to at the start of the game. This desire might just be the result of the video games which I played growing up. In the JRPGs I played and in Diablo II, enemies in earlier areas of the map were no threat at all once you’d levelled up by conquering higher-level areas. My concerns about bounded accuracy might just be that it clashes with my own expectations of how fantasy heroes work. A good DM might be able to avoid these worries entirely.

The final thing I’d like to discuss is the various simplifications to the rules of 5th edition, when it is compared with 3rd edition and Pathfinder. Attacks of opportunity are only provoked when leaving a threatened square; you can go ahead and cast a spell when in melee with someone. There is a very short list of skills, and party members are much closer to each other in skills, now that you can’t pump more and more ranks into one or two abilities. Feats as a whole are an optional rule.

At first I was worried about these simplifications. I thought that they might make character building and tactics in combat a lot less fun. However, I am now broadly in favour of all of these changes, for two reasons. Firstly, they make the game so much more accessible, and make it far more viable to play without relying on a computer program to fill in the boxes on your character sheet. In my 5th edition group, two of us have played 3rd edition games, and the other four have never played any tabletop games before. But nobody has any problems figuring out their modifiers because it is always simply your ability bonus or penalty, plus your proficiency bonus if relevant. And advantage and disadvantage is so much more fun than getting an additional plus or minus two. Secondly, these simplifications downplay the importance of the maths, which means it is far less likely to be broken. It is easier to ensure that a smaller core of rules is balanced than it is to keep in check a larger mass of rules, constantly being supplemented by more and more addon books containing more and more feats and prestige classes. That means that players make their characters cool by roleplaying them in interesting ways, not making them cool by coming up with ability combos and synergies in advance of actually sitting down to play. Similarly, DMs can focus on flavouring monsters, rather than writing up longer stat blocks.

I think that this last point reflects what I find most worthwhile about tabletop RPGs. I like characters to encounter cool NPCs and cool situations, and then react in cool ways. I don’t care that much about character creation. (I used to care more about this, but I think it was mainly because of interesting options for magic items, which hasn’t gone away.) The most important thing is exercising group creativity while actually playing the game, rather than players and DMs having to spend a lot of time preparing the maths in advance of playing. Fifth edition enables this by preventing the rules from getting in the way, because they’re broken or overly complex. I think this is why I love Exalted: stunting is vital, and there is social combat. I hope to be able to work out a way to restore Vancian magic, but even without that, on balance, fifth edition seems like a better way to do group storytelling about fantasy heroes. Hopefully I will have an opportunity to DM a 5th edition campaign. I am considering disallowing all homebrew and classes and races from supplemental books. Stick to the well-balanced core rules, and do everything else by means of roleplaying and flavour. This is far less gimmicky, if more work for unimaginative players (such as myself!).

Some further interesting reading:

13 March, 2017 11:37PM

Ross Gammon

February 2017 – My Free Software activities summary

When I sat down to write this blog post, I thought I hadn’t got much done in February. But as it took me quite a while to write up, there must actually have been a little bit of progress. With my wife starting a new job, there have been some adjustments in family life, and I have struggled just to keep up with all the Debian and Ubuntu emails. Anyway…

Debian

Ubuntu

  • Tested the Ubuntu Studio 16.04.2 point release, marked it as ready, and updated the Release Notes.
  • Started updating my previous Gramps backport in Ubuntu to Gramps 4.2.5. The package builds fine, and I have tested that it installs and works. I just need to update the bug.
  • Prepared updates to the ubuntustudio-default-settings & ubuntustudio-meta packages. There were some deferred changes from before Yakkety was released, including moving the final bit of configuration left in the ubuntustudio-lightdm-theme package to ubuntustudio-default-settings. Jeremy Bicha sponsored the uploads after suggesting moving away from some transitional ttf font packages in ubuntustudio-meta.
  • Tested the Ubuntu Studio 17.04 First Beta release, marked as ready, and prepared the Release Notes.
  • Upgraded my music studio Ubuntu Studio computer to Yakkety 16.10.
  • Got accepted as an Ubuntu Contributing Developer by the Developer Membership Board.

Other

  • After a merge of my Family Tree with the Family Tree of my wife in Gramps a long way back, I finally started working through the database, merging duplicates and correcting import errors.
  • Worked some more on the model railway, connecting up the other end of the tunnel section with the rest of the railway.

Plan status from last month & update for next month

Debian

For the Debian Stretch release:

  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress

Generally:

  • Finish the Gramps 4.2.5 backport for Jessie. – Done
  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release.
  • Begin working again on all the new stuff I want packaged in Debian.

Ubuntu

  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Done
  • Reapply to become a Contributing Developer. – Done
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – In progress
  • Test Len’s work on ubuntustudio-controls – In progress
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release.

Other

  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress

13 March, 2017 09:06PM by Ross Gammon

Michal Čihař

Weblate users survey

Weblate has been growing quite well in the last months, but sometimes its development is really driven by people who complain, instead of following some roadmap with higher goals. I think it's time to change that, at least a little bit. In order to get broader feedback, I sent out a short survey to active project owners on Hosted Weblate a week ago.

I've decided to target a smaller audience for now, though a publicly open survey might follow later (but it's always harder to evaluate feedback across different user groups).

Overall feelings were really positive, most people find Weblate better than other similar services they have used. This is really something I like to hear :-).

Weblate overall experience

Weblate compared with other tools

But the most important part for me was where users want to see improvements. This somehow matches my expectation that we really should improve the user interface.

Weblate future development

We have quite a lot of features which are really hidden in the user interface, and the interface for some of the features is far from intuitive. This all probably comes from the fact that we don't really have anybody experienced in creating user interfaces right now. It's time to find somebody who will help us. In case you are able to help, or know somebody who might be interested in helping, please get in touch. Weblate is free software, but this can still be a paid job.

The last part of the survey focused on some particular features, but the outcome was not as clear as I had hoped: almost all feature groups attracted about the same attention (with one exception being extending the API, which was not really wanted by most of the users).

Overall I think doing a survey like this is useful, and I will certainly repeat it (probably yearly or so) to see where we're moving and what our users want. Having feedback from users is important for every project, and this seemed to work quite well. Anyway, if you have further feedback, don't hesitate to use our issue tracker on GitHub or contact me directly.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

13 March, 2017 11:00AM

March 12, 2017

Iustin Pop

A recipe for success

It is said that with age comes wisdom. I would be happy for that to be true, because then today I must have been very, very young.

For example, if you want to make a long bike ride in order to hit some milestone, like your first metric century, it is not advisable to follow ANY of the following points:

  • instead of doing this in the season, when you're fit, wait over the winter, during which you should indulge in food and drink with only an occasional short bike ride, so that most of your fitness is gone and replaced by a few extra kilograms;
  • instead of choosing a flat route that you've done before, extending it a bit to hit the target distance, think about taking the route from one of the people you follow on Strava (and I mean real cyclists here); bonus points if you choose one they mention was about training instead of a freeride and gave it a meaningful name like "The ride of 3 peaks", something with 1'500m+ altitude gain…
  • in order to not get bogged down by too much extra weight (those winter kilograms are enough!), skimp on breakfast (just a very very light one); together with the energy bar you eat, something like 400 calories…
  • take the same amount of food you take for much shorter and flatter rides; bonus points if you don't check the actual calories in the food, and instead of the presumed 700+ calories you think you're carrying (which might be enough, if you space them correctly, given how much you can absorb per hour), take at most 300 calories with you, because hey, your body is definitely used to long efforts in which you convert fat to energy on the fly, right? especially after said winter pause!
  • since water is scarce in the Swiss outdoors (not!), especially when doing a road bike ride, carry lots of water with you (full hydro-pack, 3l) instead of an extra banana or energy bar, or a sandwich, or nuts, or a steak… mmmm, steak!
  • and finally, and most importantly, don't do the ride indoors on the trainer, even though it can pretty realistically simulate the effort, but instead do it for real outside, where you can't simply stop when you've had enough, because you have to get back home…

For bonus points, if you somehow manage to reach the third peak in the above ride, and have mostly only flat/down to the destination, do the following: be so glad you're done with climbing that you don't pay attention to the map and start a wrong descent, on a busy narrow road, so that you can't stop immediately when you realise you've lost the track; it will cost you only an extra ~80 meters of height towards the end of the ride. Which are pretty cheap, since all the food is gone and the water almost as well, so the backpack is light. Right.

However, if you do follow all the above, you're rewarded with a most wonderful thing for the second half of the ride: you will receive a +5 boost to your concentration skill. You will be able to focus on, and think about, a single thing for hours at a time, examining it (well, its contents) in minute detail.

Plus, when you get home and open that thing—I mean, of course, the FRIDGE with all the wonderful FOOD it contains—everything will taste MAGICAL! You can now recoup the roughly 1,500-calorie deficit from the ride, and finally no longer feel SO HUNGRY.

That's all. Strava said "EXTREME" suffer score, albeit with less than 20% of points in the red, which means I was just slogging through the ride (the total time confirms it), like a very, very, very old man. But definitely not a wise one.

12 March, 2017 10:38PM

Mike Hommey

When the memory allocator works against you

Cloning mozilla-central with git-cinnabar requires a lot of memory. Actually, too much memory to fit in a 32-bit address space.

I hadn’t optimized for memory use in the first place. For instance, git-cinnabar keeps sha-1s in memory as hex values (40 bytes) rather than raw values (20 bytes). When I wrote the initial prototype, it didn’t matter that much, and while close(ish) to the tipping point, it didn’t require more than 2GB of memory at the time.

Time passed, and mozilla-central grew. I suspect the recent addition of several thousand commits and files has made things worse.

In order to come up with a plan to make things better (short or longer term), I needed data. So I added some basic memory resource tracking, and collected data while cloning mozilla-central.

I must admit, I was not ready for what I witnessed. Follow me for a tale of frustrations (plural).

I was expecting things to have gotten worse on the master branch (which I used for the data collection) because I am in the middle of some refactoring and did many changes that I was suspecting might have affected memory usage. I wasn’t, however, expecting to see the clone command using 10GB(!) memory at peak usage across all processes.

(Note, those memory sizes are RSS, minus “shared”)

It was also taking unexpectedly long, but then, I hadn’t cloned a large repository like mozilla-central from scratch in a while, so I wasn’t sure if it was just related to its recent growth in size or something else. So I collected data on 0.4.0 as well.

Less time spent, less memory usage… ok. There’s definitely something wrong on master. But wait a minute, that slope from ~2GB to ~4GB on the git-remote-hg process doesn’t actually make any kind of sense. I mean, I’d understand it if it were starting and finishing with the “Import manifest” phase, but it starts in the middle of it, and ends long before it finishes. WTH?

First things first, since RSS can be a variety of things, I checked /proc/$pid/smaps and confirmed that most of it was, indeed, the heap.

That’s the point where you reach for Google, type something like “python memory profile” and find various tools. One from the results that I remembered having used in the past is guppy’s heapy.

Armed with pdb, I broke execution in the middle of the slope, and tried to get memory stats with heapy. SIGSEGV. Ouch.

Let’s try something else. I reached out to objgraph and pympler. SIGSEGV. Ouch again.

Tried working around the crashes for a while (too long a while, retrospectively; hindsight is 20/20), and was somehow successful at avoiding them by peeking at a smaller set of objects. But whatever I did, despite being attached to a process that had 2.6GB RSS, I wasn’t able to find more than 1.3GB of data. This wasn’t adding up.

It surely didn’t help that getting to that point took close to an hour each time. Retrospectively, I wish I had investigated using something like Checkpoint/Restore in Userspace.

Anyways, after a while, I decided that I really wanted to try to see the whole picture, not smaller peaks here and there that might be missing something. So I resolved to look at the SIGSEGV I was getting when using pympler, collecting a core dump when it happened.

Guess what? The Debian python-dbg package does not contain the debug symbols for the python package. The core dump was useless.

Since I was expecting I’d have to fix something in python, I just downloaded its source and built it. Ran the command again, waited, and finally got a backtrace. First Google hit for the crashing function? The exact (unfixed) crash reported on the python bug tracker. No patch.

Crashing code is doing:

((f)->f_builtins != (f)->f_tstate->interp->builtins)

And (f)->f_tstate is NULL. Classic NULL deref.

Added a guard (assessing it wouldn’t break anything). Ran the command again. Waited. Again. SIGSEGV.

Facedesk. Another crash on the same line. Did I really use the patched python? Yes. But this time (f)->f_tstate->interp is NULL. Sigh.

Same player, shoot again.

Finally, no crash… but still stuck on only 1.3GB accounted for. Ok, I know not all python memory profiling tools are entirely reliable, let’s try heapy again. SIGSEGV. Sigh. No debug info on the heapy module, where the crash happens. Sigh. Rebuild the module with debug info, try again. The backtrace looks like heapy is recursing a lot. Look at %rsp, compare with the address space from /proc/$pid/maps. Confirmed. A stack overflow. Let’s do ugly things and increase the stack size in brutal ways.

Woohoo! Now heapy tells me there’s even less memory used than the 1.3GB I found so far. Like, half less. Yeah, right.

I’m not clear on how I got there, but that’s when I found gdb-heap, a tool from Red Hat’s David Malcolm, and the associated talk “Dude, where’s my RAM?” A deep dive into how Python uses memory (slides).

With a gdb attached, I would finally be able to rip python’s guts out and find where all the memory went. Or so I thought. The gdb-heap tool only found about 600MB. About as much as heapy did, for that matter, but it could be coincidental. Oh. Kay.

I don’t remember exactly what went through my mind then, but, since I was attached to a running process with gdb, I typed the following on the gdb prompt:

gdb> call malloc_stats()

And that’s when the truth was finally unveiled: the memory allocator was just acting up the whole time. The output was something like:

Arena 0:
system bytes    =  some number above (but close to) 2GB
in use bytes    =  some number above (but close to) 600MB

Yes, the glibc allocator was telling me it had handed out 600MB of memory while holding onto 2GB. I must have hit a really bad allocation pattern that causes massive fragmentation.

One thing that David Malcolm’s talk taught me, though, is that python uses its own allocator for small sizes, so the glibc allocator doesn’t know about them. And, roughly, adding the difference between RSS and what glibc said it was holding onto to the in use bytes it reported somehow matches the 1.3GB I had found so far.
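
In rough numbers, using the figures above:

in use + (RSS - system) ≈ 0.6GB + (2.6GB - 2.0GB) = 1.2GB

which is close to the 1.3GB the various profiling tools could account for.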

So it was time to see how those things evolved in time, during the entire clone process. I grabbed some new data, tracking the evolution of “system bytes” and “in use bytes”.

There are two things of note on this data:

  • There is a relatively large gap between what the glibc allocator says it has gotten from the system, and the RSS (minus “shared”) size, that I’m expecting corresponds to the small allocations that python handles itself.
  • Actual memory use is going down during the “Import manifests” phase, contrary to what the evolution of RSS suggests.

In fact, the latter is exactly how git-cinnabar is supposed to work: It reads changesets and manifests chunks, and holds onto them while importing files. Then it throws away those manifests and changesets chunks one by one while it imports them. There is, however, some extra bookkeeping that requires some additional memory, but it’s expected to be less memory consuming than keeping all the changesets and manifests chunks in memory.

At this point, I thought a possible explanation is that since both python and glibc are mmap()ing their own arenas, they might be intertwined in a way that makes things not go well with the allocation pattern happening during the “Import manifest” phase (which, in fact, allocates and frees increasingly large buffers for each manifest, as manifests grow in size in the mozilla-central history).

To put the theory to work, I patched the python interpreter again, making it use malloc() instead of mmap() for its arenas.

“Aha!” I thought. That definitely looks much better. Less gap between what glibc says it requested from the system and the RSS size. And, more importantly, no runaway increase of memory usage in the middle of nowhere.

I was preparing myself to write a post about how mixing allocators could have unintended consequences. As a comparison point, I went ahead and ran another test, with the python allocator entirely disabled, this time.

Heh. It turns out glibc was acting up all alone. So much for my (plausible) theory. (I still think mixing allocators can have unintended consequences.)

(Note, however, that the reason why the python allocator exists is valid: without it, the overall clone took almost 10 more minutes)

And since I had been getting all this data with 0.4.0, I gathered new data without the python allocator with the master branch.

This paints a rather different picture than the original data on that branch, with much less memory use regression than one would think. In fact, there isn’t much difference, except for the spike at the end, which got worse, and some of the noise during the “Import manifests” phase that got bigger, implying larger amounts of temporary memory used. The latter may contribute to the allocation patterns that throw glibc’s memory allocator off.

It turns out tracking memory usage in python 2.7 is rather painful, and not all the tools paint a complete picture of it. I hear python 3.x is somewhat better in that regard, and I hope it’s true, but at the moment, I’m stuck with 2.7. The most reliable tool I’ve used here, it turns out, is pympler. Or rebuilding the python interpreter without its allocator, and asking the system allocator what is allocated.

With all this data, I now have some defined problems to tackle, some easy (the spike at the end of the clone), and some less easy (working around glibc allocator’s behavior). I have a few hunches as to what kind of allocations are causing the runaway increase of RSS. Coincidentally, I’m half-way through a refactor of the code dealing with manifests, and it should help dealing with the issue.

But that will be the subject of a subsequent post.

12 March, 2017 01:47AM by glandium

Steve Kemp

How I started programming

I've written parts of this story in the past, but never in one place and never in much detail. So why not now?

In 1982 my family moved house, so one morning I went to school and at lunch-time I had to walk home to a completely different house.

We moved sometime towards the end of the year, and ended up spending lots of money replacing the windows of the new place. For people in York: I was born in Farrar Street, YO10 3BY, and we moved to a place on Thief Lane, YO1 3HS. Being named as it was, I "ironically" stole at least two street-signs and hung them on my bedroom wall. I suspect my parents were disappointed.

Anyway the net result of this relocation, and the extra repairs meant that my sisters and I had a joint Christmas present that year, a ZX Spectrum 48k.

I tried to find pictures of what we received, but unfortunately the web doesn't remember the precise bundle. Altogether, though, we received:

I know we also received Horace and the Spiders, and I have vague memories of some other things being included, including a Space Invaders clone. No doubt my parents bought them separately.

Highlights of my Spectrum-gaming memories include R-Type, Strider, and the various "Dizzy" games. Some of the latter I remember very fondly.

Unfortunately this Christmas was pretty underwhelming. We unpacked the machine, we cabled it up to the family TV-set - we only had the one, after all - and then proceeded to be very disappointed when nothing we did resulted in a successful game! It turned out our cassette-deck was not good enough. This being the 80s, the shops were closed over Christmas, and my memory is that it was around January before we received a working tape-player/recorder, such that we could load games.

Happily the computer came with manuals. I read one, skipping words and terms I didn't understand. I then read the other, which was the spiral-bound orange book. It contained enough examples and decent wording that I learned to write code in BASIC. Not bad for an 11/12 year old.

Later I discovered that my local library contained "computer books". These were colourful books that promised "The Mystery of Silver Mountain", or "Write your own ADVENTURE PROGRAMS", but they were largely dry books that contained nothing but multi-page listings of BASIC programs to type in, often with adjustments that had to be made for your own computer-flavour (BASIC varying between different systems).

If you want to recapture the magic, scroll to the foot of this Usborne page and you can download them!

Later I taught myself Z80 Assembly Language, partly via the Spectrum manual and partly via such books as these two (which I still own 30ish years later):

  • Understanding your Spectrum, Basic & Machine Code Programming.
    • by Dr Ian Logan
  • An introduction to Z80 Machine Code.
    • by R.A. & J.W. Penfold

Pretty much the only reason I continued down this path is because I wanted infinite/extra lives in the few games I owned. (Which were largely pirated via the schoolboy network of parents with cassette-copiers.)

Eventually I got some of my l33t POKEs printed in magazines, and received free badges from the magazines of the day such as Your Sinclair & Sinclair User. For example I was "Hacker of the Month" in Your Sinclair issue 67, page 32, apparently because I "asked so nicely in my letter".

Terrible scan is terrible:

Anyway, that takes me from 1980ish to 1984. The only computer I ever touched was a Spectrum. Friends had other things, and there were Sega consoles, but I have no memories of them. Suffice it to say that later, when I first saw a PC (complete with Hercules graphics, hard drives, and similar sorcery, running GEM IIRC), I was pleased that Intel assembly was "similar" to Z80 assembly - and now I know the reason why.

Some time in the future I might document how I got my first computer job. It is hilarious, as was my naivete.

12 March, 2017 12:00AM

March 11, 2017

John Goerzen

Silent Data Corruption Is Real

Here’s something you never want to see:

ZFS has detected a checksum error:

   eid: 138
 class: checksum
  host: alexandria
  time: 2017-01-29 18:08:10-0600
 vtype: disk

This means there was a data error on the drive. But it’s worse than a typical data error — this is an error that was not detected by the hardware. Unlike most filesystems, ZFS and btrfs write a checksum with every block of data (both data and metadata) written to the drive, and the checksum is verified at read time. Most filesystems don’t do this, because theoretically the hardware should detect all errors. But in practice, it doesn’t always, which can lead to silent data corruption. That’s why I use ZFS wherever I possibly can.

As I looked into this issue, I saw that ZFS repaired about 400KB of data. I thought, “well, that was unlucky” and just ignored it.

Then a week later, it happened again. Pretty soon, I noticed it happened every Sunday, and always to the same drive in my pool. It so happens that the highest I/O load on the machine occurs on Sundays, because I have a cron job that runs zpool scrub then. This operation forces ZFS to read and verify the checksums on every block of data on the drive, and is a nice way to guard against unreadable sectors in rarely-used data.
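
For illustration, such a cron job can be a one-liner; a minimal sketch in /etc/cron.d (with tank as a placeholder pool name, not my actual pool) could look like this:

# scrub the pool every Sunday at 02:00; "tank" is a placeholder pool name
0 2 * * 0 root /sbin/zpool scrub tank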

I finally swapped out the drive, but to my frustration, the new drive now exhibited the same issue. The SATA protocol does include a CRC32 checksum, so it seemed (to me, at least) that the problem was unlikely to be a cable or chassis issue. I suspected motherboard.

It so happened I had a 9211-8i SAS card. I had purchased it off eBay a while back when I built the server, but could never get it to see the drives. I wound up not filling it up with as many drives as planned, so the on-board SATA did the trick. Until now.

As I poked at the 9211-8i, noticing that even its configuration utility didn’t see any devices, I finally started wondering if the SAS/SATA breakout cables were a problem. And sure enough – I realized I had a “reverse” cable and needed a “forward” one. $14 later, I had the correct cable and things are working properly now.

One other note: RAM errors can sometimes cause issues like this, but this system uses ECC DRAM and the errors would be unlikely to always manifest themselves on a particular drive.

So over the course of this, had I not been using ZFS, I would have had several megabytes of reads with undetected errors. Thanks to using ZFS, I know my data integrity is still good.

11 March, 2017 09:34PM by John Goerzen

Enrico Zini

On the meaning of "we"

Rather than as a word of endearment, I'm starting to see "we" as a word of entitlement.

In some moments of insecurity, I catch myself "wee"-ing over other people, to claim them as mine.

11 March, 2017 01:11PM

March 10, 2017

Jonathan Dowland

Nintendo NES Classic Mini

After months of trying, I've finally got my hands on a Nintendo NES Classic Mini. It's everything I wish retropie was: simple, reliable, plug-and-play gaming. I didn't have a NES at the time, so the games are all mostly new to me (although I'm familiar with things like Super Mario Brothers).

NES classic and 8bitdo peripherals

The two main complaints about the NES classic are the very short controller cable and the need to press the "reset" button on the main unit to dip in and out of games. Both are addressed by the excellent 8bitdo Retro Receiver for NES Classic bundle. You get a bluetooth dongle that plugs into the classic and a separate wireless controller. The controller is a replica of the original NES controller. However, they've added another two buttons on the right-hand side alongside the original "A" and "B", and two discrete shoulder buttons which serve as turbo-repeat versions of "A" and "B". The extra red buttons make it look less authentic, which is a bit of a shame, and are not immediately useful on the NES classic (but more on that in a minute).

With the 8bitdo controller, you can remotely activate the Reset button by pressing "Down" and "Select" at the same time. Therefore the whole thing can be played from the comfort of my sofa.

That's basically enough for me, for now, but in the future if I want to expand the functionality of the classic, it's possible to mod it. A hack called "Hakchi2" lets you install additional NES ROMs; install retroarch-based emulator cores and thus play SNES, Megadrive, N64 (etc. etc.) games; as well as other hacks like adding "down+select" Reset support to the wired controller. If you were playing non-NES games on the classic, then the extra buttons on the 8bitdo become useful.

10 March, 2017 11:45AM

Reproducible builds folks

Reproducible Builds: week 97 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday February 26 and Saturday March 4 2017:

Upcoming Events

Ed Maste will present Reproducible Builds in FreeBSD at AsiaBSDCon 2017.

Ximin Luo will present Reproducible builds, its uses and the future at Open Source Days in Copenhagen on March 18.

Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" in Darmstadt, Germany, about Reproducible Builds everywhere on March 23.

Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th.

Media coverage

Aspiration Tech published a very detailed report on our Reproducible Builds World Summit 2016 in Berlin.

Reproducible work in other projects

Duncan published a very thorough post on the Rust Programming Language Forum about reproducible builds in the Rust compiler and toolchain.

In particular, he produced a table recording the reproducibility of different build products under different individual variations, totalling 187 build+variation combinations.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Dhole:

Reviews of unreproducible packages

60 package reviews have been added, 8 have been updated and 13 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)

diffoscope development

diffoscope 78 was uploaded to unstable and jessie-backports by Mattia Rizzolo. It included contributions from:

  • Chris Lamb:
    • Make tests that call xxd work on jessie again. (Closes: #855239)
    • tests: Move normalize_zeros to more generic utils.data module.
  • Brett Smith:
    • comparators.json: Catch bad JSON errors on Python pre-3.5. (Closes: #855233)
  • Ed Maste:
    • Use BSD-style stat(1) on FreeBSD. (Closes: #855169)

In addition, the following changes were made on the experimental branch:

  • Chris Lamb (4):
    • Tidy cbfs tests.
    • Correct "exercice" -> "exercise" typo.
    • Support newer versions of cbfstool to avoid test failure. (Closes: #856446)
    • Skip icc test that varies on endian if the (Debian-specific) patch is not present. (Closes: #856447)

reproducible-website development

  • anonmos1:
    • Replace root with 0 when giving UIDs/GIDs to GNU tar.
  • Holger Levsen and Chris Lamb:
    • Publish report by Aspiration Tech about RWS Berlin 2016.

tests.reproducible-builds.org

  • Ed Maste continued his work on testing FreeBSD for reproducibility but hasn't reached the magical 100% mark yet.
  • Holger Levsen adjusted the Debian builders' scheduling frequency, mostly to adapt to armhf having become faster due to the two new nodes.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

10 March, 2017 08:41AM

Martín Ferrari

SunCamp happening again this May!

As I announced on the mailing lists a few days ago, the Debian SunCamp (DSC2017) is happening again this May.

SunCamp is different from most other Debian events. Instead of a busy schedule of talks, SunCamp focuses on the hacking and socialising aspect, without making it just a Debian party/vacation.

DSC2016 - Hacking and discussing

The idea is to have 4 very productive days, staying in a relaxing and comfy environment, working on your own projects, meeting with your team, or presenting to fellow Debianites your most recent pet project.

DSC2016 - Tincho talking about Prometheus

We have tried to make this the simplest event possible, both for organisers and attendees. There will be no schedule, except for the meal times at the hotel. But these can be ignored too; there is a lovely bar that serves snacks all day long, and plenty of restaurants and cafés around the village.

DSC2016 - Hacking and discussing

The SunCamp is an event to get work done, but there will be time for relaxing and socialising too.

DSC2016 - Well deserved siesta
DSC2016 - Playing Pétanque

Do you fancy a hack-camp in a place like this?

Swimming pool

Café / Café terrace

One of the things that makes the event simple is that we have negotiated a flat price for accommodation that includes use of all the facilities in the hotel, and optionally food. We will give you a booking code, and then you arrange your accommodation as you please; you can even stay longer if you feel like it!

The rooms are simple but pretty, and everything has been renovated very recently.

Room / Room view

We are not preparing a talks programme, but we will provide the space and resources for talks if you feel inclined to prepare one.

You will have a huge meeting room, divided in 4 areas to reduce noise, where you can hack, have team discussions, or present talks.

Hacklab

Do you want to see more pictures? Check the full gallery


Debian SunCamp 2017

Hotel Anabel, Lloret de Mar, Province of Girona, Catalonia, Spain

May 18-21, 2017


Tempted already? Head to the wikipage and register now, it is only 2 months away!

Please try to reserve your room before the end of March. The hotel has reserved a number of rooms for us until that time. You can reserve a room after March, but we can't guarantee the hotel will still have free rooms.


10 March, 2017 07:36AM

March 09, 2017

Steinar H. Gunderson

Tired

To be honest, at this stage I'd actually prefer ads in Wikipedia to having ever more intrusive begging for donations. Please go away soon.

09 March, 2017 06:28PM

Petter Reinholdtsen

Detecting NFS hangs on Linux without hanging yourself...

Over the years, administrating thousands of NFS-mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking that the detection process gets stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

nfs: server nfsserver not responding, still trying
nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.

While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS mount points, nor NFS mount points in a different process context (i.e. with a different file system view), but that does not worry me.

The content for an NFS mount point looks similar to this:

[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
        opts:   rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
        age:    7863311
        caps:   caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
        sec:    flavor=1,pseudoflavor=1
        events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0 
        bytes:  166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809 
        RPC iostats version: 1.0  p/v: 100003/3 (nfs)
        xprt:   tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
        per-op statistics
                NULL: 0 0 0 0 0 0 0 0
             GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
             SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
              LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
              ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
            READLINK: 125 125 0 20472 18620 0 1112 1118
                READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
               WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
              CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
               MKDIR: 3680 3680 0 773980 993920 26 23990 24245
             SYMLINK: 903 903 0 233428 245488 6 5865 5917
               MKNOD: 80 80 0 20148 21760 0 299 304
              REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
               RMDIR: 3367 3367 0 645112 484848 22 5782 6002
              RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
                LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
             READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
         READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
              FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
              FSINFO: 2 2 0 232 328 0 1 1
            PATHCONF: 1 1 0 116 140 0 0 0
              COMMIT: 0 0 0 0 0 0 0 0

device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation, here 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
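
To automate the check, here is a minimal sketch in Python (my own, not an existing tool) that pulls the per-operation timeout counters out of /proc/self/mountstats. Running it twice with some time in between and comparing the numbers would show whether they are increasing:

#!/usr/bin/env python
# Sketch: report the per-operation NFS timeout counter (the third
# number on each per-op line) for all NFS mount points listed in
# /proc/self/mountstats.

def nfs_timeouts(path='/proc/self/mountstats'):
    counters = {}
    mountpoint = None
    in_perop = False
    with open(path) as statsfile:
        for line in statsfile:
            words = line.split()
            if not words:
                continue
            if words[0] == 'device':
                # device <spec> mounted on <point> with fstype nfs ...
                nfs = len(words) > 7 and words[7].startswith('nfs')
                mountpoint = words[4] if nfs else None
                in_perop = False
            elif mountpoint and words[:2] == ['per-op', 'statistics']:
                in_perop = True
                counters[mountpoint] = {}
            elif mountpoint and in_perop and words[0].endswith(':'):
                # fields: operations transmissions timeouts bytes-sent ...
                counters[mountpoint][words[0].rstrip(':')] = int(words[3])
    return counters

if __name__ == '__main__':
    for mountpoint, ops in nfs_timeouts().items():
        for op, timeouts in sorted(ops.items()):
            if timeouts > 0:
                print('%s: %s timeouts %d' % (mountpoint, op, timeouts))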

The only way I have found to get the timeout count on Debian and RedHat Enterprise Linux is to peek in /proc/. According to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values there, but this does not work on Linux, as far as I can tell. I asked about this in Debian, but have not seen any replies yet.

Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.

09 March, 2017 02:20PM

Arturo Borrero González

Netfilter in GSoC 2017


Great news! The Netfilter project has been selected by Google to be a mentoring organization in this year's Google Summer of Code program. Following the pattern of recent years, Google seems to recognise and support the importance of this software project in the Linux ecosystem.

I will be proudly mentoring some students in 2017, along with Eric Leblond and of course Pablo Neira.

The focus of the Netfilter project has been on nftables for the last few years, and the students joining our community will likely work on the new framework.

For prospective students: there is an ideas document which you must read. The policy in the Netfilter project is to encourage students to send patches before they are selected to join us. Therefore, a good starting point is to subscribe to the mailing lists, download the git code repositories, build the projects by hand, and look at the bugzilla (registration required).

Thanks to this type of internship program, it is interesting to note the growing involvement of women in recent years. Off the top of my head I can mention: Ana Rey (@AnaRB), Shivani Bhardwaj (@tuxish), Laura García and Elise Lennion (blog).

On a side note, Debian is not participating in GSoC this year :-(

09 March, 2017 09:00AM

March 08, 2017

hackergotchi for Thorsten Glaser

Thorsten Glaser

Updated Debian packaging example: PHP webapp with dbconfig-common

Since I use this as a base for other PHP packages like SimKolab, I’ve updated my packaging example with:

  • PHP 7 support (untested, as I need libapache2-mod-php5)
  • tons more utility code for you to use
  • a class autoloader, with example (build time, for now)
  • (at build time) running a PHPUnit testsuite (unless nocheck)

The old features (Apache 2.2 and 2.4 support, dbconfig-common, etc.) are, of course, still there. Support for other webservers could be contributed by you, and I could extend the autoloader to work at runtime (using dpkg triggers) to include dependencies as packaged in other Debian packages. See, nobody needs “composer”! ☻

Feel free to check it out, play around with it, install it, test it, send me improvement patches and feature requests, etc. — it’s here with a mirror at GitHub (since I wrote it myself and the licence is permissive enough anyway).

This posting and the code behind it are sponsored by my employer ⮡ tarent.

08 March, 2017 10:00PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Neil McGovern

Neil McGovern

GNOME ED update – Week 10

Conferences

After quite a bit of work, we finally have the sponsorship brochure produced for GUADEC and GNOME.Asia. Huge thanks to everyone who helped, I’m really pleased with the result. Again, if you or your company are interested in sponsoring us, please drop a mail to sponsors@guadec.org!

Food and Games

I like food, and I like games. So this week there were a couple of awesome sneak previews of the upcoming GNOME 3.24 release. Matthias Clasen posted about the GNOME Recipes 1.0 release – tasty snacks are now available directly on the desktop, which means I can also view them when I’m at the back of the house in the kitchen, where the wifi connection is somewhat spotty. Adrien Plazas also posted about GNOME Games – now I can get my retro gaming fix easily.

Signing things

I was sent a package in the post, with lots of blank stickers and a couple of pens. I’ve now signed a load of stickers, and my hand hurts. More details about exactly what this is about soon :)

08 March, 2017 09:02PM by Neil McGovern

hackergotchi for Clint Adams

Clint Adams

Oh, little boy, pick up the pieces

Chris sat in the window seat in the row behind his parents. Actually he also sat in half of his neighbor’s seat. His neighbor was uncomfortable but said nothing and did not attempt to lower the armrest to try to contain his girth.

His parents were awful human beings: selfish, self-absorbed and controlling. “Chris,” his dad would say, “look out the window!” His dad was the type of officious busybody who would snitch on you at work for not snitching on someone else.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his dad would insist that it was very important that he look out the window to see a very important cloud or glacial landform.

Chris would comply and then return to his book and music.

“Chris,” his mom would say, “you need to review our travel itinerary.” His mom cried herself to sleep when she heard that Nigel Stock died, gave up on ever finding True Love, and resolved to achieve a husband and child instead.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his mom would insist that it was very important that he review photos and prose regarding their managed tour package in Costa Rica, because he wouldn’t want to show up there unprepared. Chris would passive-aggressively stare at each page of the packet, then hand it back to his mother.

It was already somewhat clear that due to delays in taking off they would be missing their connecting flight to Costa Rica. About ⅓ of the passengers on the aeroplane were also going to Costa Rica, and were discussing the probable missed connection amongst themselves and with the flight staff.

Chris’s parents were oblivious to all of this, despite being native speakers of English. Additionally, just as they were unaware of what other people were discussing, they imagined that no one else could hear their private family discussions.

Everyone on the plane missed their connecting flights. Chris’s parents continued to be terrible human beings.

Posted on 2017-03-08
Tags: etiamdisco

08 March, 2017 04:06PM

Petter Reinholdtsen

How does it feel to be wiretapped, when you should be doing the wiretapping...

So the new president of the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asked the Department of Justice to publicly reject the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Update 2017-03-13: It looks like The Intercept reports that US Senator Rand Paul confirms what I state above.

08 March, 2017 10:50AM

hackergotchi for Matthew Garrett

Matthew Garrett

The Internet of Microphones

So the CIA has tools to snoop on you via your TV and your Echo is testifying in a murder case and yet people are still buying connected devices with microphones in and why are they doing that the world is on fire surely this is terrible?

You're right that the world is terrible, but this isn't really a contributing factor to it. There's a few reasons why. The first is that there's really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn't know what would happen to their exploit over firmware updates and a "fake off" mode that left a lit LED which wouldn't be there if the TV were actually off, so there's a potential for failed updates and people noticing that there's something wrong. It's certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.

But let's say it did get to the stage of being deployable - there's still not a great deal to worry about. No remote infection mechanism is described, so they'd need to do it locally. If someone is in a position to reflash your TV without you noticing, they're also in a position to, uh, just leave an internet connected microphone of their own. So how would they infect you remotely? TVs don't actually consume a huge amount of untrusted content from arbitrary sources[2], so that's much harder than it sounds and probably not worth it because:

YOU ARE CARRYING AN INTERNET CONNECTED MICROPHONE THAT CONSUMES VAST QUANTITIES OF UNTRUSTED CONTENT FROM ARBITRARY SOURCES

Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it. But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of io pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying and some people want more inputs than outputs and some people vice versa, so it's not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.

Anyone who's ever plugged a microphone into a speaker jack probably knows where I'm going with this. An attacker can "turn off" your TV, reconfigure the internal speaker output as an input and listen to you on your "microphoneless" TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you're telling them to disconnect the internal speakers as well.

If you're in a situation where you have to worry about an intelligence agency monitoring you, your TV is the least of your concerns - any device with speakers is just as bad. So what about Alexa? The summary here is, again, it's probably easier and more practical to just break your phone - it's probably near you whenever you're using an Echo anyway, and they also get to record you the rest of the time. The Echo platform is very restricted in terms of where it gets data[3], so it'd be incredibly hard to compromise without Amazon's cooperation. Amazon's not going to give their cooperation unless someone turns up with a warrant, and then we're back to you already being screwed enough that you should have got rid of all your electronics way earlier in this process. There are reasons to be worried about always listening devices, but intelligence agencies monitoring you shouldn't generally be one of them.

tl;dr: The CIA probably isn't listening to you through your TV, and if they are then you're almost certainly going to have a bad time anyway.

[1] Which I have obviously not read
[2] I look forward to the first person demonstrating code execution through malformed MPEG over terrestrial broadcast TV
[3] You'd need a vulnerability in its compressed audio codecs, and you'd need to convince the target to install a skill that played content from your servers


08 March, 2017 01:30AM

March 07, 2017

Bits from Debian

New Debian Developers and Maintainers (January and February 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Ulrike Uhlig (ulrike)
  • Hanno Wagner (wagner)
  • Jose M Calhariz (calharis)
  • Bastien Roucariès (rouca)

The following contributors were added as Debian Maintainers in the last two months:

  • Dara Adib
  • Félix Sipma
  • Kunal Mehta
  • Valentin Vidic
  • Adrian Alves
  • William Blough
  • Jan Luca Naumann
  • Mohanasundaram Devarajulu
  • Paulo Henrique de Lima Santana
  • Vincent Prat

Congratulations!

07 March, 2017 11:30PM by Jean-Pierre Giraud

Daniel Stender

Remotely deploy a WSGI application (as a Debian package) with Ansible

This is a mini workshop serving as an introduction to using Ansible for the administration of Debian systems. As an example, it shows how this configuration management tool can be used to remotely set up a simple WSGI application running on an Apache web server on a Debian installation, to make it available on the net. The application used as an example is httpbin by Runscope. This is a useful HTTP request service for the development of web software or any other purpose; it features a number of specific endpoints that can be used for different testing matters. For example, the address http://<address>/user-agent of httpbin returns the user agent identification of the client program which has been used to query it (that’s taken from the header of the request). There are official instances of this request server running on the net, like the one at http://httpbin.org/. WSGI is a widespread standard for programming web applications in Python, and httpbin is implemented in Python using the Flask web framework.
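
As a tiny illustration (a sketch of mine, querying the public instance at http://httpbin.org/; any other instance would do), the same endpoint can be exercised from Python itself:

# Sketch: ask a httpbin instance which user agent it sees for us.
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

reply = urlopen('http://httpbin.org/user-agent').read()
print(json.loads(reply.decode('utf-8'))['user-agent'])
# prints something like: Python-urllib/2.7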

The basis of the workshop is a simple base installation of an up-to-date Debian 8 “Jessie” on a demonstration host; the latest official release of that is 8.7. As a first step, the installation has to be switched over to the “testing” branch of Debian, because the Debian packages of httpbin are comparatively new and will be introduced into the “stable” branch of the archive for the first time with the upcoming major release 9 “Stretch”. After that, the Apache packages which are needed to make it available (apache2 and libapache2-mod-wsgi – other web servers could of course be used instead), and which are not part of a base installation, are installed from the archive. The web server then gets launched remotely, the httpbin package is pulled in as well, and the service is integrated into Apache to run on it. To achieve that, two configuration files must be deployed on the target system, and a few additional operations are needed to get everything working together. Every step is preconfigured within Ansible so that the whole process can be launched by a single command on the control node, and can be run on a single or a number of comparable target machines automatically and reproducibly.

If a server is needed to try this workshop out, straightforward cloud server instances are available on the net, for example at DigitalOcean (but – let me underline this – there are other cloud providers which offer the same things, too!). If it’s needed only for a limited time, for experiments or other purposes, low-priced “droplets” are available there, billed by the hour. After registering, the desired machine(s) can be set up easily over the web interface (choose “Debian 8.7” as OS), but there are also command line clients available, like doctl (which is not yet available as a Debian package). For the convenient use of a droplet, the user should first generate an SSH key pair on the local machine:

$ ssh-keygen -t rsa -b 4096 -C "john@doe.com" -f ~/.ssh/mykey

The public part of the key ~/.ssh/mykey.pub can then be uploaded into the user account before the droplet is created; it will then be integrated automatically. There is a good introduction to the whole process available in the excellent tutorial series serversforhackers.com, here. Ansible can then use the SSH key pair to log into a droplet without the need to type in the password every time. On a cloud server like this carrying a Debian base system, the examples in this workshop can be tried out well. Ansible works client-less and doesn’t need to be installed on the remote system, only on the control node; however, a Python 2.7 interpreter is needed on the remote side (the base system of DigitalOcean includes that).

Before Ansible can do anything on them, the remote servers which are going to be controlled must be added to /etc/ansible/hosts. This is a configuration file in the INI format for DNS names and IP addresses. For a flexible organisation of the server inventory it is possible to group hosts here; IP ranges can be given, and optional variables can be used, among other useful things (the default file contains a couple of examples). One or a couple of servers (in Ansible they are called “hosts”) on which something particular is going to happen (like httpbin being installed) can be added like this (the group name is arbitrary):

[httpbin]
192.0.2.0

Whether Ansible could communicate with the hosts in the group and actually can operate on them can be verified by just pinging them like this:

$ ansible httpbin -m ping -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

The command succeeded, so it appears there is no significant problem regarding this machine. The return value changed:false indicates that there haven’t been any changes on that host as a result of the execution of this command. Next to ping there are several other modules which can be used with the command line tool ansible in the same way, and these modules are something like the core components of Ansible. The module shell, for example, can be used to execute shell commands on the remote machine, like uname to get some system information returned from the server:

$ ansible httpbin -m shell -a "uname -a" -u root --private-key=~/.ssh/mykey
192.0.2.0 | SUCCESS | rc=0 >>
Linux debian-512mb-fra1-01 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux

In the same way, the module apt can be used to remotely install packages. But with that alone there is no major advantage over other software products offering similar functionality, and using these modules on the command line is just the basics of Ansible usage.

Playbooks in Ansible are YAML scripts that manipulate the registered hosts in /etc/ansible/hosts. Different tasks can be defined here for successive processing. For example, a simple playbook for changing the package source from “stable” to “testing” goes like this:

---
 - hosts: httpbin
   tasks:
   - name: remove "jessie" package source
     apt_repository: repo='deb http://mirrors.digitalocean.com/debian jessie main' state=absent

   - name: add "testing" package source
     apt_repository: repo='deb http://httpredir.debian.org/debian testing main contrib non-free' state=present

   - name: upgrade packages
     apt: update_cache=yes upgrade=dist

First, as with the CLI tool ansible above, the targeted host group httpbin is chosen. The default user “root” and the SSH key could be fixed here, too, to spare the need to give them on the command line. Then there are three tasks defined to be worked through consecutively: with the module apt_repository, the preset package source “jessie” is removed from /etc/apt/sources.list. Then, a new package source for the “testing” archive gets added to /etc/apt/sources.list.d/ using the same module (by the way, mirrors.digitalocean.com also provides testing, and that might be faster). After that, the apt module is used to upgrade the package inventory (it performs apt-get dist-upgrade), after an update of the package cache has taken place (by running apt-get update).

A playbook like this (the filename is arbitrary, but commonly carries the suffix .yml) can be run by the CLI tool ansible-playbook, like this:

$ ansible-playbook httpbin.yml -u root --private-key=~/.ssh/mykey

Ansible then works through the individual “plays” of the tasks on the remote server(s) top-down, and thanks to a high speed net connection and SSD block device hardware, changing the system into a Debian Testing base installation takes only around a minute to complete in the cloud. While working, Ansible puts out status reports for the individual operations. If certain changes to the base system have already taken place, as when a playbook is run one more time, the modules of course sense that and just return the information that the system hasn’t been changed, because it is already in the wanted state. Beyond the basic playbook shown here there are more advanced features available, like register and when, to bind the execution of a play to the error-free result of a previous one.

The apt module then can be used in the playbook to install the three needed binary packages one after another:

   - name: install apache2
     apt: pkg=apache2 state=present

   - name: install mod_wsgi
     apt: pkg=libapache2-mod-wsgi state=present

   - name: install httpbin
     apt: pkg=python-httpbin state=present

The Debian packages are configured in a way that the Apache web server is running immediately after installation, and the Apache module mod_wsgi is automatically integrated. If that is not desired, there are Ansible modules available for operating on Apache which can reverse this. By the way, after the package has been installed, the httpbin server can be launched with python -m httpbin.core, but this runs only a mini web server which is not suitable for production use.

To get httpbin running on the Apache web server, two configuration files are needed. They can be set up in the project directory on the control node and then uploaded onto the remote machine with another Ansible module. The file httpbin.wsgi (the name is again arbitrary) contains only a single line, which is the starter for the WSGI application:

from httpbin import app as application
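
To quickly check on the control node that this starter works at all (a sketch under the assumption that the python-httpbin package is installed locally), the application object can be served with Python’s built-in WSGI reference server before involving Apache:

# Sketch: serve the httpbin WSGI application with the wsgiref
# reference server, just to verify that the application object
# imports and answers requests; not suitable for production use.
from wsgiref.simple_server import make_server
from httpbin import app as application

# http://127.0.0.1:8080/user-agent should then answer as shown above
make_server('127.0.0.1', 8080, application).serve_forever()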

The module copy can be used to deploy that script on the host, while the target folder /var/www/httpbin must be set up before that with the module file. In addition, a separate user account like “httpbin” (the name is also arbitrary but picked up in the other config file) is needed to run it, and the module user can set this up. The demonstration playbook continues, and the plays performing these three operations go like this:

   - name: mkdir /var/www/httpbin
     file: path=/var/www/httpbin state=directory

   - name: set up user "httpbin"
     user: name=httpbin

   - name: copy WSGI starter
     copy: src=httpbin.wsgi dest=/var/www/httpbin/httpbin.wsgi owner=httpbin group=httpbin mode=0644 

Another configuration file, httpbin.conf, is needed for Apache on the remote server to include the WSGI application httpbin as a virtual host. It goes like this:

<VirtualHost *>
 WSGIDaemonProcess httpbin user=httpbin group=httpbin threads=5
 WSGIScriptAlias / /var/www/httpbin/httpbin.wsgi

 <Directory /var/www/httpbin>
  WSGIProcessGroup httpbin
  WSGIApplicationGroup %{GLOBAL}
  Order allow,deny
  Allow from all
 </Directory>
</VirtualHost>

This file needs to be copied into the folder /etc/apache2/sites-available on the host, which already exists once the apache2 package is installed. The remaining operations needed to get everything running together are: the default welcome screen of Apache blocks anything else and should be disabled with Apache’s CLI tool a2dissite; after that, the new virtual host needs to be activated with the complementary tool a2ensite – both can be run remotely with the module command. Then the Apache server on the remote machine must be restarted to read in the new configuration. You’ve guessed it already: all of this is easy to perform with Ansible:

   - name: deploy configuration script
     copy: src=httpbin.conf dest=/etc/apache2/sites-available owner=root group=root mode=0644

   - name: deactivate default welcome screen
     command: a2dissite 000-default.conf
     
   - name: activate httpbin virtual host
     command: a2ensite httpbin.conf

   - name: restart Apache
     service: name=apache2 state=restarted 

That’s it. After this playbook has been performed completely by Ansible on one (or several) freshly set up remote Debian base installations, the httpbin request server is available, running on the Apache web server, and can be queried from anywhere by a web browser, or for example by curl:

$ curl http://192.0.2.0/user-agent
{
  "user-agent": "curl/7.50.1"
}

With the broad set of Ansible modules and with playbooks, a lot of tasks can be accomplished, like the example problem explained here. But the range of functions of Ansible is even more comprehensive; discussing those would have gone beyond the scope of this blog post. For example, playbooks offer more advanced features like event handlers, which can be used for recurring operations like the restart of Apache in more extensive projects. Beyond playbooks, templates can be set up in roles, which can behave differently on selected machine groups – Ansible uses Jinja2 as its template engine for that. And the scope of functions of the basic modules can be expanded by employing external tools.

To drop a word on why it could be useful in certain situations to run your own instance of the httpbin request server instead of using the official ones provided on the net by Runscope: some people would prefer to run a private instance, for example in the local network, instead of querying the one on the internet. Or, for development reasons, a couple or even a large number of identical instances might be needed – Ansible is ideal for setting them up automatically. Besides, the Javascript bindings to tracking services like Google Analytics in httpbin/templates/trackingscripts.html are patched out in the Debian package. That could be another reason to prefer setting up your own instance on a Debian server.

07 March, 2017 07:18PM

Hideki Yamane

ftp, gone.


kernel.org is shutting down its FTP services, see https://kernel.org/shutting-down-ftp-services.html. It may be better to consider doing the same in Debian, as I said.



07 March, 2017 02:26PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

7DRL 2017

It's time once again for the 7-day Roguelike challenge. This year's attempt is entitled "Casket of Deplorables".

Further updates will be posted here.

07 March, 2017 05:52AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.9

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

The RProtoBuf 0.4.9 release is the fourth and final update this weekend following the request by CRAN to not use package= in .Call() when PACKAGE= is really called for.

Some of the code in RProtoBuf had this bug; some other entry points had neither (!!). With the ongoing drive to establish proper registration of entry points, a few more issues were coming up, all of which are now addressed. And we had some other unreleased minor cleanups, so this made for a somewhat longer (compared to the other updates this weekend) NEWS list:

Changes in RProtoBuf version 0.4.9 (2017-03-06)

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • Several missing PACKAGE= arguments were added to the corresponding .Call invocations

  • Two (internal) C++ functions were renamed with suffix _cpp to disambiguate them from R functions with the same name

  • All of the above were part of #26

  • Some editing corrections were made to the introductory vignette (David Kretch in #25)

  • The 'configure.ac' file was updated, and renamed from the older convention 'configure.in', along with 'src/Makevars'. (PR #24 fixing #23)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 March, 2017 01:17AM

RVowpalWabbit 0.0.9

The RVowpalWabbit package update is the third of four upgrades requested by CRAN, following RcppSMC 0.1.5 and RcppGSL 0.3.2.

This package being somewhat raw, the change was simple and just meant converting the single entry point to using Rcpp Attributes -- which addressed the original issue in passing.

No new code or features were added.

We should mention that there is parallel work ongoing in a higher-level package interfacing the vw binary – rvw – as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 March, 2017 01:06AM

March 05, 2017

RcppGSL 0.3.2

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

RcppGSL release 0.3.2 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.

So RcppGSL was updated too, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.

No new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.2 (2017-03-04)

  • In the fastLm function, .Call now uses the correct PACKAGE= argument

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE

  • The skeleton configuration for created packages was updated.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

05 March, 2017 07:39PM

RcppSMC 0.1.5

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article.

RcppSMC release 0.1.5 is one of several maintenance releases this weekend to fix an issue flagged by CRAN: calls to .Call() sometimes used package= where PACKAGE= was meant. This came up now while the registration mechanism is being reworked.

Hence RcppSMC was updated, and we took the opportunity to bring several packaging aspects up to the newest standards, including support for the soon-to-be required registration of routines.

No new code or features were added. The NEWS file entries follow below:

Changes in RcppSMC version 0.1.5 (2017-03-03)

  • Correct .Call to use PACKAGE= argument

  • DESCRIPTION, NAMESPACE, README.md changes to comply with current R CMD check levels

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Updated .travis.yml file for continuous integration

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

05 March, 2017 07:38PM

Julien Viard de Galbert

Raspberry Pi 3 as desktop computer

For about six months I’ve been using a Raspberry Pi 3 as my desktop computer at home.

The overall experience is fine, but I had to make a few adjustments.
The first was to use KeePass; the second, to compile gcc for cross-compilation (i.e. to use buildroot).

KeePass

I’m using KeePass + KeeFox to maintain my passwords on the various websites (and avoid reusing the same one everywhere).
For this to work on the Raspberry Pi, one needs to use mono from Xamarin:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update

sudo apt-get upgrade
sudo apt-get install mono-runtime

The install instructions come from mono-project, and the initial pointer was found on the raspberrypi forums, stackoverflow and Benny Michielsen’s blog.
And for some plugins to work, I think I also had to apt-get install mono-complete.

Compiling gcc

With the Raspberry Pi 3, I revived an old project based on buildroot for the Raspberry Pi 2, and just building the tool-chain gave me a few issues.

First, the compilation would stop during the GMP compilation:

 /usr/bin/gcc -std=gnu99 -c -DHAVE_CONFIG_H -I. -I.. -D__GMP_WITHIN_GMP -I.. -DOPERATION_divrem_1 -O2 -Wa,--noexecstack tmp-divrem_1.s -fPIC -DPIC -o .libs/divrem_1.o
tmp-divrem_1.s: Assembler messages:
tmp-divrem_1.s:129: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:145: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:158: Error: selected processor does not support ARM mode `mls r1,r4,r8,r11'
tmp-divrem_1.s:175: Error: selected processor does not support ARM mode `mls r1,r4,r3,r8'
tmp-divrem_1.s:209: Error: selected processor does not support ARM mode `mls r11,r4,r12,r3'

Makefile:768: recipe for target 'divrem_1.lo' failed
make[]: *** [divrem_1.lo] Error 1

I googled the error and found this post on the Raspberry Pi forum, not really helpful…
But I finally found an explanation on Jan Hrach’s page on the subject.
The Raspbian distribution is still optimized for the first Raspberry Pi, so the compiler is basically limited to the old Raspberry Pi instruction set, while I was compiling gcc for a Raspberry Pi 2 and so needed the extra instructions.

The proposed solution is to basically update raspbian to debian proper.

While this is a neat idea, I still wanted to get some Raspbian-specific packages (like the kernel), but wanted to be sure that everything else comes from Debian. So I did some apt pinning.

First, I found that pinning alone is not sufficient, so when updating sources.list with plain Debian Jessie, make sure to add these lines before the Raspbian lines:

# add official debian jessie (real armhf gcc)
deb http://ftp.fr.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.fr.debian.org/debian/ jessie main

deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main

deb http://ftp.fr.debian.org/debian/ jessie-updates main
deb-src http://ftp.fr.debian.org/debian/ jessie-updates main

Then run the following to get the Debian GPG keys, but don't upgrade your system yet:

apt update
apt install debian-archive-keyring

Now, let’s add the pinning.
First, if you were using APT::Default-Release "stable"; in your apt.conf (as I did), remove it. It does not mix well with the fine-grained pinning we are about to implement.

Then, fill your /etc/apt/preferences file with the following:

# Debian
Package: *
Pin: release o=Debian,a=stable,n=jessie
Pin-Priority: 700

# Raspbian
Package: *
Pin: release o=Raspbian,a=stable,n=jessie
Pin-Priority: 600

Package: *
Pin: release o=Raspberry Pi Foundation,a=stable,n=jessie
Pin-Priority: 600

# Mono
Package: *
Pin: release v=7.0,o=Xamarin,a=stable,n=wheezy,l=Xamarin-Stable,c=main
Pin-Priority: 800

Note: You can use apt-cache policy (no parameter) to debug pinning.
The pinning above is mainly based on the origin field (o=) of the repositories.
Finally you can upgrade your system:

apt update 
apt-cache policy gcc 
rm /var/cache/apt/archives/* 
apt upgrade 
apt-cache policy gcc

Note: Removing the cache ensures we download the packages from Debian: Raspbian uses the exact same package names, but we know its packages are not compiled with a real armhf tool-chain.

Second issue with gcc

The build stopped with recipe for target 's-attrtab' failed. There are many references to this on the web; that one was easy, it ‘just’ needs more memory, so I added some swap on the external SSD I was already using to work on buildroot.

Conclusion

That’s it for today, not bad considering my last post was more than 3 years ago…

05 March, 2017 04:25PM by Julien

Thorsten Alteholz

My Debian Activities in February 2017

FTP assistant

This month you didn’t hear much of me, as I only marked 97 packages for accept and rejected 17 packages. I only sent one email to maintainers asking questions.

Nevertheless the NEW queue is down to 46 packages at the moment, so my fellows in misery do a really good job :-).

Debian LTS

This was my thirty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 13.00h. During that time I did uploads of

  • [DLA 832-1] bitlbee security update for three CVEs
  • [DLA 837-1] radare2 security update for one CVE
  • [DLA 839-1] tnef security update for four CVEs
  • [DLA 843-1] bind9 security update for one CVE

Thanks again to all the people who complied with my requests to test a package!

I also prepared the Jessie DSA for tnef which resulted in DSA 3798-1.

At the end of the month I did another week of frontdesk work and among other things I filed some bugs against packages from [1].

[1] https://security-tracker.debian.org/tracker/status/unreported

Other stuff

Reading about openoverlayrouter in the German magazine c’t, I uploaded that software to Debian.

I also uploaded npd6, which helped me to reach github from an IPv6-only machine.
Further, I uploaded pyicloud.

As my DOPOM for this month I adopted bottlerocket. Though you can’t buy the hardware anymore, there still seem to be some users around.

05 March, 2017 04:16PM by alteholz

hackergotchi for Vincent Bernat

Vincent Bernat

Netops with Emacs and Org mode

Org mode is a package for Emacs to “keep notes, maintain todo lists, planning projects and authoring documents”. It can execute embedded snippets of code and capture the output (through Babel). It’s an invaluable tool for documenting your infrastructure and your operations.

Here are three (relatively) short videos exhibiting Org mode use in the context of network operations. In all of them, I am using my own junos-mode which features the following perks:

  • syntax highlighting for configuration files,
  • commit of configuration snippets to remote devices, and
  • execution of remote commands.

Since some Junos devices can be quite slow, commits and remote executions are done asynchronously [1] with the help of a Python helper.

In the first video, I take some notes about configuring BGP add-path feature (RFC 7911). It demonstrates all the available features of junos-mode.

In the second video, I execute a planned operation to enable this feature in production. The document is a modus operandi and contains the configuration to apply and the commands to check if it works as expected. At the end, the document becomes a detailed report of the operation.

In the third video, a cookbook has been prepared to execute some changes. I set some variables and execute the cookbook to apply the change and check the result.


  1. This is a bit of a hack since Babel doesn’t have native support for that. Also have a look at ob-async which is a language-independent implementation of the same idea. 

05 March, 2017 11:01AM by Vincent Bernat

hackergotchi for Shirish Agarwal

Shirish Agarwal

To say or not to say

Voltaire

For people who are visually differently-abled, the above reads –

“To learn who rules over you, simply find out who you are not allowed to criticize” – Voltaire is said to have written this in the 18th century, and those words were as apt in those times as they are in these turbulent times as well.

Update 05/03 – According to @bla, these words are actually attributable to a neo-nazi and, apparently, a child abuser. While I don’t know the context in which they were first shared, they describe the environment we are in perfectly. Please see his comment for a link and better understanding.

The below topic requires a bit of maturity, so if you are easily offended, feel free not to read further.

This week-end I was supposed to write about the recent Science Day celebrations that we did last week –

Science Day celebrations at GMRT

I will probably explore that next week.

This week the attempt is to share thoughts which have been simmering at the back of my mind for 2 weeks or more, and whose answers are not clear to me.

My buttons were pressed when martin f. krafft shared about a CoC violation and the steps taken therein. While it is easy with 20:20 hind-sight to say that the gentleman acted foolishly, I don’t really know the circumstances, so I can’t pass judgement so quickly. In reality, while I didn’t understand the ‘joke’ itself, I have to share some background, by way of anecdotes, as to why it isn’t so easy for me to make a judgement call.

a. I don’t know the topics chosen by stand-up comedians in other countries; in India, most of the stand-up acts are either about dating or sex or somewhere in-between, which is lovingly given the name ‘Leela’ (dance of life) in Indian mythology. I have been to several such acts over the years, at different events and occasions, and 99.99% of the time I would see them dealing with pedophilia, necrophilia and all sorts of deviant sexuality, with people laughing wildly. But a couple of times, when the comedian used the word ‘sex’, people – educated, probably more than a few of them world-travelled, middle to upper-middle class people – were shocked into silence. I have seen this not once but 2-3 times in different environments, and was left wondering just a couple of years back: ‘Is sex such a bad word that people get so easily shocked?’ Then how is it that we have 1.25 billion+ people in India? There had to be some people having sex. I don’t think all 1.25 billion people are test-tube babies.

b. This is actually what led to my quandary last year when sharing ‘My Experience with Debian’, which I had carefully prepared for newbies. Seeing seasoned Debian people in the audience, I knew my lame observations wouldn’t cut ice with them, and hence had to share my actual story, which involved a bit of porn. I was in two minds whether or not to tell it, till my eyes caught a t-shirt which said ‘We make porn’ or something to that effect. That helped me share my point.

c. Which brings me to another point: it seems it is becoming increasingly difficult to talk about anything without first apologizing to everyone, not really knowing who will take offence at what or what the repercussions might be. In local sharings, I always start with a blanket apology: if I say something that offends you, please let me know afterwards so I can work on it. As the saying goes, ‘You can’t please everyone’, and that is what happens. Somebody sooner or later will take offence at something and re-interpret it in ways I had not thought of.

Charlie Chaplin - King of self-deprecating humor

From the little sharings and interactions I have been part of, I find people take offence at the most innocuous things. For instance, one of the easy routes to not offending anyone is to use self-deprecating humour (or so I thought), either about my race, caste, class or even my issues with weight, and each of the above would offend somebody. Charlie Chaplin didn’t have those problems. If somebody is from my caste, I’m portraying the caste in a certain light, a certain slant. If I’m talking about weight issues, then anybody who is like me (fat) feels that the world is laughing at them rather than at me, or that they will be discriminated against. While I find the last point a bit valid, it leaves me with no tools and no humour. I have neither the observational powers nor the skills that Kapil Sharma has, and have to be me.

While I have no clue what to do next, I feel the need to also share why humour is important in any sharing:

a. Break – When a speaker uses humour, the idea is to take a break from a serious topic. It helps to break the monotony of the talk, especially if the topic is full of jargon and new concepts. A small comedic relief brings the attendees’ attention back to the topic, as it tends to wander in a long monotonous talk.

b. Bridge – Some of the better speakers use one or more humorous anecdotes to explain and/or bridge the chasm between two different concepts. Some are able to produce humour on the fly, while others like me have to rely on tried and tested methods.

There is another thing as well: humour seems to be a mixture of social, cultural and political context, and it’s very easy to have it backfire on you.

For instance, I attempted humour about refugees, probably not the best topic to try humour on in the current political climate, and predictably, it didn’t go down well. I had to share and explain Robin Williams’ slightly dark yet humorous tale ‘Moscow on the Hudson‘. The film provides comedy and pathos in equal measure. You are left identifying with Vladimir Ivanoff (Robin Williams’ character), especially in the last scene where he learns of his grandmother dying; he remembers her and his motherland, Russia, and plays a piece on his saxophone as a tribute both to his grandmother and the motherland. Apparently, at the height of the cold war, if a Russian defected to the United States (‘land of Satan’ and other such terms were used) he couldn’t return to Russia.

The movie, seen some years back, left a deep impact on me. For all the shortcomings and ills that India has, would I be happy anywhere else, even if I could leave? The answers are not so easy. Most NRIs (Non-Resident Indians) who emigrated for good did it not so much for themselves as for their children, so the children would hopefully have a better upbringing, better facilities and better opportunities than they would have got here.

I have talked to more than a few NRIs, and while most of them give standardized answers, after talking a while, and a couple of beers or their favourite alcohol later, you come across deeply conflicted human beings whose heart is in India while their job, profession and money interests compel them to stay in the country where they are now.

And Indian movies don’t make it easier for the Indian populace when trying to integrate into a new place. Some of the biggest hits of yesteryear were about keeping a distinct Indian culture in the new country, while the message of most countries is integration. I know friends living in Germany who have to struggle through their German in order to be counted as citizens; the same I guess is true of other countries as well, not just for the language but the customs too. They probably also struggle with learning more than one language, and with an amalgamation of values which somehow they and their children have to make sense of.

I was mildly shocked last week to learn that Mishi Choudary had to train people in the U.S. to differentiate between the Afghan and the Punjabi styles of wearing a turban. A simple search for ‘Afghani turban’ and ‘Punjabi turban’ reveals that there are a lot of differences between the two cultures. In fact, in the way they talk and the way they walk, there is a lot that differentiates the two cultures.

The second shocking video was of an African-American man racially abusing an Indian-American girl. At first I didn’t believe it, till I saw the video on Facebook.

My point through all this is that humour, that clean, simple exercise which brings a smile to you and uplifts the spirit, no longer seems to be as easy as it once was.

Comments, suggestions, criticisms all are welcome.


Filed under: Miscellenous Tagged: #Elusive, #Fear, #hind-sight, #Humour, #immigrant, #integration, #Mishi Choudary, #refugee, #Robin Williams, #self-deprecating, #SFLC, #two-minds

05 March, 2017 03:43AM by shirishag75

March 04, 2017

Simon Josefsson

GPS on Replicant 6

I use Replicant on my main Samsung S3 mobile phone. Replicant is a fully free Android distribution. One consequence of "fully free" is that some functionality does not work properly, because the hardware requires non-free software. I am in the process of upgrading my main phone to the latest beta builds of Replicant 6. Getting GPS to work on Replicant/S3 is not that difficult. I have made the decision that I am willing to compromise a bit on freedom for my Geocaching hobby. I have written before about how to get GPS to work on Replicant 4.0 and GPS on Replicant 4.2. When I upgraded to Wolfgang's Replicant 6 build back in September 2016, it took some time to figure out how to get GPS to work. I prepared notes on non-free firmware on Replicant 6 which included a section on getting GPS to work. Unfortunately, that method requires that you build your own image and have access to the build tree, which is not for everyone. This writeup explains how to get GPS to work on Replicant 6 without building your own image. Wolfgang already explained how to add all other non-free firmware to Replicant 6, but that did not cover GPS. The reason is that GPS requires non-free software to run on your main CPU. You should understand the consequences of this before proceeding!

The first step is to download a Replicant 6.0 image; currently they are available from the Replicant 6.0 forum thread. Download the replicant-6.0-i9300.zip file and flash it to your phone as usual. Make sure everything (except GPS of course) works after loading any other non-free firmware you may want (Wifi, Bluetooth etc) using "./firmwares.sh i9300 all". You can install the Geocaching client c:geo via fdroid by adding fdroid.cgeo.org as a separate repository. Start the app and verify that GPS does not work. Keep the replicant-6.0-i9300.zip file around, you will need it later.

The tricky part about GPS is that the daemon is started through the init system of Android, specified by the file /init.target.rc. Replicant ships with the GPS part commented out. To modify this file, we need to bring out our little toolbox. Modifying the file on the device itself will not work: the root filesystem is extracted from a ramdisk file on every boot, so any changes made to the file will not be persistent. The file /init.target.rc is stored in the boot.img ramdisk, and that is the file we need to modify to make a persistent modification.

First we need the unpackbootimg and mkbootimg tools. If you are lucky, you might find them pre-built for your operating system; I am using Debian and couldn't find them easily. Building them from scratch is however not that difficult. Assuming you have a normal build environment (i.e., apt-get install build-essential), try the following to build the tools. I was inspired by a post on unpacking and editing boot.img for some of the following instructions.

git clone https://github.com/CyanogenMod/android_system_core.git
cd android_system_core/
git checkout cm-13.0 
cd mkbootimg/
gcc -o ./mkbootimg -I ../include ../libmincrypt/*.c ./mkbootimg.c
gcc -o ./unpackbootimg -I ../include ../libmincrypt/*.c ./unpackbootimg.c
sudo cp mkbootimg unpackbootimg /usr/local/bin/

You are now ready to unpack the boot.img file. You will need the replicant ZIP file in your home directory. Also download the small patch I made for the init.target.rc file: https://gitlab.com/snippets/1639447. Save the patch as replicant-6-gps-fix.diff in your home directory.

mkdir t
cd t
unzip ~/replicant-6.0-i9300.zip 
unpackbootimg -i ./boot.img
mkdir ./ramdisk
cd ./ramdisk/
gzip -dc ../boot.img-ramdisk.gz | cpio -imd
patch < ~/replicant-6-gps-fix.diff 

Assuming the patch applied correctly (you should see output like "patching file init.target.rc" at the end) you will now need to put the ramdisk back together.

# repack the ramdisk deterministically, with all files owned by root
find . ! -name . | LC_ALL=C sort | cpio -o -H newc -R root:root | gzip > ../new-boot.img-ramdisk.gz
cd ..
# reassemble boot.img from the pieces unpackbootimg extracted
mkbootimg --kernel ./boot.img-zImage \
--ramdisk ./new-boot.img-ramdisk.gz \
--second ./boot.img-second \
--cmdline "$(cat ./boot.img-cmdline)" \
--base "$(cat ./boot.img-base)" \
--pagesize "$(cat ./boot.img-pagesize)" \
--dt ./boot.img-dt \
--ramdisk_offset "$(cat ./boot.img-ramdisk_offset)" \
--second_offset "$(cat ./boot.img-second_offset)" \
--tags_offset "$(cat ./boot.img-tags_offset)" \
--output ./new-boot.img

Reboot your phone to the bootloader:

adb reboot bootloader

Then flash the new boot image back to your phone:

heimdall flash --BOOT new-boot.img

The phone will restart. To finalize things, you need the non-free GPS software components: glgps, gps.exynos4.so and gps.cer. Previously I used a complicated method involving sdat2img.py to extract these files from a CyanogenMod 13.x archive. Fortunately, Lineage OS now offers downloads containing the relevant files too. You will need to download the archive, extract it, and load the files onto your phone:

wget https://mirrorbits.lineageos.org/full/i9300/20170125/lineage-14.1-20170125-experimental-i9300-signed.zip
mkdir lineage
cd lineage
unzip ../lineage-14.1-20170125-experimental-i9300-signed.zip
# restart adbd as root and remount /system read-write
adb root
adb wait-for-device
adb remount
# copy the GPS daemon, HAL library and certificate into place
adb push system/bin/glgps /system/bin/
adb push system/lib/hw/gps.exynos4.vendor.so /system/lib/hw/gps.exynos4.so
adb push system/bin/gps.cer /system/bin/

Now reboot your phone, start c:geo, and it should find some satellites. Congratulations!

04 March, 2017 05:08PM by simon

intrigeri

Contribute your skills to Debian in Paris, May 13-14 2017

(There is a French translation of this announcement.)

Join us in Paris, on May 13-14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian.

The upcoming version of Debian, called Stretch, will be released later this year. We need you to help us make it awesome :)

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated.

Here's what we will be doing:

  • We will test the upcoming version of Debian and gather all kinds of feedback.
  • We will identify problems about graphics and design in Debian, and solve some of them.
  • We will triage bug reports that are blocking the release of the upcoming version of Debian.
  • Debian package maintainers will fix some of these bugs.

Goals and principles

Before diving into the exact content of this event, a few words from the organization team.

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet.

We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend.

We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic. Attending this event requires reading and respecting our Code of Conduct, which sets the standards of behaviour for the whole event, including communication (public and private) before, during and after. There will be a team ready to act if you feel you have been or are being harassed or made uncomfortable by an attendee.

We believe that money should not be a blocker for contributing to Debian. We will sponsor travel and find a place to sleep for as many enthusiastic attendees as possible. The space where this event will take place is wheelchair accessible. We are trying to organize translation into sign language (probably French Sign Language). Vegan food should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.

What we will be doing

There will be a number of stations, i.e. physical spaces dedicated to people with a given set of skills, each hosted by someone who is ready to make this space welcoming and productive.

A few stations are already organized, and are described below. If you want to host a station yourself, or would like us to organize another one, please let us know. For example, you may want to assess the state of Debian Stretch for a specific field of interest, be it audio editing, office work, network auditing or whatever you are doing with Debian :)

Test the upcoming version of Debian

We will test Debian Stretch and gather feedback. We are particularly interested in:

  • feedback about support for universal access technologies, such as screen readers;
  • feedback about the state of translations into your language;
  • the top 3 things you like or dislike most in the current version of Debian; the top 3 things you like or dislike most in the upcoming version of Debian;
  • general feelings about your experience with Debian!

Experienced Debian contributors will be ready to relay this feedback to the relevant teams so it is put to good use. Hypra collaborators will be there to bring a focus on universal access technologies.

Design and graphics

Truth be told, Debian lacks people who are good at design and graphics. There are definitely a good number of low-hanging fruits that can be tackled in a weekend, either in Debian per se, in upstream projects, or in Debian derivatives.

This station will be hosted by Juliette Belin. She designed the themes for the last two versions of Debian.

Triage Release Critical bugs

Bugs flagged as Release Critical are blocking the release of the upcoming version of Debian. To fix them, it helps to make sure the bug report documents the up-to-date status of the bug, and of its resolution. One does not need to be a programmer to do this work! For example, you can try and reproduce bugs in software you use... or in software you will discover. This helps package maintainers better focus their work.

This station will be hosted by Solveig. She has experience in bug triaging, in Debian and elsewhere. Cyril Brulebois, a long-time Debian developer, will provide support and advice upon request.

Fix Release Critical bugs

There will be a Bug Squashing Party.

Where? When? How to register?

See https://wiki.debian.org/BSP/2017/05/fr/Paris for the exact address and time.

Please register by the end of March if you want to attend this event!

If you have any questions, please get in touch with us.

04 March, 2017 04:33PM

March 03, 2017

Markus Koschany

My Free Software Activities in February 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

We have reached the end of Stretch’s development cycle, a phase called full freeze. That means packages may only migrate to Testing, aka Stretch, after approval by the release team. Changes must be minimal and only address important or release critical bugs. This is usually the time when I stop uploading new upstream releases to unstable to avoid any disruptions. Of course there are exceptions, but if you are unsure, best practice is to use experimental instead. A lot of RC bugs are still open and affect the next release. In February I was able to close five of them and triage two more.

Debian Games

  • I packaged new upstream releases of Bullet (2.86 and 2.86.1), a 3D physics engine, after I was informed by Debian’s OpenMW maintainer (who is also one of the upstream developers) that this would fix a couple of issues for them.
  • Debian Bug #848063 – ri-li: FTBFS randomly (Impossible d’initialiser SDL: Couldn’t open X11 display — French for “Could not initialize SDL”): I usually note bug fixes very briefly but this one deserves some extra information. Apparently ri-li randomly failed to build from source on the bug reporter’s build system. The error message indicated that an X11 display could not be opened. For those who wonder why an X11 display is required to build a package from source: ri-li is a game and comes with a small development program, MakeDat, to build the data files from source. The program is only needed at build time but it requires some SDL functions to work properly. During the compilation step MakeDat tries to initialize SDL, and it requires an X11 display to do that. Ri-Li uses the xvfb-run wrapper to create a virtual X server environment and then executes MakeDat. I hadn’t needed to touch the package for more than two years and, needless to say, ri-li had always worked so far. I agreed that this was probably a regression in one of ri-li’s dependencies. I immediately suspected xvfb and the xvfb-run script as the most likely cause of the build failures, and after some investigation on the Internet I uploaded a new revision using the “-a” switch for xvfb-run. Unfortunately that didn’t work as expected. On the other hand I noticed that the package built fine on the official buildd network for all release architectures, not to mention on my own system. I decided that severity important would be the appropriate severity level for this issue because the majority of users were unaffected and the claim that the package failed to build 99 % of the time was just wrong.

    So much for the prologue. The bug reporter disagreed with the bug severity and insisted on making #848063 release critical again. Since nobody in the Games Team wanted to do that, and there were more people in a similar situation who disagreed with such a move, a thread was started on the debian-devel mailing list. I stayed away from it, mainly because I had already participated in the same discussions before, where I got the impression that concerns were simply ignored. Other people also made good responses and expressed my views, for instance here, here and here.

    In my opinion Debian is more than just an operating system and “not an academic exercise”. I really do think that a package which fails to build from source is a bug and should be fixed, but not every FTBFS is release critical; that’s why we have, for example, release architectures and ports. We already make distinctions and we don’t support every possible hardware configuration. If a package FTBFS on my laptop because 64 MB of RAM or a 6 GB hard disk don’t cut it anymore, I’m not going to file RC bugs against the package in question; I’ll try again with a slightly more powerful machine. RC bugs are a big hammer and we should be really considerate when we file them because, as a consequence, if we can’t fix them in time the package will not be part of the next stable release, or may even be removed from Debian. We certainly don’t have a shortage of bugs, and if there is disagreement we should make case-by-case decisions and not blindly act “by the book”. Threatening to escalate bugs to Debian’s Technical Committee isn’t helpful either. I am not saying that random build failures should be ignored. There are tests which are designed to fail 50 % of the time; obviously they are not very useful for Debian. But we also have many tests that check for real-life situations, which require a specific amount of memory and disk space. I think it is a shame that we have to disable those tests, or even the whole test suite, if they work locally and on the official buildd network but not in a custom build environment. I fear we don’t make Debian better but instead “verschlimmbessern” it (German for making something worse while trying to improve it). Last but not least, bug #848063 was never about a single- vs. multi-core CPU issue; even the bug reporter agreed with this statement, but apparently a lot of people who commented on debian-devel never fully read the bug report or followed it closely enough.

Debian Java

Debian LTS

This was my twelfth month as a paid contributor and I have been paid to work 13 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 6 February until 13 February I was in charge of our LTS frontdesk. I triaged security issues in mp3splt, spice, gnome-keyring, irssi, gtk-vnc, php5, openpyxl, postfixadmin, sleekxmpp, mcabber, psi-plus, vim, mupdf, netpbm-free and libplist.
  • DLA-820-1. Issued a security update for viewvc fixing 1 CVE.
  • DLA-823-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-825-1. Issued a security update for spice fixing 2 CVEs.
  • DLA-823-2. Issued a regression update for tomcat7.
  • DLA-834-1. Issued a security update for phpmyadmin fixing 1 CVE.
  • DLA-835-1. Issued a security update for cakephp fixing 1 CVE.
  • DLA-840-1. Issued a security update for libplist fixing 2 CVEs.

Non-maintainer uploads

Thanks for reading and see you next time.

03 March, 2017 05:11PM by Apo

Petter Reinholdtsen

Norwegian Bokmål translation of The Debian Administrator's Handbook complete, proofreading in progress

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proofreading to ensure consistent language and correct use of computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proofreading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition of the book in A4 format (the final book will have smaller pages) is created every morning and is available for proofreading. If you find any errors, please visit Weblate and correct the error. The state of the translation, including figures, is a useful source for those providing Norwegian Bokmål screenshots and figures.

03 March, 2017 01:50PM

Michal Čihař

Weblate 2.12

Weblate 2.12 has been released today, a few days behind schedule. It brings improved screenshot management, better search-and-replace features and improved import. Many of the new features were already announced in a previous post, where you can find more details about them.

Full list of changes:

  • Improved admin interface for groups.
  • Added support for Yandex Translate API.
  • Improved speed of sitewide search.
  • Added project and component wide search.
  • Added project and component wide search and replace.
  • Improved rendering of inconsistent translations.
  • Added support for opening source files in local editor.
  • Added support for configuring visual keyboard with special characters.
  • Improved screenshot management with OCR support for matching source strings.
  • Default commit message now includes translation information and URL.
  • Added support for Joomla translation format.
  • Improved reliability of import across file formats.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English phpMyAdmin SUSE Weblate

03 March, 2017 11:00AM

March 02, 2017

Joey Hess

what I would ask my lawyers about the new Github TOS

The Internet saw Github's new TOS yesterday and collectively shrugged.

That's weird...

I don't have any lawyers, but the way Github's new TOS is written, I feel I'd need to consult with lawyers to understand how it might affect the license of my software if I hosted it on Github.

And the license of my software is important to me, because it is the legal framework within which my software lives or dies. If I didn't care about my software, I'd be able to shrug this off, but since I do it seems very important indeed, and not worth taking risks with.

If I were looking over the TOS with my lawyers, I'd ask these questions...


4 License Grant to Us

This seems to be saying that I'm granting an additional license to my software to Github. Is that right or does "license grant" have some other legal meaning?

If the Free Software license I've already licensed my software under allows for everything in this "License Grant to Us", would that be sufficient, or would my software still be licensed under two different licenses?

There are violations of the GPL that can revoke someone's access to software under that license. Suppose that Github took such an action with my software, and their GPL license was revoked. Would they still have a license to my software under this "License Grant to Us" or not?

"Us" is actually defined earlier as "GitHub, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees". Does this mean that if someone say, does some brief contracting with Github, that they get my software under this license? Would they still have access to it under that license when the contract work was over? What does "affiliates" mean? Might it include other companies?

Is it even legal for a TOS to require a license grant? Don't license grants normally involve an intentional action on the licensor's part, like signing a contract or writing a license down? All I did was load a webpage in a browser and see on the page that by loading it, they say I've accepted the TOS. (I then set about removing everything from Github.)

Github's old TOS was not structured as a license grant. What reasons might they have for structuring this TOS in such a way?

Am I asking too many questions only 4 words into this thing? Or not enough?

Your Content belongs to you, and you are responsible for Content you post even if it does not belong to you. However, we need the legal right to do things like host it, publish it, and share it. You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.

If this is a software license, the wording seems rather vague compared with other software licenses I've read. How much wiggle room is built into that wording?

What are the chances that, if we had a dispute and this came before a judge, Github's lawyers would be able to find a creative reading of this that makes "do things like" include whatever they want?

Suppose that my software is javascript code or gets compiled to javascript code. Would this let Github serve up the javascript code for their users to run as part of the process of rendering their website?

That means you're giving us the right to do things like reproduce your content (so we can do things like copy it to our database and make backups); display it (so we can do things like show it to you and other users); modify it (so our server can do things like parse it into a search index); distribute it (so we can do things like share it with other users); and perform it (in case your content is something like music or video).

Suppose that Github modifies my software, does not distribute the modified version, but converts it to javascript code and distributes that to their users to run as part of the process of rendering their website. If my software is AGPL licensed, they would be in violation of that license, but doesn't this additional license allow them to modify and distribute my software in such a way?

This license does not grant GitHub the right to sell your Content or otherwise distribute it outside of our Service.

I see that "Service" is defined as "the applications, software, products, and services provided by GitHub". Does that mean at the time I accept the TOS, or at any point in the future?

If Github tomorrow starts providing, say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not?

If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license grant that they want to my software?


5 License Grant to Other Users

Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).

Let's say that company Foo does something with my software that violates its GPL license and the license is revoked. So they no longer are allowed to copy my software under the GPL, but it's there on Github. Does this "License Grant to Other Users" give them a different license under which they can still copy my software?

The word "fork" has a particular meaning on Github, which often includes modification of the software in a repository. Does this mean that other users could modify my software, even if its regular license didn't allow them to modify it or had been revoked?

How would this use of the platform-specific term "fork" be interpreted if this license were being analyzed in a courtroom?

If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.

This paragraph seems entirely innocuous. So, what does your keen lawyer mind see in it that I don't?

How sure are you about your answers to all this? We're fairly sure we know how well the GPL holds up in court; how well would your interpretation of all this hold up?

What questions have I forgotten to ask?


And finally, the last question I'd be asking my lawyers:

What's your bill come to? That much? Is using Github worth that much to me?

02 March, 2017 07:35PM

Jonathan McDowell

Rational thoughts on the GitHub ToS change

I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.

First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.

The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.

Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.

D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.

D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.

Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.

D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.

This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.

D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.

D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.

D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular this right is revocable so in the event they do something you don’t like you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub’s whole model is providing attribution for changesets and tracking such changes over time, so it’s hard to understand exactly where the service falls down on ensuring the provenance of content is clear.

There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.

02 March, 2017 06:13PM

Antoine Beaupré

A short history of password hashers

These are notes from my research that led to the publication of the password hashers article. This article is more technical than the previous ones and compares the various cryptographic primitives and algorithms used in the software I have reviewed. The criteria for inclusion on this list are fairly vague: I mostly included a password hasher if it was significantly different from previous implementations in some way, and I have included all the major ones I could find as well.

The first password hashers

Nic Wolff claims to be the first to have written such a program, all the way back in 2003. Back then the hashing algorithm was MD5, although Wolff has since updated the algorithm to use SHA-1 and still maintains his webpage for public use. Another ancient but unrelated implementation is Stanford University Applied Cryptography's pwdhash software. That implementation was published in 2004 and unfortunately never updated: it still uses MD5 as a hashing algorithm, but at least it uses HMAC to generate tokens, which makes the use of rainbow tables impractical. Those implementations are the simplest password hashers: the inputs are simply the site URL and a password. So the algorithms are, basically, for Wolff's:

token = base64(SHA1(password + domain))

And for Stanford's PwdHash:

token = base64(HMAC(MD5, password, domain))
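
For illustration, here is roughly what these two constructions look like in Python (my own sketch, not code from either project):

import base64
import hashlib
import hmac

def wolff_token(password, domain):
    # token = base64(SHA1(password + domain))
    digest = hashlib.sha1((password + domain).encode()).digest()
    return base64.b64encode(digest).decode()

def pwdhash_token(password, domain):
    # token = base64(HMAC(MD5, password, domain))
    digest = hmac.new(password.encode(), domain.encode(), hashlib.md5).digest()
    return base64.b64encode(digest).decode()

print(wolff_token("correct horse", "example.com"))
print(pwdhash_token("correct horse", "example.com"))

Note that a single fast hash (or HMAC) does nothing to slow down a brute-force attack on the master password, which is the weakness the later implementations try to address with key derivation functions.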

SuperGenPass

Another unrelated implementation that is still around is SuperGenPass, a bookmarklet created around 2007. It originally used MD5 as well; it now supports SHA-512, although the output is still limited to 24 characters as with MD5 (which needlessly limits the entropy of the resulting password), and it still defaults to MD5 with too few rounds (10, when key derivation recommendations are generally around 10,000, so that it is slower to bruteforce).

Note that Chris Zarate, the SuperGenPass author, actually credits Nic Wolff as the inspiration for his implementation. SuperGenPass is still in active development and is available for the browser (as a bookmarklet) or mobile (as a webpage). SuperGenPass allows you to modify the password length, but also to add an extra profile secret, which is mixed into the password and generates a personalized identicon, presumably to prevent phishing. This also introduces an interesting protection, the profile-specific secret, found again later in Password Hasher Plus. So the SuperGenPass algorithm looks something like this:

token = base64(SHA512(password + profileSecret + ":" + domain, rounds))
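
A rough sketch of the iterated hashing in Python (the real bookmarklet uses a modified base64 alphabet and keeps re-hashing until the output passes its password-policy checks, details I omit here):

import base64
import hashlib

def supergenpass_like(password, domain, profile_secret="", rounds=10, length=24):
    # hash once, then keep re-hashing the base64 output for the remaining rounds
    data = password + profile_secret + ":" + domain
    for _ in range(rounds):
        digest = hashlib.sha512(data.encode()).digest()
        data = base64.b64encode(digest).decode()
    return data[:length]  # output is capped, which discards entropy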

The Wijjo Password Hasher

Another popular implementation is the Wijjo Password Hasher, created around 2006. It was probably the first to ship as a browser extension, which greatly improved the security of the product as users didn't have to continually download the software on the fly. Wijjo's algorithm also improved on the above algorithms, as it uses HMAC-SHA-1 instead of plain SHA-1 or HMAC-MD5, which makes it harder to recover the plaintext. Password Hasher allows you to set different password policies (use of digits, punctuation, mixed case, special characters, and password length) and saves the site names it uses for future reference. It also happens that the Wijjo Password Hasher, in turn, took its inspiration from a different project, hashapass.com, created in 2006 and also based on HMAC-SHA-1. Indeed, a hashapass password "can easily be generated on almost any modern Unix-like system using the following command line pattern":

echo -n parameter \
| openssl dgst -sha1 -binary -hmac password \
| openssl enc -base64 \
| cut -c 1-8

So the algorithm here is obviously:

token = base64(HMAC(SHA1, password, domain + ":" + counter))[:8]

... although in the case of Password Hasher, there is a special routine that takes the token and inserts random characters in locations determined by the sum of the values of the characters in the token.

Password Hasher Plus

Years later, in 2010, Eric Woodruff ported the Wijjo Password Hasher to Chrome and called it Password Hasher Plus. Like the original Password Hasher, the "plus" version also keeps those settings in the extension and uses HMAC-SHA-1 to generate the password, as it is designed to be backwards-compatible with the Wijjo Password Hasher. Woodruff did add one interesting feature: a profile-specific secret key that gets mixed in to create the security token, like what SuperGenPass does now. Stealing the master password is therefore not enough to generate tokens anymore. This solves one security concern with Password Hasher: a hostile page could watch your keystrokes, steal your master password, and use it to derive passwords on other sites. Having a profile-specific secret key that is not accessible to the site's Javascript works around that issue, but typing the master password directly into the password field, while convenient, is just a bad idea, period. The final algorithm looks something like:

token = base64(HMAC(SHA1, password, base64(HMAC(SHA1, profileSecret, domain + ":" + counter))))

Honestly, that seems rather strange, but it's what I read from the source code, which nowadays is available only after decompressing the extension. I would have expected the simpler version:

token = base64(HMAC(SHA1, HMAC(SHA1, profileSecret, password), domain + ":" + counter))

The idea here would be to "hide" the master password from bruteforce attacks as soon as possible... But maybe the two are equivalent.

Regardless, Password Hasher Plus then takes the token and applies the same special character insertion routine as the Password Hasher.
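
To make the nested construction concrete, here is my reading of the algorithm above as a Python sketch (not code from the extension, and omitting the character-insertion routine):

import base64
import hashlib
import hmac

def hmac_sha1_b64(key, message):
    digest = hmac.new(key.encode(), message.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def ph_plus_token(master_password, profile_secret, domain, counter=""):
    # inner HMAC: keyed with the profile secret over the site data
    inner = hmac_sha1_b64(profile_secret, domain + ":" + counter)
    # outer HMAC: keyed with the master password over the inner result
    return hmac_sha1_b64(master_password, inner)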

LessPass

Last year, Guillaume Vincent, a French self-described "humanist and scuba diving fan", released the LessPass extension for Chrome, Firefox and Android. LessPass introduces several interesting features. It is probably the first to include a command line version. It also uses a more robust key derivation algorithm (PBKDF2) and takes into account the username on the site, allowing multi-account support. The original release (version 1) used only 8192 rounds, which is now considered too low. In the bug report it was interesting to note that LessPass couldn't follow the usual practice of running the key derivation for 1 second to determine the number of rounds needed, as the results need to be deterministic.

At first glance, the LessPass source code seems clear and easy to read, which is always a good sign, but of course the devil is in the details. One key feature from Password Hasher Plus that is missing here is the profile-specific seed, although, as far as I know, it should be impossible for a hostile web page to steal keystrokes from a browser extension.

The algorithm then gets a little more interesting:

entropy = PBKDF2(SHA256, masterPassword, domain + username + counter, rounds, length)
where
    rounds=10000
    length=32

entropy is then used to pick characters to match the chosen profile.
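
In Python, the derivation step maps directly onto hashlib.pbkdf2_hmac. A minimal sketch of the derivation only (the exact counter encoding and the profile-based character picking are LessPass details I gloss over here):

import hashlib

def lesspass_entropy(master_password, domain, username, counter=1,
                     rounds=10000, length=32):
    # entropy = PBKDF2(SHA256, masterPassword, domain + username + counter)
    salt = (domain + username + str(counter)).encode()
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, rounds, dklen=length)

print(lesspass_entropy("correct horse", "example.com", "alice").hex())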

Regarding code readability, I quickly got confused by the PBKDF2 implementation: SubtleCrypto.ImportKey() doesn't seem to support PBKDF2 in the API, yet that's how it is used there... Is it just something to extract key material? We see later what looks like a more standard AES-based PBKDF2 implementation, but this code just looks strange to me. It could be my unfamiliarity with newer Javascript coding patterns, however.

There is also a LessPass-specific character-picking routine that is not base64, and different from the original Password Hasher algorithm.

Master Password

A review of password hashers would hardly be complete without mentioning Master Password and its elaborate algorithm. While the applications surrounding the project are not as refined (there is no web browser plugin and the web interface can't easily be turned into a bookmarklet), the algorithm has been well developed. Of all the password managers reviewed here, Master Password uses one of the strongest key derivation algorithms out there, scrypt:

key = scrypt( password, salt, cost, size, parallelization, length )
where
salt = "com.lyndir.masterpassword" + len(username) + name
cost = 32768
size = 8
parallelization = 2
length = 64
entropy = hmac-sha256(key, "com.lyndir.masterpassword" + len(domain) + domain + counter )
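
Python's hashlib exposes scrypt (when built against a recent OpenSSL), so the two phases can be sketched as follows. The byte-level framing of the salt and message (the length prefixes in particular) is my approximation of the description above, not the project's reference code:

import hashlib
import hmac
import struct

SCOPE = b"com.lyndir.masterpassword"

def master_key(master_password, name):
    # salt = scope + len(name) + name, length packed as a 32-bit integer
    salt = SCOPE + struct.pack(">I", len(name)) + name.encode()
    return hashlib.scrypt(master_password.encode(), salt=salt,
                          n=32768, r=8, p=2, dklen=64,
                          maxmem=64 * 1024 * 1024)  # n=32768, r=8 needs ~32 MiB

def site_entropy(key, domain, counter=1):
    # entropy = HMAC-SHA256(key, scope + len(domain) + domain + counter)
    message = (SCOPE + struct.pack(">I", len(domain)) + domain.encode()
               + struct.pack(">I", counter))
    return hmac.new(key, message, hashlib.sha256).digest()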

Master Password then uses one of 6 sets of "templates" specially crafted to be "easy for a user to read from a screen and type using a keyboard or smartphone" and "compatible with most site's password policies", our "transferable" criteria defined in the first passwords article. For example, the default template mixes vowels, consonants, numbers and symbols, but carefully avoids visually similar characters like O and 0 or i and 1 (although it does mix 1 and l, oddly enough).

The main strength of Master Password seems to be the clear definition of its algorithm (although hashapass.com does give out OpenSSL command line examples...), which led to its reuse in another application called freepass. The Master Password app also doubles as a stateful password manager...

Other implementations

I had also considered including easypasswords, which uses PBKDF2-HMAC-SHA1, in my list of recommendations. I discovered only recently that its author wrote a detailed review of many more password hashers and scores them according to their relative strength. In the end, I covered LessPass instead, since the design is very similar and LessPass does seem a bit more usable. Covering LessPass also allowed me to show the contrast and issues regarding the algorithm changes, for example.

It is also interesting to note that the EasyPasswords author has criticized the Master Password algorithm quite severely:

[...] scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is combined with the site name via SHA-256 hashing then. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary[...]

During a discussion with the Master Password author, he outlined that "there is nothing "easy" about brute-force deriving a 64-byte key through a SHA-256 algorithm." SHA-256 is used in the last stage because it is "extremely fast". scrypt is used as a key derivation algorithm to generate a large secret and is "intentionally slow": "we don't want it to be easy to reverse the master password from a site password". "But it's unnecessary for the second phase because the input to the second phase is so large. A master password is tiny, there are only a few thousand or million possibilities to try. A master key is 8^64, the search space is huge. Reversing that doesn't need to be made slower. And it's nice for the password generation to be fast after the key has been prepared in-memory so we can display site passwords easily on a mobile app instead of having to lock the UI a few seconds for every password."

Finally, I considered covering Blum's Mental Hash (also covered here and elsewhere). This is an algorithm that can basically be run by the human brain directly. It's not for the faint of heart, however: if I understand it correctly, it requires remembering a password that is basically a string of 26 digits, plus computing modular arithmetic on the outputs. Needless to say, most people don't do modular arithmetic every day...

02 March, 2017 02:45PM

Guido Günther

Debian Fun in February 2017

Debian LTS

February marked the 22nd month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated which I used by:

  • the 2nd half of a LTS frontdesk week
  • an update to lts-cve-triage.py so we don't ignore undetermined issues anymore
  • testing the bind9 update prepared by Thorsten Alteholz
  • testing of apache2 packages prepared by Jonas Meurer and Antoine Beaupré
  • triaging of QEMU CVEs and fixing most of them, resulting in DLA-842-1

Other Debian stuff

  • libvirt and gtk-vnc uploads to fix CVEs in unstable and stretch
  • A git-buildpackage upload to unstable to unbreak importing large histories with import-dsc
  • Some CSS improvements for git-buildpackage to (hopefully) make the manual easier to read.

Some other Free Software activities

Nothing exciting, just some minor fixes at several places:

02 March, 2017 10:15AM

Martín Ferrari

Prometheus in Jessie(bpo) and Stretch

Just over 2 years ago, I started packaging Prometheus for Debian. It was a daunting task, mainly because the Golang ecosystem is so young that almost none of its dependencies were packaged, and upstream practices are very different from Debian's.

Today I want to announce that this is complete.

The main part of the Prometheus stack is available in Debian testing, which very soon will become the Stretch release:

These are available for a bunch of architectures (sadly, not all of them), and are the most important building blocks to deploy Prometheus in your network.

I have also finished preparing backports for all the required dependencies, and jessie-backports has a complete Prometheus stack too!

Adding to these, the archive already has the client libraries for Go, Perl, Python, and Django; and a bunch of other exporters. Except for the Postgres exporter, most of these are going to be in the Stretch release, and I plan to prepare backports for Jessie too:

Note that not all of these have been packaged by me, luckily other Debianites are also working on Prometheus packaging!

I am confident that Prometheus is going to become one of the main monitoring tools in the near future, and I am very happy that Debian is the first distribution to offer official packages for it. If you are interested, there is still lots of work ahead. Patches, bug reports, and co-maintainers are welcome!

Update 3/3/2017: Today the Perl client library was uploaded to unstable, and it is waiting for ftp-master approval.


02 March, 2017 07:31AM

Hideki Yamane

Debian docker image is smaller than Oracle Linux 7

From Oracle Developers Blog,
We've just introduced a new base Oracle Linux 7-slim Docker image that's a minuscule 114MB. Ok, so it's not quite as small as Alpine, but it's now the smallest base image of any of the major distributions. Check out the numbers in the graph to see just how small.
It's not fair, Oracle. If you talk about a -slim image for Oracle Linux, then you should do the same for the other distros, too.
$ sudo docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
debian              jessie-slim         232f5cd0c765        2 days ago          80 MB
debian              jessie              978d85d02b87        2 days ago          123 MB
oraclelinux         7-slim              f005b5220b05        8 days ago          114 MB
Debian's jessie-slim image is 80MB, smaller than the oraclelinux 7-slim image.

And we're going to move to Debian 9 "Stretch":
debian              stretch-slim        02ee50628785        2 days ago          57.1 MB
debian              stretch             6f4d77d39d73        2 days ago          100 MB
The default Stretch image is already smaller than Oracle's -slim image, and the stretch-slim image is half its size. Nice, isn't it? :)

02 March, 2017 05:03AM by Hideki Yamane (noreply@blogger.com)