September 23, 2018

hackergotchi for Charles Plessy

Charles Plessy

I moved to Okinawa!

I moved with my family to Okinawa in August, to the Akano neighborhood of Uruma city. We arrived in time to see a number of eisaa, traditional dances using lots of drums, which often take place at the end of August. Each neighborhood has its own band, and we hope we can join next year.

We live in a concrete building with a shared optical fiber connection. It has a good ping to the mainland, but the speed for big downloads is catastrophic in the evenings, when all the families are using the fiber at the same time. It is impossible to run even a simple sbuild-update -dragu unstable, and I have not been able to contribute anything to Debian since then. It is frustrating; however, there might be solutions through our GitLab forge.

On the work side, I joined the Okinawa Institute of Science and Technology Graduate University (OIST). It is a wonderful place, open to the public (https://www.oist.jp/unguided-campus-visit) even on weekends (note the opening hours of the café). If you come visit, please let me know!

23 September, 2018 12:33PM

September 22, 2018

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

Handling an old Digital Photo Frame (AX203) with Debian (and gphoto2)

Some days ago I found a key chain at home that was a small digital photo frame, and it seems it had not been used since 2009 (old times, when I was not yet using Debian at home). The photo frame was still working (I connected it with a USB cable and after some seconds it turned on), and indeed it showed 37 photos from 2009.

When I connected it to the computer with the USB cable, it asked “Connect USB? Yes/No”. I pressed the button to answer “yes” and nothing happened on the computer (I was expecting a USB drive to show up in Dolphin, but no).

I looked at the “dmesg” output and the device showed up as a CD-ROM:

[ 1620.497536] usb 3-2: new full-speed USB device number 4 using xhci_hcd
[ 1620.639507] usb 3-2: New USB device found, idVendor=1908, idProduct=1320
[ 1620.639513] usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 1620.639515] usb 3-2: Product: Photo Frame
[ 1620.639518] usb 3-2: Manufacturer: BUILDWIN
[ 1620.640549] usb-storage 3-2:1.0: USB Mass Storage device detected
[ 1620.640770] usb-storage 3-2:1.0: Quirks match for vid 1908 pid 1320: 20000
[ 1620.640807] scsi host7: usb-storage 3-2:1.0
[ 1621.713594] scsi 7:0:0:0: CD-ROM buildwin Photo Frame 1.01 PQ: 0 ANSI: 2
[ 1621.715400] sr 7:0:0:0: [sr1] scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray
[ 1621.715745] sr 7:0:0:0: Attached scsi CD-ROM sr1
[ 1621.715932] sr 7:0:0:0: Attached scsi generic sg1 type 5

But it was not automounted.
I mounted it and looked at the files, but I couldn't find any photos there, only these files:

Autorun.inf FEnCodeUnicode.dll LanguageUnicode.ini
DPFMate.exe flashlib.dat StartInfoUnicode.ini

The Autorun.inf file was pointing to the DPFMate.exe file.

I connected the device to a Windows computer, where I could run DPFMate.exe, which turned out to be a program to manage the photos on the device.

I was wondering if I could manage the device from Debian, so I searched for «dpf “digital photo frame” linux dpfmate» and found this page:

http://www.penguin.cz/~utx/hardware/Abeyerr_Digital_Photo_Frame/

Yes, that one was my key chain!

I looked for gphoto in Debian at https://packages.debian.org/gphoto and learned that the program I needed to install was gphoto2.
I installed it and then went to its Quick Start Guide to learn how to access the device, get the photos, etc. In particular, I used these commands:

gphoto2 --auto-detect

Model Port 
----------------------------------------------------------
AX203 USB picture frame firmware ver 3.4.x usbscsi:/dev/sg1

gphoto2 --get-all-files

(it copied all the pictures that were in the photo frame to the current folder on my computer)

gphoto2 --upload-file=name_of_file

(to put a file in the photo frame)

gphoto2 --delete-file=1-38

(to delete files 1 to 38 in the photo frame).
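
Putting it all together, a typical session could look like the sketch below (my own example, not from the original post; the backup folder name is arbitrary, and the file range should match whatever gphoto2 reports for your frame):

mkdir -p ~/photo-frame-backup && cd ~/photo-frame-backup
gphoto2 --auto-detect          # check that the AX203 frame is detected
gphoto2 --get-all-files        # copy all its pictures to the current folder
gphoto2 --delete-file=1-38     # then remove them from the frame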

22 September, 2018 06:20PM by larjona

September 21, 2018

Mike Gabriel

You may follow me on Mastodon

I never fancied having accounts with the big players that much, so I never touched e.g. Twitter.

But Mastodon is the kind of service that works for me. You can find me on https://fosstodon.org.

My nick over there is sunweaver. I'll be posting interesting stuff about my work there, probably more regularly than on the blog.

21 September, 2018 10:15AM by sunweaver

September 20, 2018

hackergotchi for Daniel Pocock

Daniel Pocock

Resigning as the FSFE Fellowship's representative

I've recently sent the following email to fellows; I'm posting it here for the benefit of the wider community and also for any fellows who don't receive the email.


Dear fellows,

Given the decline of the Fellowship and FSFE's migration of fellows into a supporter program, I no longer feel that there is any further benefit that a representative can offer to fellows.

With recent blogs, I've made a final effort to fulfill my obligations to keep you informed. I hope fellows have a better understanding of who we are and can engage directly with FSFE without a representative. Fellows who want to remain engaged with FSFE are encouraged to work through your local groups and coordinators and attend the annual general meeting in Berlin on 7 October as active participation is the best way to keep an organization on track.

This resignation is not a response to any other recent events. From a logical perspective, if the Fellowship is going to evolve out of a situation like this, it is in the hands of local leaders and fellowship groups; it is no longer a task for a single representative.

There are many positive experiences I've had working with people in the FSFE community and I am also very grateful to FSFE for those instances where I have been supported in activities for free software.

Going forward, leaving this role will also free up time and resources for other free software projects that I am engaged in.

I'd like to thank all those of you who trusted me to represent you and supported me in this role during such a challenging time for the Fellowship.

Sincerely,

Daniel Pocock

20 September, 2018 05:15PM by Daniel.Pocock

Russell Coker

Words Have Meanings

As a follow-up to my post with Suggestions for Trump Supporters [1] I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try and reclaim the word for its original use or whether we should just accept that's a lost cause. But generally, based on context, it's clear which meaning is intended. There is also some overlap between the definitions: some people who like to experiment with computers conduct experiments with computers they aren't permitted to use. Some people who are career computer criminals started out experimenting with computers for fun.

But sometimes words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle; they should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn't add any value to any discussion apart from demonstrating the fact that the person who makes such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist” which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone was to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist, they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they said “Russia has always been shit” then it would be a clear statement, people can agree or disagree with that but everyone knows what is meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different to most of the world (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and Americans should reject the Republican party because of their Russian connection it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

20 September, 2018 01:10PM by etbe

hackergotchi for Lars Wirzenius

Lars Wirzenius

vmdb2 roadmap

I now have a rudimentary roadmap for reaching 1.0 of vmdb2, my Debian image building tool.

Visual roadmap

The visual roadmap is generated from the following YAML file:

vmdb2_1_0:
  label: |
    vmdb2 is production ready
  depends:
    - ci_builds_images
    - docs
    - x220_install

docs:
  label: |
    vmdb2 has a user
    manual of acceptable
    quality

x220_install:
  label: |
    x220 can install Debian
    onto a Thinkpad x220
    laptop

ci_builds_images:
  label: |
    CI builds and publishes
    images using vmdb2
  depends:
    - amd64_images
    - arm_images

amd64_images:
  label: |
    CI: amd64 images

arm_images:
  label: |
    CI: arm images of
    various kinds

20 September, 2018 08:06AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

binb 0.0.1: binb is not Beamer

Following a teaser tweet two days ago, we are thrilled to announce that binb version 0.0.1 arrived on CRAN earlier this evening.

binb extends a little running joke^Htradition I created a while back and joins three other CRAN packages offering RMarkdown integration:

  • tint for tint is not Tufte : pdf or html papers with a fresher variant of the famed Tufte style;
  • pinp for pinp is not PNAS : two-column pdf vignettes in the PNAS style (which we use for several of our packages);
  • linl for linl is not Letter : pdf letters

All four offer easy RMarkdown integration, leaning heavily on the awesome super-power of pandoc as well as general R glue.

This package (finally) wraps something I had offered for Metropolis via a simpler GitHub repo – a repo I put together more-or-less spur-of-the-moment-style when asked for it during the useR! 2016 conference. It also adds the lovely IQSS Beamer theme by Ista Zahn which offers a rather sophisticated spin on the original Metropolis theme by Matthias Vogelgesang.

We put two simple teasers on the GitHub repo.

Metropolis

Consider the following minimal example, adapted from the original minimal example at the bottom of the Metropolis page:

---
title: A minimal example
author: Matthias Vogelgesang
date: \today
institute: Centre for Modern Beamer Themes
output: binb::metropolis
---

# First Section

## First Frame

Hello, world!

It creates a three-page pdf file, which we converted into an animated gif for the post (losing font crispness, sadly).
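
To try it locally, rendering the snippet is a one-liner once binb, rmarkdown and a LaTeX installation are available (my own sketch, not from the original post; the file name minimal.Rmd is just an example):

Rscript -e 'rmarkdown::render("minimal.Rmd")'   # should produce minimal.pdf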

IQSS

Similarly, for IQSS we use the following input adapting the example above but showing sections and subsections for the nice headings it generates:

---
title: A minimal example
author: Ista Zahn
date: \today
institute: IQSS
output: binb::iqss
---

# First Section

## First Sub-Section

### First Frame

Hello, world!

# Second Section

## Second Subsection

### Second Frame

Another planet!

This creates a similar pdf file, which we again converted into an animated gif (also losing font crispness).

The initial (short) NEWS entry follows:

Changes in binb version 0.0.1 (2018-09-19)

  • Initial CRAN release supporting Metropolis and IQSS

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 September, 2018 01:55AM

September 19, 2018

hackergotchi for Gunnar Wolf

Gunnar Wolf

Privacy and Anonymity Colloquium • Activity program announced!

It's only two weeks to the beginning of the privacy and anonymity colloquium we will be holding at the Engineering Faculty of my University. Of course, it's not by mere chance that this colloquium starts just after the Tor Meeting, which will happen for the first time in Latin America (and in our city!).

So, even though changes are still prone to happen, I am happy to announce the activity program for the colloquium!

I know some people will ask, so: we don't have the infrastructure to commit to having a video feed from it. We will, though, record the presentations on video, and I have committed to the university to produce a book from it within a year's time. So, at some point in the future, I will be able to give you a full copy of the topics we will discuss!

But, if you are in Mexico City, no excuses: You shall come to the colloquium!

Attachments: poster.pdf (881.35 KB), poster_small.jpg (81.22 KB)

19 September, 2018 10:07PM by gwolf

hackergotchi for Johannes Schauer

Johannes Schauer

mmdebstrap: unprivileged reproducible multi-mirror Debian chroot in 11 s

I wrote an alternative to debootstrap. I call it mmdebstrap which is short for multi-mirror debootstrap. Its interface is very similar to debootstrap, so you can just do:

$ sudo mmdebstrap unstable ./unstable-chroot

And you'll get a Debian unstable chroot just as debootstrap would create it. It also supports the --variant option with minbase and buildd values which install the same package sets as debootstrap would.
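
For instance, anticipating a couple of the points below, a multi-mirror, reproducible run might look like the following sketch (my own example, not taken from the announcement; the second mirror and the timestamp are arbitrary illustrations):

# pinning SOURCE_DATE_EPOCH means a later rebuild with the same value
# should be bit-by-bit identical; further mirrors (or any legal apt
# sources.list entry) can simply be appended
$ sudo env SOURCE_DATE_EPOCH=1537308000 \
    mmdebstrap --variant=minbase unstable ./unstable-chroot \
    http://deb.debian.org/debian \
    http://ftp.de.debian.org/debian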

A list of advantages in contrast to debootstrap:

  • more than one mirror possible (or really anything that is a legal apt sources.list entry)
  • security and updates mirror included for Debian stable chroots (a wontfix for debootstrap)
  • 2-3 times faster (for debootstrap variants)
  • chroot with apt in 11 seconds (if only installing Essential: yes and apt)
  • gzipped tarball with apt is only 27M
  • bit-by-bit reproducible output (if $SOURCE_DATE_EPOCH is set)
  • unprivileged operation using Linux user namespaces, fakechroot or proot (mode is chosen automatically)
  • can operate on filesystems mounted with nodev
  • foreign architecture chroots with qemu-user (without manually invoking --second-stage)

You can find the code here:

https://gitlab.mister-muffin.de/josch/mmdebstrap

19 September, 2018 06:46PM

Mark Brown

2018 Linux Audio Miniconference

As in previous years we're trying to organize an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year's event will be held on Sunday October 21st in Edinburgh, the day before ELC Europe starts there. Cirrus Logic have generously offered to host this in their Edinburgh office:

7B Nightingale Way
Quartermile
Edinburgh
EH3 9EG

As with previous years let’s pull together an agenda through a mailing list discussion on alsa-devel – if you’ve got any topics you’d like to discuss please join the discussion there.

There’s no cost for the miniconference but if you’re planning to attend please sign up using the document here.

19 September, 2018 06:40PM by broonie

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 220 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for September).
  • Antoine Beaupré did 23.75 hours.
  • Ben Hutchings did 5 hours (out of 15 hours allocated + 8 extra hours, thus keeping 8 extra hours for September).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did not manage to work but returned all his hours to the pool (out of 23.75 hours allocated + 19.5 extra hours).
  • Holger Levsen did 10 hours (out of 8 hours allocated + 16 extra hours, thus keeping 14 extra hours for September).
  • Hugo Lefeuvre did nothing (out of 10 hours allocated, but he gave back those hours).
  • Markus Koschany did 23.75 hours.
  • Mike Gabriel did 6 hours (out of 8 hours allocated, thus keeping 2 extra hours for September).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 8 remaining hours, thus keeping 11.5 extra hours for September).
  • Roberto C. Sanchez did 6 hours (out of 18h allocated, thus keeping 12 extra hours for September).
  • Santiago Ruano Rincón did 8 hours (out of 20 hours allocated, thus keeping 12 extra hours for September).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased to 206 hours per month: we lost two sponsors and gained only one.

The security tracker currently lists 38 packages with a known CVE and the dla-needed.txt file has 24 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

19 September, 2018 08:41AM by Raphaël Hertzog

September 18, 2018

hackergotchi for Daniel Pocock

Daniel Pocock

What is the relationship between FSF and FSFE?

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try and document my own experiences of the issue; maybe some people will find this helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V). In both capacities, I feel uncomfortable about the current situation due to the confusion it creates in the community and the risk that volunteers or donors may be confused.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000 and the direct questions about this issue I feel it is becoming more important for both organizations to clarify the issue.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

18 September, 2018 11:21PM by Daniel.Pocock

hackergotchi for Joey Hess

Joey Hess

censored Amazon review of Sandisk Ultra 32GB Micro SDHC Card

★ counterfeits in amazon pipeline

The 32 gb card I bought here at Amazon turned out to be fake. Within days I was getting read errors, even though the card was still mostly empty.

The logo is noticeably blurry compared with a 32 gb card purchased elsewhere. Also, the color of the grey half of the card is subtly wrong, and the lettering is subtly wrong.

Amazon apparently has counterfeit stock in their pipeline; google "amazon counterfeit" for more.

You will not find this review on Sandisk Ultra 32GB Micro SDHC UHS-I Card with Adapter - 98MB/s U1 A1 - SDSQUAR-032G-GN6MA because it was rejected. As far as I can tell my review violates none of Amazon's posted guidelines. But it's specific about how to tell this card is counterfeit, and it mentions a real and ongoing issue that Amazon clearly wants to cover up.

18 September, 2018 06:03PM

Reproducible builds folks

Reproducible Builds: Weekly report #177

Here’s what happened in the Reproducible Builds effort between Sunday September 9 and Saturday September 15 2018:

Patches filed

diffoscope development

Chris Lamb made a large number of changes to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

These changes were then uploaded as diffoscope version 101.

Test framework development

There were a number of updates this month by Holger Levsen to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

18 September, 2018 05:35PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Digital Minimalism and Deep Work

Russ Allbery of the Debian project writes reviews of books he has read on his blog. It was through Russ's review that I learned of "Deep Work" by Cal Newport, and duly requested it from my local library.

I've a long-held skepticism of self-help books, but several aspects of this one strike the right notes for me. The author is a Computer Scientist, so there's a sense of kinship there, but the writing also follows the standard academic patterns of citing sources and a certain rigour to the new ideas that are presented. Despite this, there are a few sections of the book which I felt lacked much supporting evidence, or where some obvious questions of the relevant concept were not being asked. One of the case studies in the book is of a part-time PhD student with a full-time job and a young child, which I can relate to. The author obviously follows his own advice: he runs a productivity blog at calnewport.com and has no other social media presences. One of the key productivity tips he espouses in the book (and elsewhere) is simply "quit social media".

Through Newport's blog I learned that the title of his next book is Digital Minimalism. This intrigued me, because since I started thinking about minimalism myself, I've wondered about the difference of approach needed between minimalism in the "real world" and the digital domains. It turns out the topic of Newport's next book is about something different: from what I can tell, focussing on controlling how one spends one's time online for maximum productivity.

That's an interesting topic which I have more to write about at some point. However, my line of thought for the title "digital minimalism" spawned from reading Marie Kondo, Fumio Sakai and others. Many of the tips they offer to their readers revolve around moving meaning away from physical clutter and into the digital domain: scan your important papers, photograph your keepsakes, and throw away the physical copies. It struck me that whilst this was useful advice for addressing the immediate problem of clutter in the physical world, it exacerbates the problem of digital clutter, especially if we don't have good systems for effectively managing digital archives. Broadly speaking, I don't think we do: at least, not ones that are readily accessible to the majority of people. I have a hunch that most have no form of data backup in place at all, switch between digital hosting services on a relatively ad-hoc manner (flickr, snapchat, instagram…) and treat losing data (such as when an old laptop breaks, or a tablet or phone is stolen) as a fact of life, rather than something that could be avoided if our tools (or habits, or both) were better.

18 September, 2018 12:44PM

Russ Allbery

Review: The Collapsing Empire

Review: The Collapsing Empire, by John Scalzi

Series: Interdependency #1
Publisher: Tor
Copyright: March 2017
ISBN: 0-7653-8889-8
Format: Kindle
Pages: 333

Cardenia Wu-Patrick was never supposed to become emperox. She had a quiet life with her mother, a professor of ancient languages who had a brief fling with the emperox but otherwise stayed well clear of the court. Her older half-brother was the imperial heir and seemed to enjoy the position and the politics. But then Rennered got himself killed while racing and Cardenia ended up heir whether she wanted it or not, with her father on his deathbed and unwanted pressure on her to take over Rennered's role in a planned marriage of state with the powerful Nohamapetan guild family.

Cardenia has far larger problems than those, but she won't find out about them until becoming emperox.

The Interdependency is an interstellar human empire balanced on top of a complex combination of hereditary empire, feudal guild system, state religion complete with founding prophet, and the Flow. The Flow is this universe's equivalent of the old SF trope of a wormhole network: a strange extra-dimensional space with well-defined entry and exit points and a disregard for the speed of light. The Interdependency relies on it even more than one might expect. As part of the same complex and extremely long-term plan of engineered political stability that created the guild, empire, and church balance of power, the Interdependency created an economic web in which each system is critically dependent on imports from other systems. This plus the natural choke points of the Flow greatly reduces the chances of war.

It also means that Cardenia has inherited an empire that is more fragile than it may appear. Secret research happening at the most far-flung system in the Interdependency is about to tell her just how fragile.

John Clute and Malcolm Edwards provided one of the most famous backhanded compliments in SF criticism in The Encyclopedia of Science Fiction when they described Isaac Asimov as the "default voice" of science fiction: a consistent but undistinguished style that became the baseline that other writers built on or reacted against. The field is now far too large for there to be one default voice in that same way, but John Scalzi's writing reminds me of that comment. He is very good at writing a specific sort of book: a light science fiction story that draws as much on Star Trek as it does on Heinlein, comfortably sits on the framework of standard SF tropes built by other people, adds a bit of humor and a lot of banter, and otherwise moves reliably and competently through a plot. It's not hard to recognize Scalzi's writing, so in that sense he has less of a default voice than Asimov had, but if I had to pick out an average science fiction novel his writing would come immediately to mind. At a time when the field is large enough to splinter into numerous sub-genres that challenge readers in different ways and push into new ideas, Scalzi continues writing straight down the middle of the genre, providing the same sort of comfortable familiarity as the latest summer blockbuster.

This is not high praise, and I am sometimes mystified at the amount of attention Scalzi gets (both positive and negative). I think his largest flaw (and certainly the largest flaw in this book) is that he has very little dynamic range, particularly in his characters. His books have a tendency to collapse into barely-differentiated versions of the same person bantering with each other, all of them sounding very much like Scalzi's own voice on his blog. The Collapsing Empire has emperox Scalzi grappling with news from scientist Scalzi carried by dutiful Scalzi with the help of profane impetuous Scalzi, all maneuvering against devious Scalzi. The characters are easy to keep track of by the roles they play in the plot, and the plot itself is agreeably twisty, but if you're looking for a book to hook into your soul and run you through the gamut of human emotions, this is not it.

That is not necessarily a bad thing. I like that voice; I read Scalzi's blog regularly. He's reliable, and I wonder if that's the secret to his success. I picked up this book because I wanted to read a decent science fiction novel and not take a big risk. It delivered exactly what I asked for. I enjoyed the plot, laughed at some of the characters, felt for Cardenia, enjoyed the way some villainous threats fell flat because of characters who had a firm grasp of what was actually important and acted on it, and am intrigued enough by what will happen next that I'm going to read the sequel. Scalzi aimed to entertain, succeeded, and got another happy customer. (Although I must note that I would have been happier if my favorite character in the book, by far, did not make a premature exit.)

I am mystified at how The Collapsing Empire won a Locus Award for best science fiction novel, though. This is just not an award sort of book, at least in my opinion. It's book four in an urban fantasy series, or the sixth book of Louis L'Amour's Sackett westerns. If you like this sort of thing, you'll like this version of it, and much of the appeal is that it's not risky and requires little investment of effort. I think an award winner should be the sort of book that lingers, that you find yourself thinking about at odd intervals, that expands your view of what's possible to do or feel or understand.

But that complaint is more about awards voters than about Scalzi, who competently executed on exactly what was promised on the tin. I liked the setup and I loved the structure of Cardenia's inheritance of empire, so I do kind of wish I could read the book that, say, Ann Leckie would have written with those elements, but I was entertained in exactly the way that I wanted to be entertained. There's real skill and magic in that.

Followed by The Consuming Fire. This book ends on a cliffhanger, as apparently does the next one, so if that sort of thing bothers you, you may want to wait until they're all available.

Rating: 7 out of 10

18 September, 2018 03:39AM

September 17, 2018

Carl Chenet

You Think the Visual Studio Code binary you use is a Free Software? Think again.

Did you download your binary of Visual Studio Code directly from the official website? If so, you're not using Free Software, and only Microsoft knows what was added to this binary. And you should assume the worst.

It says « Open Source » and offers to download non-open-source binary packages. Very misleading.

The Microsoft Trick

I'm not a lawyer, and I could be wrong or not accurate enough in my analysis (sorry!), but I'll nonetheless try to give my understanding of the situation, because the current state of Visual Studio Code's licensing tries to fool most users.

Microsoft uses a simple but clever trick here, allowed by the license of the source code of Visual Studio Code: the MIT license, a permissive Free Software license.

Indeed, the MIT license is really straightforward: do whatever you want with this software, keep the original copyright, and I'm not responsible for what could happen with this software. OK. Except that, in the case of Visual Studio Code, it only covers the source code, not the binary.

Unlike most GPL-based licenses, under which both the source code and the binary built from that source code are covered by the terms of the license, the MIT license allows Microsoft to make the source code of the software available but to do whatever they want with the binary of this software. And let's be crystal clear: 99.99% of VSC users will never ever use the source code directly.

What a non-free license by Microsoft is

And of course Microsoft purposely does not use the MIT license for the binary of Visual Studio Code. Instead, they use a fully-armed, freedom-restricting license: the Microsoft Software License.

Let's have a look at some pieces of it. You can find the full license here: https://code.visualstudio.com/license

This license applies to the Visual Studio Code product. The source code is available under the MIT license agreement.

First sentence of the license. The difference between the license of the source code and the « product », meaning the binary you’re going to use, is clearly stated.

Data Collection. The software may collect information about you and your use of the software, and send that to Microsoft.

Yeah right, no kidding. Big Surprise from Microsoft.

UPDATES. The software may periodically check for updates, and download and install them for you. You may obtain updates only from Microsoft or authorized sources. Microsoft may need to update your system to provide you with updates. You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices.

I’ll break your installation without further notice and I don’t care what you were doing with it before, because, you know.

SCOPE OF LICENSE (…) you may not:

  • work around any technical limitations in the software;

Also known as « hacking » for… years now.

  • reverse engineer, decompile or disassemble the software, or otherwise attempt to derive the source code for the software, except and to the extent required by third party licensing terms governing use of certain open source components that may be included in the software;

Because, there is no way anybody should try to know what we are doing with the binary running on your computer.

  • share, publish, rent or lease the software, or provide the software as a stand-alone offering for others to use.

I may be wrong (again, I'm not a lawyer), but it seems to me they forbid you from redistributing this binary, except under the conditions mentioned in the INSTALLATION AND USE RIGHTS section (mostly for the needs of your company and/or for giving demos of your products using VSC).

The following sections EXPORT RESTRICTIONS and CONSUMER RIGHTS; REGIONAL VARIATIONS include more and more restrictions about using and sharing the binary.

DISCLAIMER OF WARRANTY. The software is licensed “as-is.”

At last a term which could be identified as a term of a Free Software license. But in this case it’s of course to limit any obligation Microsoft could have towards you.

So the Microsoft Software License is definitely not a Free Software license, in case the clever trick of licensing the source code and the binary separately had not already convinced you.

What You Could Do

Some options exist for using VSC under good conditions. After all, the source code of VSC comes as Free Software, so why not build it yourself? It also seems some initiatives have appeared, like this repository. That could be a good start.

As for GNU/Linux distributions, packaging VSC (see here for the discussion in Debian) would be a great way to keep people from being taken in by the Microsoft trick and using a « product » that breaks almost every term of what makes a piece of software Free.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.io, a Job board dedicated to Free and Open Source Jobs in the US.

17 September, 2018 10:00PM by Carl Chenet

hackergotchi for Jonathan Dowland

Jonathan Dowland

which spare laptop?

I'm in a perpetual state of downsizing and ridding my life (and my family's life) of things we don't need: sometimes old computers. My main (nearly my sole) machine is my work-provided Thinkpad T470s: a fantastic laptop that works so well I haven't had anything to write about it. However, I decided that it was worth keeping just one spare, for emergencies or other odd situations. I have two candidate machines in my possession.

In the blue corner

left: X61S; right: R600

Toshiba Portégé R600. I've actually owned this now for 7 years, buying it originally to replace my beloved x40 which I loaned to my partner. At the time my main work machine was still a desktop. I received a new work laptop soon after buying this so it ended up gathering dust in a cupboard.

It's an extremely light laptop, even by today's standards. It compares favourably with the Apple Macbook Air 11" in that respect. A comfortable keyboard, but no trackpoint and a bog-standard trackpad. 1280x800 16:10 display, albeit TN panel technology with very limited viewing angles. Analog VGA video out on the laptop, but digital DVI-D out is possible via a separate dock, which was cheap and easy to acquire and very stowable. An integrated optical media drive which could be useful. Max 3G RAM (1G soldered, 2G in DIMM slot).

The CPU is apparently a generation newer but lower voltage and thus slower than its rival, which is…

In the red corner

x61s

Thinkpad X61s. The proportions match the Thinkpad X40, so it has a high nostalgia factor. Great keyboard, I love trackpoints, robust build. It has the edge on CPU over the Toshiba. A theoretical maximum of 8G (2x4) RAM, but practically nearer 4G (2x2), as the 4G sticks are too expensive. This is probably the "heart" choice.

The main drawback of the X61s is the display options: a 1024x768 TN panel, and no digital video out: VGA only on the laptop, and VGA only on the optional dock. It's possible to retro-fit a better panel, but it's not easy and the parts are now very hard to find. It's also a surprisingly heavy machine: heavier than I remember the X40 being, but it's been long enough ago that my expectations have changed.

The winner

Surprising myself perhaps more than anyone else, I've ended up opting for the Toshiba. The weight was the clincher. The CPU performance difference was too close to matter, and 3G RAM is sufficient for my spare laptop needs. Once I'd installed a spare SSD as the main storage device, day-to-day performance is very good. The resolution difference didn't turn out to be that important: it's still low enough that side-by-side text editor and browser feels crowded, so I end up using the same window management techniques as I would on the X61s.

What do I use it for? I've taken it on a couple of trips or holidays which I wouldn't want to risk my work machine for. I wrote nearly all of liquorice on it in downtime on a holiday to Turkey whilst my daughter was having her afternoon nap. I'm touching up this blog post on it now!

I suppose I should think about passing on the X61s to something/someone else.

17 September, 2018 01:48PM

hackergotchi for Steve Kemp

Steve Kemp

PAM HaveIBeenPwned module

So the PAM module which I pondered about in my previous post now exists:

I did mention "sponsorship" in my post, which led to a couple of emails, and the end result was that a couple of folk donated to charity in my/its name. Good enough.

Perhaps in the future I'll explore patreon/similar, but I don't feel very in-demand so I'll avoid it for the moment.

Anyway I guess it should be Debian-packaged for neatness, but I'll resist for the moment.

17 September, 2018 09:01AM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Linus apologising

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

17 September, 2018 06:45AM

September 16, 2018

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Lookalikes

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

16 September, 2018 06:18PM by Benjamin Mako Hill

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

GIMP 2.10

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:

The new GIMP logo

Theme

The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows tab.

16 September, 2018 04:00AM by Louis-Philippe Véronneau

hackergotchi for Clint Adams

Clint Adams

Two days afterward

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16
Tags: mintings

16 September, 2018 12:03AM

September 15, 2018

hackergotchi for Jonathan Dowland

Jonathan Dowland

Backing the wrong horse?

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took for managing packages of Ruby software: Ruby Gems. They interact really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.

Recently I dusted off some 12-year-old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder, had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or with the really awkward support for Unicode in Python 2 (which reminds me of Perl for all the wrong reasons).

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

15 September, 2018 12:13PM

hackergotchi for Steve Kemp

Steve Kemp

Recommendations for software?

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwnd?
    • If not would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly the excellent Have I Been Pwned site provides an API which allows you to test if a password has been previously included in a leak. This is great, and I've integrated their API in a couple of my own applications, but I was thinking on the bus home tonight it might be worth tying into PAM.

Sure, in the interests of security people should use key-based authentication for SSH, but... most people don't. Even so, if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.
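
For the curious, the lookup itself is trivial. Here is a rough bash sketch of the check such a module might perform, assuming the site's k-anonymity "range" endpoint (so only the first five characters of the SHA-1 hash ever leave the machine); PASSWORD is just a stand-in variable for the password being tested:

# hash the candidate password and split the hex digest
hash=$(printf '%s' "$PASSWORD" | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}
# fetch every known suffix sharing our 5-character prefix and look for ours
if curl -s "https://api.pwnedpasswords.com/range/$prefix" | grep -qi "^$suffix:"; then
    echo "password has appeared in a known breach"
fi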

15 September, 2018 09:01AM

September 14, 2018

hackergotchi for Wouter Verhelst

Wouter Verhelst

Autobuilding Debian packages on salsa with Gitlab CI

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and -test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest
.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts adduser fakeroot sudo
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/
.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null
build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid
test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the last stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
  - apt-get update
  - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
  - autoreconf -f -i
  - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  - adduser --disabled-password --gecos "" builduser
  - chown -R builduser:builduser .
  - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
    - built
  script:
  - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
  - mkdir built
  - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.
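
As an aside, when a pipeline fails it can be handy to replay roughly the same steps by hand. The following sketch (my own, not part of the original setup) runs the same commands from the build template in a throwaway debian:sid container, started from the top of the packaging checkout and working on a copy of the tree so nothing on the host is modified:

docker run --rm -it -v "$PWD":/src:ro debian:sid bash -c '
  apt-get update
  apt-get -y install devscripts adduser fakeroot sudo
  cp -a /src /build && cd /build    # work on a copy, /src stays read-only
  mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
  adduser --disabled-password --gecos "" builduser
  chown -R builduser:builduser /build
  sudo -u builduser dpkg-buildpackage -b -rfakeroot
  bash                              # poke around; the built packages end up in /
'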

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab web interface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key (see the snippet below this list); if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure which applies).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.
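
For example, a minimal sketch of the artifacts section with an explicit retention period (the one-week value below is just an assumption; pick whatever suits your project) could look like this:

  artifacts:
    paths:
    - built
    expire_in: 1 week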

.test: &test
  before_script:
  - apt-get update
  - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
  - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing
build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
  - build:testing
  image: debian:testing
test:unstable:
  <<: *test
  dependencies:
  - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at how this runs for the ola package.

UPDATE (2018-09-16): dropped the autoreconf call, isn't needed (it was there because it didn't work from the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

14 September, 2018 02:45PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

New website for vmdb2

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

14 September, 2018 01:00PM

September 13, 2018

hackergotchi for Holger Levsen

Holger Levsen

20180913-reproducible-builds-paris-meeting

Reproducible Builds 2018 Paris meeting

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris. We will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join! And this is the space we'll bring to life.

And whilst the exact content of the meeting will be shaped by the participants when we do it, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

13 September, 2018 09:54PM

hackergotchi for Daniel Pocock

Daniel Pocock

What is the difference between moderation and censorship?

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors (sorry, moderators) responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors, and people are joking that they will only work again after the censors (sorry, PR team) change all the previous emails to comply with the censorship (sorry, communications) policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

13 September, 2018 09:09PM by Daniel.Pocock

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, August 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

13 September, 2018 11:54AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.17

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.

This release brings another robustification, thanks to Radford Neal, who noticed a segfault in 32-bit mode on Sparc running Solaris. Yay for esoteric setups. Thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. But two files remain with UBSAN issues; help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 September, 2018 12:06AM

September 12, 2018

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Disappointment on the new commute

Imagine my disappointment when I discovered that signs on Stanford’s campus pointing to their “Enchanted Broccoli Forest” and “Narnia”—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

12 September, 2018 04:53PM by Benjamin Mako Hill

Arturo Borrero González

Distributing static routes with DHCP

Networking

This week I had to deal with a setup in which I needed to distribute additional static network routes using DHCP.

The setup is easy but there are some caveats to take into account. Also, DHCP clients might not behave as one would expect.

The starting situation was a working DHCP client/server deployment. Some standard virtual machines would request their network setup over the network. Nothing new. The DHCP server is dnsmasq, and the daemon is running under Openstack control, but this has nothing to do with the DHCP problem itself.

By default, it seems dnsmasq sends clients the Routers (code 3) option, which usually contains the gateway for clients in the subnet to use. My situation required distributing one additional static route for another subnet. My idea was for DHCP clients to end up with this simple routing table:

user@dhcpclient:~$ ip r
default via 10.0.0.1 dev eth0 
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.100 
172.16.0.0/21 via 10.0.0.253 dev eth0 <--- extra static route

To distribute this extra static route, you only need to edit the dnsmasq config file and add a line like this:

dhcp-option=option:classless-static-route,172.16.0.0/21,10.0.0.253

For my initial tests of this config I simply requested a lease refresh from the DHCP client. This got my new static route installed, but in the case of a reboot, the DHCP client would not get the default route. The different behaviour is documented in dhclient-script(8).

To try something similar to a reboot situation, I had to use this command:

user@dhcpclient:~$ sudo ifup --force eth0
Internet Systems Consortium DHCP Client 4.3.1
Copyright 2004-2014 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 10.0.0.1
RTNETLINK answers: File exists
bound to 10.0.0.100 -- renewal in 20284 seconds.

Anyway, this was really surprising at first, and led me to debug DHCP packets using dhcpdump:

  TIME: 2018-09-11 18:06:03.496
    IP: 10.0.0.1 (xx:xx:xx:xx:xx:xx) > 10.0.0.100 (xx:xx:xx:xx:xx:xx)
    OP: 2 (BOOTPREPLY)
 HTYPE: 1 (Ethernet)
  HLEN: 6
  HOPS: 0
   XID: xxxxxxxx
  SECS: 8
 FLAGS: 0
CIADDR: 0.0.0.0
YIADDR: 10.0.0.100
SIADDR: xx.xx.xx.x
GIADDR: 0.0.0.0
CHADDR: xx:xx:xx:xx:xx:xx:00:00:00:00:00:00:00:00:00:00
OPTION:  53 (  1) DHCP message type         2 (DHCPOFFER)
OPTION:  54 (  4) Server identifier         10.0.0.1
OPTION:  51 (  4) IP address leasetime      43200 (12h)
OPTION:  58 (  4) T1                        21600 (6h)
OPTION:  59 (  4) T2                        37800 (10h30m)
OPTION:   1 (  4) Subnet mask               255.255.255.0
OPTION:  28 (  4) Broadcast address         10.0.0.255
OPTION:  15 ( 13) Domainname                xxxxxxxx
OPTION:  12 ( 21) Host name                 xxxxxxxx
OPTION:   3 (  4) Routers                   10.0.0.1
OPTION: 121 (  8) Classless Static Route    xxxxxxxxxxxxxx .....D..                 
[...]
---------------------------------------------------------------------------

(you can use this handy command on both the server and the client side)
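
For reference, a minimal invocation would look roughly like this (eth0 is just an assumption, substitute your actual interface; sniffing the traffic needs root):

user@dhcpclient:~$ sudo dhcpdump -i eth0

Then trigger a lease renewal on the client and watch the options in the DHCPOFFER/DHCPACK replies, as shown above.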

So, the DHCP server was sending both the Routers (code 3) and the Classless Static Route (code 121) options to the clients. So why would the client fail to install both routes?

I obtained some help from folks on IRC and they pointed me towards RFC3442:

DHCP Client Behavior
[...]
   If the DHCP server returns both a Classless Static Routes option and
   a Router option, the DHCP client MUST ignore the Router option.

So, clients are supposed to ignore the Routers (code 3) option if they get an additional static route. This is very counter-intuitive, but it can easily be worked around by just distributing the default gateway route as another classless static route:

dhcp-option=option:classless-static-route,0.0.0.0/0,10.0.0.1,172.16.0.0/21,10.0.0.253
#                                         ^^ default route   ^^ extra static route 

Obviously this was the first time in my career dealing with this setup and situation. My conclusion is that even old protocols like DHCP can sometimes behave in a counter-intuitive way. Reading RFCs is not always fun, but it can help you understand what’s going on.

You can read the original issue in Wikimedia Foundation’s Phabricator ticket T202636, including all the back-and-forth work I did. Yes, it is open to the public ;-)

12 September, 2018 08:00AM

Iustin Pop

o-tour 2018 (Halbmarathon)

My first race redo at the same distance/ascent meters. Let’s see how it went… 45.2km, 1’773m altitude gain (officially: 45km, 1’800m). This was the Halbmarathon distance, compared to the full Marathon one, which is 86km/3’000m.

Pre-race

I registered for this race right after my previous one, and despite it having much more altitude meters, I was looking forward to it.

That is, until the week of the race. The entire week was just off. Work life, personal life, everything seemed out of sync. Including a half-sleepless night on Wednesday, which ruined my sleep schedule for the rest of the week and also my plans for the light maintenance rides before the event. And which also made me feel half-sick due to lack of sleep.

I prepared for my ride on Saturday (bike check, tyre pressure check, load bike on car), and I went to bed—late, again difficult to fall asleep—not being sure I’ll actually go to the race. I had a difficult night sleep, but actually I managed to wake up on the alarm. OK, chances looking somewhat better for getting to the race. Total sleep: 5 hours. Ouch!

So I get in the car—about 15 minutes later than planned—and start, only to find a road closure on the most direct route to the highway, and police people directing the traffic—at around 07:10—on the “new” route, resulting in yet another detour, and me getting stressed enough about which way to go and not paying attention to my exact speed on a downhill that I got flashed by a speed camera. Sigh…

The rest of the drive was uneventful, I reach Alpnach, I park, I get to the start/finish location, get my number, and finally get to the start line with two minutes (!!) to spare. The most “just-in-time” I ever was at a race, as I’m usually way early. By this time I was even in a later starting block since mine was already setup and would have been difficult to reach.

Oh, and because I was so late, and because this is smaller race (number of participants, setup, etc.), I didn’t find a place to fill my water bottle. And this, for the one time I didn’t fill it in advance. Fun!

The race

So given all this, I set low expectations for the race, and decided to consider it a simple Sunday ride. Will take it easy on the initial 12.5km, 1’150m climb, and then will see how it goes. There was a food station two thirds in the climb, so I said I’ll hopefully not get too dehydrated until then.

The climb starts relaxed—I was among the last people starting—and 15 minutes in, my friend the lower back says “ah, you’re climbing again, remember I’m here too”, which was way too early. So, I said to myself, nothing to lose, let’s just switch to standing every time my back gets tired, and stand until my legs get tired, then switch again.

The climb here was on pavement, so standing was pretty easy. And, to my surprise, this worked quite well: while standing I also went much faster (by much, I mean probably ~2-3km/h) than sitting so I was advancing in the long stretch of people going up the mountain, and my back was very relieved every time I switched.

So, up and down and up and down in the saddle, and up and up and up on the mountain, until I get to the food station. Water! I quickly drink an entire bottle (750ml!!), refill, and go on.

After the food station, the route changed to gravel, and this made pedalling while standing more difficult, due to less grip and slipping if you’re not careful. I tried the sit/stand/sit routine, but it was getting more difficult, so I went, slower, until a point I had to stop. I was by now in the sun, hot, and tired. And annoyed at the low volume out of the water bottle, so I opened it, and drank just like from a glass, and emptied it quickly - yet again! I felt much better, and restarted pedalling, eager to get to the top.

The last part of the climb is quite steep and more or less on a trail, so here I was pushing the bike, but since I didn’t have any goals did not feel guilty about it. Up and up, and finally I reach the top (altitude: 1’633m, elevation gained: 1’148m out of ~1’800m), and I can breathe easier knowing that the most difficult part is over.

From here, it was finally a good race. The o-tour route is much more beautiful than I remembered, but also more technically difficult, to the point of being quite annoying: it runs for long stretches on very uneven artificial paths, as if someone had built a paved road, but the goal was to have the most uneven surface, all rocks being at an angle, instead of aiming for an even surface. For hikers this is excellent, especially in wet conditions, but for trying to move a bike forward, or even more so forward uphill, it is annoying. There were stretches of ~5% grade where I was pushing the bike, due to how annoying biking on that surface was.

The route also has nice single track sections, some easily navigable, some not, at least for me, and some that I had to carry the bike. Or even carry the bike on my shoulder while I was climbing over roots. A very nice thing, and sadly uncommon in this series of races.

One other fun aspect of the race was the mud. Especially in the forests, there was enough water left on tracks that one got splashed quite often, and outside (where the soil doesn’t have the support of the roots), less water but quite deep mud. Deep enough that at one point, I misjudged how deep the roughly 3-meter-long mud-like section was, and I had enough speed so that my front wheel got stuck in mud, and slowly (and I imagine gracefully as well :P, of course) I went over the handlebars in the softest mud I ever landed in. Landed, as in halfway up my elbows (!), hands full of mud, gloves muddy as hell, legs down to the ankle in mud so shoes also muddy, and me finding the situation the funniest moment of the race. The guy behind me asked if everything was alright, and I almost couldn’t answer due to laughing out loud.

Back to serious stuff now. The rest of the “meters of climbing left”, about 600+ meters, were supposed to be distributed in about 4 sections, all about the same profile except the first one which was supposed to be a bit longer and flatter. At least, that’s what the official map was showing, in a somewhat stylised way. And that’s what I based my effort dosage on.

Of course, real life is not stylised, and there were 2 small climbs (as expected), and then a long and slow climb (definitely unexpected). I managed to stay on the bike, but the unexpected long climb—two kilometres—exhausted my reserves, despite being a relatively small grade (~5%, gained ~100m). I really was not planning for it, and I paid for that. Then a bit of downhill, then another short climb, another downhill—on road, 60km/h!—and then another medium-sized climb: 1km long, gaining 60m. Then a slow and long descent, a 700m/50m climb, a descent, and another climb, short but more difficult: 900m/80m (~9%). By this time, I was spent, and was really looking forward to the final descent, which I remembered was half pavement, half very nice single-track. And indeed it was superb, after all that climbing. Yay!

And then, reaching basically the end of the race (a few kilometres left), I remembered something else: this race has a climb at the end! This is where the missing climbing meters were hiding!

So, after eight kilometres of fun, 1.5km of easy climbing to gain 80m of ascent. Really trivial, a regular commute almost, but for me at this stage, it was painful and the most annoying thing ever…

And then, reaching the final two kilometres of light descent on paved road, and finishing the race. Yay!

Overall, given the way the week went, this race was much easier than I hoped, and quite enjoyable. Why? No idea. I’ll just take the bonus points and not complain ☺

Real champions

After about two minutes of me finishing, I hear the speaker saying that the second placed woman in the long distance was nearing, and that it was Esther Süss! I’ve never seen her in person as far as I know, nor any of the other leaders in these races, since usually the end times are much apart. In this case, I apparently finished between the first and second places in the women’s race (there was 3m05s difference between them). This also explained what all those photographers with telephotos at the finish line were waiting for, and why they didn’t take my picture :)))))) In any case, I was very happy to see her in person, since I’m very impressed that at 44 years old, she’s still competing and most of the time winning against other women, 10 or even 20 years younger than her. Gives a bit of hope for older people like me. Of course minus being on the thinner side (unlike me), and actually liking long climbs (unlike me), and winning (definitely unlike me). Not even bringing up the world championships gold medals, OK?

Race analysis

Hydration, hydration…

As I mentioned above, I drank a lot at the beginning of the race. I continued to drink, and by 2 hours I was 3 full bottles in, at 2:40 I finished the fourth bottle.

Four bottles is 3 litres of liquid, which is way more than my usual consumption since I stopped carrying my hydration pack. In the Eiger bike challenge, done in much hotter temperatures and for longer, I think I drank about the same or only slightly more (not sure exactly). Then temperature: 19° average, 33° max, 6½ hours, this time: 16.2° average, 20° max, ~4 hours. And this time, with 4L in 4 hours, I didn’t need to run to the bathroom as I finished (at all).

The only conclusion I can make is that I sweat much more than I think, and that I must more actively drink water. I don’t want to go back to hydration pack in a race (definitely yes for more relaxed rides), so I need to use all the food stops to drink and refill.

General fitness vs. leg muscles

I know my core is weak, but it’s getting hilarious that 15 minutes into the climbing, I start getting signals. This does not happen on flat terrain or indoors for at least 2-2½ hours, so the conclusion is that I need to get fitter (core) and also do more real outdoor climbing training—just long (slower) climbs.

The sit-stand-sit routine was very useful, but it did result in even my hands getting tired from having to move and stabilise the bike. So again, need to get fitter overall and do more cross-training.

That is, as if I didn’t know it already ☹

Numbers

This is now beyond subjective, let’s see how the numbers look like:

  • 2016:
    • time: overall 3h49m34.4s, start-Langis 2h44m31s, Langis-finish: 1h05m02s.
    • age category: overall 70/77, start-Langis ranking: 70, Langis-finish: 72.
    • overall gender ranking: overall 251/282, start-Langis: 250, Langis-finish: 255.
  • 2018:
    • time: 3h53m43.4s, start-Langis: 2h50m11s, Langis-finish: 1h03m31s.
    • age category 70/84, start-Langis: 71, Langis-finish: 70.
    • overall gender ranking: overall 191/220, start-Langis: 191, Langis-finish: 189.

The first conclusion is that I’ve done infinitesimally better in the overall rankings: 252/282=0.893, 191/220=0.868, so better but only trivially so, especially given the large decline in participants on the short distance (the long one had the same). I cannot compare age category, because ☺

The second interesting titbit is that in 2016, I was relatively faster on the climb plus first part of the high-altitude route, and relatively slower on the second half plus descent, both in the age category and the overall category. In 2018, this reversed, and I gained places on the descent. Time comparison, ~6 minutes slower in the first half, 1m30s faster on the second one.

But I find my numbers so close that I’m surprised I neither significantly improved nor slowed down in two years. Yes, I’m not consistently training, but still… I kind of expect some larger difference, one way or another. Strava also says that I beat my 2016 numbers on 7 segments, but only got second place to that on 14 others, so again a wash.

So, weight gain aside, it seems nothing much has changed. I need to improve my consistency in training 10× probably to see a real difference. On the other hand, maybe this result is quite good, given my much less consistent training than in 2016 — ¯\_(ツ)_/¯.

Equipment-wise, I had a different bike now (full suspension vs. hardtail), and—compared to the previous race two weeks ago, at least—I had the tyre pressure quite well dialled in for this event. So I was able to go fast, and indeed overtake a couple of people on the flat/light descents, and more importantly, was not overtaken by other people on the long descent. My brakes were much better as well, so I was a bit more confident, but the front brake started squeaking again when it got hot, so I need to improve this even more. But again, not even the changed equipment made much of a difference ☺

I’ll finish here with an image of my “heroic efforts”:

Not very proud of this… Not very proud of this…

I’m very surprised that they put a photographer at the top of a climb, except maybe to motivate people to pedal up the next year… I’ll try to remember this ☺

12 September, 2018 06:51AM

September 11, 2018

hackergotchi for Jonathan McDowell

Jonathan McDowell

PSA: the.earth.li ceasing Debian mirror service

This is a public service announcement that the.earth.li (the machine that hosts this blog) will cease service as a Debian mirror on 1st February 2019 at the latest.

It has already been removed from the official list of Debian mirrors. Please update your sources.list to point to an alternative sooner rather than later.
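
If you are unsure what to switch to, a minimal sketch of a sources.list line using the deb.debian.org CDN (the stretch suite and main component are just assumptions; adjust to whatever you currently use) would be:

deb http://deb.debian.org/debian stretch main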

The removal has been driven by a number of factors:

  • This mirror was originally setup when I was running Black Cat Networks, and a local mirror was generally useful to us. It’s 11+ years since Black Cat was sold, and 7+ since it moved away from that network.
  • the.earth.li currently lives with Bytemark, who already have an official secondary mirror. It does not add any useful resilience to the mirror network.
  • For a long time I’ve been unable to mirror all release architectures due to disk space limitations; I think such mirrors are of limited usefulness unless located in locations with dubious connectivity to alternative full mirrors.
  • Bytemark have been acquired by IOMart and I’m uncertain as to whether my machine will remain there long term - the acquisition announcement focuses on their cloud service rather than mentioning physical server provision. Disk space requirements are one of my major costs and the Debian mirror makes up ⅔ of my current disk usage. Dropping it will make moving host easier for me, should it prove necessary.

I can’t find an exact record of when I started running a mirror, but it was certainly before April 2005. 13 years doesn’t seem like a bad length of time to have been providing the service. Personally I’ve moved to deb.debian.org, but if the network location of the mirror is the reason you chose it, then mirror.bytemark.co.uk should be a good option.

11 September, 2018 07:22PM

Russell Coker

Thinkpad X1 Carbon Gen 6

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when battery is full – maybe due to my history of buying refurbished laptops I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying: there are two devices for mouse-type input, so I need to configure Xorg to only read from the Trackpoint.
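
One possible way to do that (a sketch I have not verified on this particular machine) is to drop an InputClass section into /etc/X11/xorg.conf.d/ that tells the X server to ignore any touchpad device:

Section "InputClass"
    Identifier "ignore touchpad"
    MatchIsTouchpad "on"
    Option "Ignore" "on"
EndSection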

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resume from suspend requires holding down the power button. I’m not sure if it’s a hardware or a software issue. But suspend on lid close works correctly and also suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down, that said I got a thermal warning during the Debian install process which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

11 September, 2018 10:33AM by etbe

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

AsioHeaders 1.12.1-1

A first update to the AsioHeaders package arrived on CRAN today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release is the first following the initial upload of version 1.11.0-1 in 2015. I had noticed the updated 1.12.1 version a few days ago, and then Joe Cheng surprised me with a squeaky clean PR as he needed it to get RStudio’s websocket package working with OpenSSL 1.1.0.

I actually bumbled up the release a little bit this morning, uploading 1.12.1 first and then 1.12.1-1, as we like having a packaging revision. Old habits die hard. So technically CRAN now has both, but we may clean that up and remove the 1.12.1 release from the archive, as 1.12.1-1 is identical but for two bytes in DESCRIPTION.

The NEWS entry follow, it really is just the header update done by Joe plus some Travis maintenance.

Changes in version 1.12.1-1 (2018-09-10)

  • Upgraded to Asio 1.12.1 (Joe Cheng in #2)

  • Updated Travis CI support via newer run.sh

Via CRANberries, there is a diffstat report relative to the previous release, as well as this time also one between the version-corrected upload and the main one.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 September, 2018 01:21AM

September 10, 2018

hackergotchi for Matthew Garrett

Matthew Garrett

The Commons Clause doesn't help the commons

The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company


10 September, 2018 11:26PM

Reproducible builds folks

Reproducible Builds: Weekly report #176

Here’s what happened in the Reproducible Builds effort between Sunday September 2 and Saturday September 8 2018:

Patches filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

10 September, 2018 05:12PM

Sven Hoexter

Firefox 60, Yubikey, U2F vs my Google Account

tl;dr: Yes, you can use Firefox 60 in Debian/stretch with your U2F device to authenticate your Google account, but you have to use Chrome for the registration.

Thanks to Mike, Moritz and probably others there's now Firefox 60 ESR in Debian/stretch. So I took it as a chance to finally activate my for-work YubiKey Nano as a U2F/2FA device for my at-work Google account. Turns out it's not so simple. Basically Google told me that this browser is not supported and that I should install the trojan horse (Chrome) to use this feature. So I gave in, installed Chrome, logged in to my Google account and added the Yubikey as the default 2FA device. Then I quit Chrome, went back to Firefox and logged in again to my Google account. Bäm, it works! The Yubikey blinks, I can touch it and I'm logged in.

Just in case: you probably want to install "u2f-host" to have "libu2f-host0" available, which ships all the udev rules needed to detect common U2F devices correctly.
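
On Debian that is a one-liner (assuming you use sudo):

$ sudo apt install u2f-host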

10 September, 2018 12:13PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Short-term contracting work?

I'm starting a new job in about a month. Until then, it'd be really helpful if I could earn some money via a short-term contracting or consulting job. If your company or employer could benefit from any of the following, please get in touch. I will invoice via a Finnish company, not as a person (within the EU, at least, this makes it easier for the clients). I also reside in Finland, if that matters (meaning, meeting outside of Helsinki gets tricky).

  • software architecture design and review
  • coding in Python, C, shell, or code review
  • documentation: writing, review
  • git training
  • help with automated testing: unit tests, integration tests
  • help with Ansible
  • packaging and distributing software as .deb packages

10 September, 2018 06:48AM

hackergotchi for Daniel Pocock

Daniel Pocock

An FSFE Fellowship Representative's dilemma

The FSFE Fellowship representative role may appear trivial, but it is surprisingly complicated. What's best for FSFE, what is best for the fellows and what is best for free software are not always the same thing.

As outlined in my blog Who are/were the FSFE Fellowship?, fellows have generously donated over EUR 1,000,000 to FSFE and one member of the community recently bequeathed EUR 150,000. Fellows want to know that this money is spent well, even beyond their death.

FSFE promised them an elected representative, which may have given them great reassurance about the checks and balances in the organization. In practice, I feel that FSFE hasn't been sincere about this role and it is therefore my duty to make fellows aware of what representation means in practice right now.

This blog has been held back for some time in the hope that things at FSFE would improve. Alas, that is not the case and with the annual general meeting in Berlin only four weeks away, now is the time for the community to take an interest. As fellowship representative, I would like to invite members of the wider free software community to attend as guests of the fellowship and try to help FSFE regain legitimacy.

Born with a conflict of interest

According to the FSFE e.V. constitution, as it was before elections were abolished, the Fellows elected according to §6 become members of FSFE e.V.

Yet all the other fellows who voted, the people being represented, are not considered members of FSFE e.V. Sometimes it is possible to view all fellows together as a unit, a separate organization, The Fellowship. Sometimes not all fellows want the same thing and a representative has to view them each as individuals.

Any representative of this organization, The Fellowship and the individual fellows, has a strong ethical obligation to do what is best for The Fellowship and each fellow.

Yet as the constitution recognizes the representative as a member of FSFE e.V., some people have also argued that he/she should do what is best for FSFE e.V.

What happens when what is best for The Fellowship is not in alignment with what is best for FSFE e.V.?

It is also possible to imagine situations where doing what is best for FSFE e.V. and doing what is best for free software in general is not the same thing. In such a case the representative and other members may want to resign.

Censorship of the Fellowship representatives by FSFE management

On several occasions management argued that communications to fellows need to be censored (sorry, adapted) to help make money. For example, when discussing an email to be sent to all fellows in February about the risk of abolishing elections, the president warned:

"people might even stop to support us financially"

if they found out about the constitutional changes. He subsequently subjected the email to censorship (sorry, modification) by other people.

This was not a new theme: in a similar discussion in August 2017 about communications from the representatives, another senior member of the executive team had commented:

"It would be beneficial if our PR team could support in this, who have the experience from shaping communication in ways which support retention of our donors."

A few weeks later, on 20 March, FSFE's management distributed a new censorship (sorry, communications) policy, requiring future emails to prioritize FSFE's interests and mandating that all emails go through the censors (sorry, PR team). As already explained, a representative has an ethical obligation to prioritize the interests of the people represented, The Fellowship, not FSFE's interests. The censorship (sorry, communications) policy appears deliberately incompatible with that obligation.

As the elected representative of a 1500-strong fellowship, it seems obscene that communications to the people represented are subject to censorship by the very staff the representative scrutinizes. The situation is even more ludicrous when the organization concerned claims to be an advocate of freedom.

This gets to the core of our differences: FSFE appeared to be hoping a representative would be a stooge, puppet or cheerleader whose existence might "support retention of ... donors". Personally, I never imagined myself like that. Given the generosity of fellows and the large amounts of time and money contributed to FSFE, I feel obliged to act as a genuine representative, ensuring money already donated is spent effectively on the desired objectives and ensuring that communications are accurate. FSFE management appear to hope their clever policy document will mute those ambitions.

Days later, on 25 March, FSFE management announced the extraordinary general meeting to be held in the staff office in Berlin, to confirm the constitutional change and as a bonus, try to abruptly terminate the last representative, myself. Were these sudden changes happening by coincidence, or rather, a nasty reprisal for February's email about constitutional changes? I had simply been trying to fulfill my ethical obligations to fellows and suddenly I had become persona non grata.

When I first saw this termination proposal in March, it really made me feel quite horrible. They were basically holding a gun to my head and planning a vote on whether to pull the trigger. For all purposes, it looked like gangster behavior happening right under my nose in a prominent free software organization.

Both the absurdity and hostility of these tactics was further underlined by taking this vote on my role behind my back on 26 May, while I was on a 10 day trip to the Balkans pursuing real free software activities in Albania and Kosovo, starting with OSCAL.

In the end, while the motion to abolish elections was passed and fellows may never get to vote again, only four of the official members of the association backed the abusive motion to knife me and that motion failed. Nonetheless, it left me feeling I would be reluctant to trust FSFE again. An organization that relies so heavily on the contributions of volunteers shouldn't even contemplate treating them, or their representatives, with such contempt. The motion should never have been on the agenda in the first place.

Bullet or boomerang?

In May, I thought I missed the bullet but it appears to be making another pass.

Some senior members of FSFE e.V. remain frustrated that a representative's ethical obligations can't be hacked with policy documents and other juvenile antics. They complain that telling fellows the truth is an act of treason and speaking up for fellows in a discussion is a form of obstruction. Both of these crimes are apparently grounds for reprisals, threats, character assassination and potentially expulsion.

In the most outrageous act of scapegoating, the president has even tried to suggest that I am responsible for the massive exodus from the fellowship examined in my previous blog. The chart clearly shows the exodus coincides with the attempt to force-migrate fellows to the supporter program, long after the date when I took up this role.

Senior members have sent me threats to throw me out of office, most recently the president himself, simply for observing the basic ethical responsibilities of a representative.

Leave your conscience at the door

With the annual general meeting in Berlin only four weeks away, the president is apparently trying to assemble a list of people to throw the last remaining representative out of the association completely. It feels like something out of a gangster movie. After all, altering and suppressing the results of elections and controlling the behavior of the candidates are the modus operandi of dictators and gangsters everywhere.

Will other members of the association exercise their own conscience and respect the commitment of representation that was made to the community? Or will they leave their conscience at the door and be the president's puppets, voting in block like in many previous general meetings?

The free software ecosystem depends on the goodwill of volunteers and donors, a community that can trust our leaders and each other. If every free software organization behaved like this, free software wouldn't exist.

A president who conspires to surround himself with people who agree with him, appointing all his staff to be voting members of the FSFE e.V. and expelling his critics appears unlikely to get far promoting the organization's mission when he first encounters adults in the real world.

The conflict of interest in this role is not of my own making, it is inherent in FSFE's structure. If they do finally kill off the last representative, I'll wear it like a badge of honor, for putting the community first. After all, isn't that a representative's role?

As the essayist John Gardner wrote

“The citizen can bring our political and governmental institutions back to life, make them responsive and accountable, and keep them honest. No one else can.”

10 September, 2018 06:33AM by Daniel.Pocock

September 09, 2018

Hideki Yamane

Earthquake struck Hokkaido and caused a blackout, but security.d.o ran without trouble

In December 2014, the security.debian.org mirror came to Hokkaido, Japan. Then in September 2018, a huge earthquake (magnitude 6.7) hit Hokkaido. It was a surprise, because the Japanese government had said the probability of such a large earthquake shaking Hokkaido was less than 0.2% within 30 years.


Below pics: after the earthquake (left) / before the earthquake (right)

And it caused a blackout across the whole of Hokkaido, which of course included the Sakura Internet Ishikari DC. The Ishikari DC ran on an emergency power supply for almost 60 hours(!), so the security mirror kept running without any error.


09 September, 2018 05:01PM by Hideki Yamane (noreply@blogger.com)

Iustin Pop

Printing paper: matte vs. glossy revisited

Let’s revisit some choices… whether they were explicit or not.

For the record, a Google search for “matte vs glossy” says “about 180.000.000 results found”, so it’s like emacs versus vi, except that only gets a paltry 10 million hits.

Tech background

Just a condensed summary that makes some large simplifications, skip if you already know this.

Photographic printing paper is normally of three main types: matte, glossy, and canvas. Glossy is the type one usually finds for normal small prints out of a printing shop/booth, matte is, well, like the normal document print paper, and canvas is really stretchable “fabric”. In the matte camp, there is the smooth vs. textured vs. rag-type (alternatively, smooth, light texture, textured), and in the glossy land, there’s luster (semi-gloss) and glossy (with the very glossy ones being unpleasant to the touch, even). Making some simplifications here, of course. In canvas land, I have no idea ☺

The black ink used for printing differs between glossy and matte, since you need a specific type to ensure that you get the deepest blacks possible for that type of paper. Some printers have two black ink “heads”, others—like (most?) Epson printers—have a single one and can switch between the two inks. This switching is costly since it needs to flush all current ink and then load the new ink, thus it wastes ink.

OK, with this in mind, let’s proceed.

My original paper choices

When I originally bought my photo printer (about five years ago), I thought at the beginning that I’d mostly print on matte paper. Good quality matte paper has a very special feel (in itself), whereas (very) glossy paper is what you usually see cheap prints on (the kind you would have gotten 20 years ago from a photo developing shop). Good glossy paper is much more subdued, but still on the same “shiny” basis (compared to matte).

So I bought my printer, started printing—on matte paper—and because of the aforementioned switching cost, for a while all was good in matte-only land. I did buy quite a few sample packs for testing, including glossy.

Of course, at one point, the issue of printing small (e.g. the usual 10×15cm format) appeared, and because most paper you find in this format in non-specialist stores is glossy, I started printing on glossy as well. And then I did some large format prints also using glossy, and… well, glossy does have the advantage of more “impact” (colours jump up at you much more), so I realised it’s not that bad in glossy land. Time to use/test with all that sample paper!

Thus, I did do quite a bit of experimenting to decide which are my “go-to” papers and settled on four, two matte and two glossy. But because there’s always “need one small photo printed”, I never actively used the matte papers beyond my tests… Both matte papers were smooth matte, since the texture effect I found quite unpleasant with some photos, especially portraits.

So many years passed, with one-off printing and the usual replacement of all the other colours. But the matte black cartridge still had ~20% ink left that I wasn’t using, so I ended up still having the original cartridge. Its manufacture date is 2013/08, so it’s more than five years old now. Epson says “for best results, use within 6 months”, so at this time it’s about ten times the recommended age.

Accidental revisiting the matte-vs-glossy

Fast forward to earlier this week: as I was printing a small photo for a friend, it reminded me that the Epson paper I find in shops in Switzerland is much thinner than what I once found in the US, and that for a long time I had wanted to look up what other small formats (10×15cm, A5, 5×7in, etc.) I could find in higher quality. I looked at my preferred brands and actually found fine art paper in small format, but to my surprise, there was also the option of smooth matte paper!

Small-format matte paper, and especially for portraits, sounded very strange; I wondered how this would actually feel (in hand). Some of the best money spent during my paper research was on a sample (printed) book from Hahnemühle in A5 format (this one, which I can’t find on the Hahnemühle web site, hence the link to a shop), which contains almost all their papers with—let’s hope—appropriate subjects. I grab it, search for the specific matte paper I saw available in small format (Photo Rag 308), and… WOW. I couldn’t believe my eyes and fingers. Definitely different from any small photo I’ve (personally) ever seen.

The glossy paper, Fine Art Pearl (285gsm), also looked far superior to the Epson Premium Glossy Photo paper I was using. So, time for a three-way test.

OK, but that still left a problem: while I do have some (A4) Photo Rag paper, I didn’t have matte ink; or rather, I had some, but a very, very old cartridge. Curiosity got the better of me - at worst, some clogging and some power cleaning (more ink waste), but I had to try it.

I chose one recent portrait photo in some subdued colours, printed (A4) using standard Epson Photo Glossy paper, then printed using Fine Art Pearl (again, what a difference!) and then, prepare to print using Photo Rag… switch black ink, run a quick small test pattern print (OK-ish), and print away. To my surprise, it did manage to print, with no problems even on this on-the-dark-side photograph.

And yes, it was as good as the sample was promising, at least for this photograph. I can only imagine how things will look and feel in small format. And I say feel because a large part of the printed photograph appeal is in the paper texture, not only the look.

Conclusion

So, two takeaways.

First, comparing these three papers, I’ve wasted a lot of prints (for friends/family/etc.) on sub-standard paper. Why didn’t I think of small-paper choices before, and only focused on large formats? Doesn’t make sense, but I’m glad I learned this now, at least.

Second, what’s with the “best used within 6 months”? Sure, 6 months is nothing if you’re a professional (as in, doing this for $day job), so maybe Epson didn’t test more than 1 year lifetimes, but still, I’m talking here about printing after 5 years.

The only thing left now is to actually order some packs and see what a small photo book will look like in the matte version. And in any case, I’ve found a better choice even for the glossy option.

What about textured matte?

In all this, where are the matte textured papers? Being very textured and much different from everything I talked about above (Photo Rag is smooth matte), the normal uses for these are art reproductions. The naming of this series (for Hahnemühle) is also in line: Albrecht Dürer, William Turner, German and Museum Etching, etc.

The sample book has these papers as well, with the following subjects:

  • Torchon: a photograph of a fountain; so-so effect, IMHO;
  • Albrecht Dürer: abstract art reproduction
  • William Turner: a family picture (photograph, not paint)!!
  • German Etching: something that looks like a painting
  • Museum Etching: abstract art

I was very surprised that among all those “art reproductions”, the William Turner one, a quite textured paper, had a well-matching family picture that is, IMHO, excellent. I really don’t have a feeling for “would this paper match this photograph” or “what kind of paper would match it”, so I’m often surprised like this. In this case, it wasn’t just passable, it was an excellent match. You can see it on the product page—you need to go to the third picture in the slideshow, and of course that’s the digital picture, not what you get in real life.

Unless I get some epiphany soon, “what can one use textured matte paper for” will remain an unsolved mystery. Or just a research item, assuming I find the time, the same way I find Hahnemühle’s rice paper very cool but I have no idea what to print on it. Ah, amateurs ☺

As usual, comments are welcome.

09 September, 2018 04:11PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (July and August 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • William Blough (bblough)
  • Shengjing Zhu (zhsj)
  • Boyuan Yang (byang)
  • Thomas Koch (thk)
  • Xavier Guimard (yadd)
  • Valentin Vidic (vvidic)
  • Mo Zhou (lumin)
  • Ruben Undheim (rubund)
  • Daniel Baumann (daniel)

The following contributors were added as Debian Maintainers in the last two months:

  • Phil Morrell
  • Raúl Benencia
  • Brian T. Smith
  • Iñaki Martin Malerba
  • Hayashi Kentaro
  • Arnaud Rebillout

Congratulations!

09 September, 2018 03:00PM by Jean-Pierre Giraud

Russell Coker

Fail2ban

I’ve recently set up fail2ban [1] on a bunch of my servers. Its purpose is to ban IP addresses associated with password guessing – or whatever other criteria for badness you configure. It supports Linux, OpenBSD [2] and probably most Unix type OSs too. I run Debian so I’ve been using the Debian packages of fail2ban.

The first thing to note is that it is very easy to install and configure (for the common cases at least). For a long time installing it had been on my todo list, but I didn’t make the time to do it. After installing it I realised that I should have done it years ago; it was so easy.

Generally to configure it you just create a file under /etc/fail2ban/jail.d with the settings you want, any settings that are different from the defaults will override them. For example if you have a system running dovecot on the default ports and sshd on port 999 then you could put the following in /etc/fail2ban/jail.d/local.conf:

[dovecot]
enabled = true

[sshd]
port = 999

By default the Debian package of fail2ban only protects sshd.

When fail2ban is running on Linux the command “iptables -L -n -v|grep f2b” will show the rules that match inbound traffic and the names of the chains they direct traffic to. To see if fail2ban has acted to protect a service you can run a command like “iptables -L f2b-sshd -n” to see the iptables rules.

The fail2ban entries in the INPUT table go before other rules, so it should work with any custom iptables rules you have configured as long as either fail2ban is the last thing to be started or your custom rules don’t flush old entries.
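
For a quick look at what fail2ban is actually doing, fail2ban-client can query the running daemon; a minimal sketch, assuming jail names like the ones configured above (the IP is a placeholder):

# list the jails fail2ban knows about
fail2ban-client status
# show filter statistics and currently banned addresses for one jail
fail2ban-client status sshd
# unban an address if you lock yourself out (192.0.2.1 is a documentation IP)
fail2ban-client set sshd unbanip 192.0.2.1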

There are hooks for sending email notifications etc, that seems excessive to me but it’s always good to have options to extend a program.

In the past I’ve tried using kernel rate limiting to minimise hostile activity. That didn’t work well as there are legitimate end users who do strange things (like a user who set up their web-cam to email them every time it took a photo).

Conclusion

Fail2ban has some good features. I don’t think it will do much good at stopping account compromise as anything that is easily guessed could be guessed using many IP addresses and anything that has a good password can’t be guessed without taking many years of brute-force attacks while also causing enough noise in the logs to be noticed. What it does do is get rid of some of the noise in log files which makes it easier to find and fix problems. To me the main benefit is to improve the signal to noise ratio of my log files.

09 September, 2018 06:58AM by etbe

hackergotchi for Clint Adams

Clint Adams

Firefox Extensions and Other Tragedies

Several months ago a Google employee told me not to panic about the removal of XUL because Firefox had probably mainlined the functionality I need from my ossified xul-ext packages. This appears to have been wildly inaccurate.

Antoine and Paul appear to have looked into such matters and I am not filled with optimism.

In preparation for the impending doom that is a firefox migration to buster, I have finally ditched RequestPolicy by turning uBlock Origin up to 11.

This means that I am only colossally screwed by a lack of replacements for Pentadactyl and Cookie Monster.

It appears that Waterfox is not in Debian so I cannot try that out.

Posted on 2018-09-09
Tags: bamamba

09 September, 2018 12:21AM

September 08, 2018

hackergotchi for Joey Hess

Joey Hess

usb drives with no phantom load

For a long time I've not had any network attached storage at home, because it's offgrid and power budget didn't allow it. But now I have 16 terabytes of network attached storage, that uses no power at all when it's not in use, and automatically spins up on demand.

I used a USB hub with per-port power control. But even with a USB drive's port powered down, there's a parasitic draw of around 3 watts per drive. Not a lot, but with 4 drives that's more power wasted than leaving a couple of ceiling lights on all the time. So I put all the equipment behind a relay too, so it can be fully powered down.

I'm using systemd for automounting the drives, and have it configured to power a drive's USB port on and off as needed using uhubctl. This was kind of tricky to work out how to do, but it works very well.

Here's the mount unit for a drive, media-joey-passport.mount:

[Unit]
Description=passport
Requires=startech-hub-port-4.service
After=startech-hub-port-4.service
[Mount]
Options=noauto
What=/dev/disk/by-label/passport
Where=/media/joey/passport

That's on port 4 of the USB hub, the startech-hub-port-4.service unit file is this:

[Unit]
Description=Startech usb hub port 4
PartOf=media-joey-passport.mount
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/sbin/uhubctl -a on -p 4 ; /bin/sleep 20
ExecStop=/usr/sbin/uhubctl -a off -p 4

The combination of PartOf with Requires and After in these units makes systemd start the port 4 service before mounting the drive, and stop it after unmounting. This was the hardest part to work out.

The sleep 20 is a bit unfortunate; it seems that it can take a few seconds for the drive to power up enough for the kernel to see it, and so without that the mount can fail, leaving the drive powered on indefinitely. Seems there ought to be a way to declare an additional dependency and avoid needing that sleep? Update: See my comment below for a better way.
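
As one possible variation (a sketch only, not necessarily the better way mentioned in the update), the fixed sleep could be replaced by polling until the kernel has enumerated the disk, using the same by-label path as the mount unit:

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/sbin/uhubctl -a on -p 4
# wait up to 30 seconds for the disk to appear, instead of sleeping blindly
ExecStartPost=/bin/sh -c 'for i in $(seq 1 30); do [ -e /dev/disk/by-label/passport ] && exit 0; sleep 1; done; exit 1'
ExecStop=/usr/sbin/uhubctl -a off -p 4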

Finally, the automount unit for the drive, media-joey-passport.automount:

[Unit]
Description=Automount passport
[Automount]
Where=/media/joey/passport
TimeoutIdleSec=300
[Install]
WantedBy=multi-user.target

The TimeoutIdleSec makes it unmount after around 5 minutes of not being used, and then its USB port gets powered off.

I decided to not automate the relay as part of the above, instead I typically turn it on for 5 hours or so, and use the storage whenever I want during that window. One advantage to that is cron jobs can't spin up the drives in the early morning hours.

08 September, 2018 07:12PM

hackergotchi for Daniel Pocock

Daniel Pocock

Who are/were the FSFE Fellowship? Starting Fellowship 2.0?

Since the FSFE Fellowship elected me as representative in April 2017, I've received a lot of questions from fellows and the wider community about what the Fellowship actually is. As representative, it is part of my role to help ensure that fellows are adequately informed and I hope to work towards that with this blog.

The FSFE Fellowship was started in 2005 and has grown over the years.

In 2009, around the time the Fellowship elections commenced, Georg Greve, FSFE's founder, commented:

The Fellowship is an activity of FSFE, and indeed one of the primary ways to get involved in the organisation. It is a place for community action, collaboration, communication, fun, and recruitment that also helps fund the other activities of FSFE, for example, the political work.

Later in 2009, articles appeared in places like Linux Pro Magazine promising

From November 2009, the Free Software Foundation Europe will be offering three free Fellowships each month to open source activists.

In May 2018, when Fellowship elections were abolished by a group of nine people, mainly staff, meeting in Berlin, a small news item was put out on a Saturday, largely unnoticed by the community, arguing that fellows have no right to vote because

the community would never accept similar representation for corporate donors, it is inappropriate to have such representation for any purely financial contributor.

How can long-standing FSFE members responsible for "community action, collaboration, communication, fun, and recruitment" be mistaken for a "purely financial contributor"? If open source activists were given free Fellowships, how can they be even remotely compared to a "corporate donor" at all? How can FSFE so easily forget all the effort fellows put in over the years?

The minutes show just one vote to keep democracy.

I considered resigning from the role but I sincerely hope that spending more time in the role might help some remaining Fellows.

Financial contributions

Between 2009 and 2016, fellows gave over EUR 1,000,000 to FSFE. Some are asking what they got in return; the financial reports use just six broad categories to show how EUR 473,595 was spent in 2016. One person asked: if FSFE only produced EUR 37,464 worth of t-shirts and stickers, is the rest of the budget just overhead costs? At the very least, better public reporting is required. The budget shows that salaries are by far the biggest expense, with salaries, payroll overheads and office facilities accounting for almost all of the budget.

In 2016 a single donor bequeathed EUR 150,000 to FSFE. While the donor's name may legitimately be suppressed for privacy reasons, management refuse to confirm whether this person was a fellow or to give the Fellowship representatives any information to ensure that the organization remains consistent with the philosophy it practised at the time the will was written. For an organization that can so easily abandon its Fellowship and metamorphose into a corporate lobby group, it is easy to imagine that a donor who wrote a will five or ten years ago may not recognize the organization today.

With overall revenues (2016) of EUR 650,000 and fellows contributing less than thirty percent of that, management may feel they don't need to bother with fellows or elections any more and they can rely on corporate funding in future. How easy it is to forget the contributions of individual donors and volunteers who helped FSFE reach the point they are in today.

Force-migration to the supporter program

Ultimately, as people have pointed out, the Fellowship has been a sinking ship. Membership was growing consistently for eight months after the community elected me but went into reverse from about December 2017 when fellows were force-migrated to the supporter program. Fellows have a choice of many free software organizations to contribute their time, skill and donations to and many fellows were prompted to re-evaluate after the Fellowship changes. Naturally, I have been contemplating the same possibilities.

Many fellows had included their status as an FSFE Fellow in their email signature and business card. When speaking at conferences, many fellows have chosen to be introduced as an FSFE Fellow. Fellows tell me that they don't want to change their business card to say FSFE Supporter, it feels like a downgrade. Has FSFE made this change in a bubble and misjudged the community?

A very German organization

FSFE's stronghold is Germany, with 665 fellows, roughly half the Fellowship. With membership evaporating, maybe FSFE can give up trying to stretch into the rest of Europe and try to regroup at home. For example, in France FSFE has only 42 fellows, that is, one percent of the 4,000 members of April, the premier free software organization of the French-speaking world. FSFE's standing in other large countries like the UK (83), Italy (62), the Netherlands (59) and Spain (65) is also very rudimentary.

Given my very basic level of German (somewhere between A1 and A2), I feel very privileged that a predominantly German community has chosen to vote for me as their representative.

Find your country in the data set.

FSFE beyond the fellowship

As the elections have been canceled, any members of the community who want to continue voting as members of the FSFE association or to attend the annual meeting, whether they were fellows or not, are invited to do so by clicking here to ask the president to confirm their status as FSFE members.

Fellowship 2.0?

Some people have asked whether the Fellowship should continue independently of FSFE.

It is clear that the fellows in Germany, Austria and Switzerland have the critical mass to set up viable associations of their own, for example a Free Software Fellowship e.V. If German fellows did this, they could elect their own board and run their own bank account with revenues over EUR 100,000 per year just from the existing membership base.

Personally, I volunteered to act as a representative of fellows but not as the leader or founder of a new organization. An independent Fellowship could run its own bank account to collect donations and then divide funds between different organizations instead of sending it all to the central FSFE account. An arrangement like this could give fellows more leverage to demand transparency and accounting about campaign costs, just as a large corporate donor would. If you really want your money to go as far as possible and get the best results for free software, this is a very sensible approach and it will reward those organizations who have merit.

If other fellows want to convene a meeting to continue the Fellowship, please promote it through the FSFE mailing lists and events.

Concluding remarks

Volunteers are a large and crucial part of the free software movement. To avoid losing a community like the Fellowship, it is important to treat volunteers equally and fully engage them in decision making through elections and other means. I hope that this blog will help fellows understand who we are so we can make our own decisions about our future instead of having FSFE staff tell us who to be.


Download data used in this blog.

08 September, 2018 04:37PM by Daniel.Pocock

Russell Coker

Google and Certbot (Letsencrypt)

Like most people I use Certbot AKA Letsencrypt to create SSL certificates for my sites. It’s a great service, very easy to use and it generally works well.

Recently the server running www.coker.com.au among other domains couldn’t get a certbot certificate renewed; here’s the error message:

Failed authorization procedure. mail.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "mail.gw90.de" was considered an unsafe domain by a third-party API, listen.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "listen.gw90.de" was considered an unsafe domain by a third-party API

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: mail.gw90.de
   Type:   unauthorized
   Detail: "mail.gw90.de" was considered an unsafe domain by a third-
   party API

   Domain: listen.gw90.de
   Type:   unauthorized
   Detail: "listen.gw90.de" was considered an unsafe domain by a
   third-party API

It turns out that Google Safebrowsing had listed those two sites. Visit https://listen.gw90.de/ or https://mail.gw90.de/ today (and maybe for some weeks or months in the future) using Google Chrome (or any other browser that uses the Google Safebrowsing database) and it will tell you the site is “Dangerous” and probably refuse to let you in.

One thing to note is that neither of those sites has any real content; I only set them up in Apache to get SSL certificates that are used for other purposes (like mail transfer as the name suggests). If Google had listed my blog as a “Dangerous” site I wouldn’t be so surprised; WordPress has had more than a few security issues in the past and it’s not implausible that someone could have compromised it and made it serve up hostile content without me noticing. But the two sites in question have a DocumentRoot that is owned by root and was (until a few days ago) entirely empty; now they have an index.html that just says “This site is empty”. It’s theoretically possible that someone could have exploited an RCE bug in Apache to make it serve up content that isn’t in the DocumentRoot, but that seems unlikely (why waste an Apache 0day on one of the less important of my personal sites). It is possible that the virtual machine in question was compromised (a VM on that server has been compromised before [1]) but it seems unlikely that they would host bad things on those web sites if they did.

Now it could be that some other hostname under that domain had something inappropriate (I haven’t yet investigated all possibilities). But if so Google’s algorithm has a couple of significant problems, firstly if they are blacklisting sites related to one that had an issue then it would probably make more sense to blacklist by IP address (which means including some coker.com.au entries on the same IP). In the case of a compromised server it seems more likely to have multiple bad sites on one IP than multiple bad subdomains on different IPs (given that none of the hostnames in question have changed IP address recently and Google of course knows this). The next issue is that extending blacklisting doesn’t make sense unless there is evidence of hostile intent. I’m pretty sure that Google won’t blacklist all of ibm.com when (not if) a server in that domain gets compromised. I guess they have different policies for sites of different scale.

Both I and a friend have reported the sites in question to Google as not being harmful, but that hasn’t changed anything yet. I’m very disappointed in Google, listing sites, not providing any reason why (it could be a hostname under that domain was compromised and if so it’s not fixed yet BECAUSE GOOGLE DIDN’T REPORT A PROBLEM), and not removing the listing when it’s totally obvious there’s no basis for it.

While it makes sense for certbot to not issue SSL certificates to bad sites, it seems that they haven’t chosen a great service for determining which sites are bad.

Anyway the end result was that some of my sites had an expired SSL certificate for a day. I decided not to renew certificates before they expired to give Google a better chance of noticing their mistake and then I was busy at the time they expired. Now presumably as the sites in question have an invalid SSL certificate it will be even harder to convince anyone that they are not hostile.

08 September, 2018 09:11AM by etbe

Paul Wise

The WebExtocalypse

Mozilla recently dropped support for Firefox XUL extensions.

The initial threat of this prompted me to discover how to re-enable XUL extensions by modifying Firefox's omni.ja file. That clearly is not going to last very long since Mozilla is also deleting XPCOM interfaces but I note the Tor Browser is temporarily still using XUL extensions.

Since I have some extensions I wrote for myself, I will need to rewrite them as WebExtension add-ons.

The first thing to do is check how to install WebExtension add-ons. My local XUL extensions are run from the corresponding git trees. Using an example extension I discovered that this no longer works. The normal way to install add-ons is to use the web-ext tool, upload to the Mozilla app store and then install from there. This seems like overkill for an unpolished local add-on. One way to work around this is to disable signing, but that seems suboptimal if one has installed Mozilla-signed add-ons, which I will probably have to do until Debian packages more add-ons. Luckily Mozilla offers alternative "sideloading" distribution mechanisms and Debian enables these by default for the Debian webext-* packages. Installing a symlink to the git repository into the extensions directory and adding a gecko identifier to the add-on manifest.json file works.
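
For reference, a minimal sketch of what that can look like; the add-on id, directory names and paths here are examples and will differ per setup:

# manifest.json needs an explicit gecko id for sideloading
{
  "manifest_version": 2,
  "name": "my-local-addon",
  "version": "0.1",
  "applications": { "gecko": { "id": "my-local-addon@example.org" } }
}

# symlink the unpacked source into the per-user extensions directory,
# using the add-on id as the link name ({ec8030f7-…} is Firefox's application id)
ln -s ~/src/my-local-addon \
  ~/.mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}/my-local-addon@example.org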

Then I started looking at how to rewrite XUL extensions and discovered the user-interface options are limited compared to XUL. So the Galeon-style smart-bookmarks workaround plugin I use a lot is not even possible to implement as a WebExtension add-on and will require some changes to search, bookmarks or WebExtensions user-interface APIs or a solution external to Firefox like a floating toolbar.

Another plugin I wrote adds a few buttons to the toolbar, but WebExtension add-ons are only allowed to add one button to the toolbar. The plugin is more logical as an address bar button, but again WebExtension add-ons are only allowed to add one button to the address bar. Each of these allows popups for additional user-interface. So the options are to split this into multiple plugins, one per button, or to require a second click in the popups.

The remaining task is to migrate from each of the xul-ext-* Debian packages. Some folks have already completed their transition and documented it.

Some packages simply got updated to the corresponding webext-* packages. Some packages were updated upstream but aren't yet in webext-* packages.

Some packages were no longer developed upstream but were updated in forks or reimplementations:

Some packages are no longer useful upstream but alternatives are available:

  • Adblock Plus: acquired by the untrustworthy advertising industry, replaced by uBlock Origin
  • Stylish: acquired by the untrustworthy advertising industry, replaced by Stylus
  • Cookie Monster: cookie-autodelete or uMatrix are possible alternatives
  • DOM Inspector: the native web developer tools are almost the same
  • HTTPS Finder: smart-https, https-by-default are alternatives and https-everywhere is kind of an alternative
  • livehttpheaders: the native web developer tools are mostly an alternative but headers are missing from the page info dialog

Some packages are blocked by missing APIs because they are not yet permitted to replace the Certificate Authorities with alternate trust models such as DNSSEC+DANE, Certificate Patrol, Perspectives, Monkeysphere or Communism.

Like many technology transitions, this one was done for good reasons but is extremely disruptive and a time sink for users and developers. I still have floppy disks that could contain viruses or poetry but I will never find out their content.

08 September, 2018 08:43AM

September 07, 2018

hackergotchi for Gunnar Wolf

Gunnar Wolf

So it is settled: Thinkpad FTW!

So, I hope this will help me go back to being more productive!

I ended up buying a Lenovo Thinkpad SK-8845 keyboard. As it was mentioned by Martin, jelly and Marcos on my previous blog post (hey! This is one of the rare occasions where I must say Thanks Lazyweb!), it is not a new model, but it seems to be in mint shape... Plus, I got it for only MX$745 (that is, ≈US$37), shipped to my office and all!

My experiences so far? Mostly positive. Yes, I would prefer the trackpad to be a bit larger (it is approx 6×4cm). Most noticeably, I spent some time getting my setup working, as I had to remap my keys — I rely quite a bit on the Super and Multi keys (oh, are you not a Unix person? Super is Mod4, usually located at the Windows keys; I reconfigured the Menu key to be Multi or Compose, to be able to input §ṫℝ∀ℕĠ̣∃ symbols, even some useful ones from time to time). This keyboard has no Windows or Menu keys, so I was playing a bit with how my fingers accept Super being at CapsLock and Multi being at ScrollLock... Let's see!
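
For what it's worth, xkeyboard-config already ships options for roughly this mapping, so a possible shortcut (a sketch; check that these option names exist in your local xkb-data) would be:

# Caps Lock as an additional Super, Scroll Lock as the Compose (Multi) key
setxkbmap -option caps:super -option compose:sclk
# list the option names available locally
grep -E '^  (caps|compose):' /usr/share/X11/xkb/rules/base.lst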

Also, I am super-happy with my laptop's keyboard (Thinkpad as well, X230), and I thought not having different mental models for laptop and office keyboards would be a win... But this is the seven-row Thinkpad model, and the X230 has the six-row one. Not much changes to the finger memory, but I've found myself missing the Esc key (one row higher) and PgUp/PgDn (in the upper corner instead of around the cursor keys). Strangest of all, I initially thought I would be able to remap Super and Multi to the two keys where I expected PgUp and PgDn to be (what are their names?), but... Looking at the keycodes they send, it is just not possible — they are hardwired to send Alt + → or Alt + ←. They will come in handy, I guess, and I will get used to them. But they are quite odd, I think. With all the people that complained loudly when Lenovo abandoned the seven-row in favor of the six-row layout... I guess I'm about to discover something good..?

Attachments: new_kbd.jpg (220.43 KB), just_kbd.jpg (258.96 KB)

07 September, 2018 06:00PM by gwolf

September 06, 2018

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

I’m running an ethereum node

cjac@server0:~/Downloads/geth-linux-amd64-1.8.14-316fc7ec$ df -h ~/.ethereum/
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg00-ethereum  148G  130G   11G  93% /home/cjac/.ethereum

this was from before I ran

./geth --syncmode "fast" --cache=1024

06 September, 2018 09:11PM by C.J. Collier

Iustin Pop

A change of scenery

After biking and biking this summer (and not making much progress), this past weekend sang a different tune. For my second time (ever), went for a kayaking weekend, this time on Lake Geneva. Basically, this:

Small kayak, meet big lake!

which is very different from my usual “outdoor” environment.

My previous time kayaking was a basic course, so lots of introduction and explanations (on land), plus a tiny bit of paddling, plus lots of rescue stuff (self and others). Definitely useful, but didn’t feel like actually “going kayaking”.

This weekend, however, was the next step - going a bit further, more paddling, more “on the way” learning. And while the weather wasn’t very nice on the first day (see above), the second day’s morning was awesome (the afternoon was again just drab):

Nice outdoor weather!

Why Kayaking?

Well, for one, water is awesome. And then the question is, what kind of water activity? Not swimming since I suck at that.

It all started with an ad, I think on digitec’s web site, about a certain brand of inflatable catamarans. I did not know these things existed at all (for people living in an apartment instead of a castle), so it sparked my interest in accessible water sports for city dwellers.

One thing led to another and to yet another and in the end I reached the conclusion that learning kayaking is most interesting from a couple points of view:

  1. It is self-powered, and kind of an endurance activity. This is important for me. While sailing is definitely on my long term radar for the skill itself, paddling is much more basic and involved (from my point of view), so it possibly answers my search for a sport to counterbalance my biking (upper/core vs. core/lower).

  2. You can paddle on a river (which seems a bit strange to me), on a lake, on a big lake, and even on the sea. I mean here along the shore, not crossing a sea—only crazy people would do that. So there is room to grow one’s skills.

  3. In a proper sea kayak, you are “in” the water. On a SUP board (which I also did a few times), you’re “on” the water. But a kayak seems a very, hmm, intimate way of travelling on the water. And because you sit much lower, it’s not so much a fight for survival, err, balance as on a SUP board. You can actually take time to take a picture, drink water, etc.

  4. It is a symmetrical sport. Very important for me, and again bonus points over stand-up paddling, since you have a symmetrical paddle (feathering aside, see the wikipedia article on paddles).

So after all thinking done, and after finding good course options, I was eager to learn basic kayaking.

How’s it feel?

The funniest thing is how slow moving on the water is. After my second kayaking weekend, it looks like with neutral wind and no waves, my personal sustained speed is somewhere around 6km/h (3.2 knots) for around 10 minutes, 5.2km/h (2.8 knots) for half an hour. The exact value is not important; the point is, it’s definitely less than 10km/h. I can run faster, I can bike significantly faster, I can even bike uphill (up to a certain inclination) at this speed or even faster.

And yet, this is a speed that still makes you feel that you’re “going”, despite being just a bit faster than walking. And if you get wind or some waves, it feels fast even! A bit funny, but true. And because you can go straight for your target, over minutes the distance does add up. Just not fast enough :)

Another fun thing about kayaking is how much fun shallow water is. On a sunny day, you can actually see lots of things in the water if it is shallow. I didn’t know, for example, what underwater plants look like :)

One downside, however, is that I can’t yet make this an aerobic activity. Since I’m mostly cycling and in some odd years running, my upper body is not strong enough to be able to paddle strongly beyond some very short distances, so I get tired (muscles) before I can actually get my heart rate up. On these two weekend days, max HR as recorded with a chest strap (so it should be accurate) was 107 and 115 bpm respectively. Which is a far, far cry from my biking/running steady state, not even speaking of max. I guess this will get fixed with more experience and training?

Water is complex

The other very curious thing I learned is that water is much, much more complex than I thought. Started to read some books and it’s definitely complex on paper, but even on a lake, it reacts to wind in oh so interesting ways! A bit of wind and the surface changes immediately, some real wind and in 10 minutes you have waves (on a lake! I didn’t know this was possible…) that become a real impediment to easy, straight paddling.

As an example, it went from this:

Normal water for now…

to this in less than five minutes (and the picture doesn’t do it justice):

Wow, waves!

and to even a bit more, in a very short time span. From an easy paddle, to struggling against waves and the wind (which is trying to both push the kayak back and also turn it). And then, ten minutes later, all good again. I was like “huh?”…

And then, the next day started with again what I call “normal” water (just small ripples), but by the afternoon it became this:

A very big mirror

Which again confused me greatly. How can a large body of water be this still? Even ships passing were only generating large waves, but the waves themselves had a smooth surface.

Olympus TG-5 Tough camera

Last year I bought an Olympus Tough camera (a TG-5), with family beach vacations in mind. It got some use there, but it’s actually perfect for kayaking! I shot a large number of pictures this weekend with it—which is why I was so slow, maybe. But it was fun to not fear water, or even better, to just put the camera in the water:

Shallow water!

However, I also learned that being in a kayak even in nice weather gives a very unstable platform, so even with optical stabilisation (which the TG-5 does have), pictures will often come out tilted or even miss the framing significantly. So my new modus operandi is to set the camera on sequential low (that means with mechanical shutter), and fire 4-6 frames of every picture. This way, it’s somewhat guaranteed that each burst will have a usable picture (as usable as can be from this small sensor).

Even with a small sensor, in nice sunny weather, pictures do look nice, as you can see in the “on the water” pictures in this post, and at a stretch, it can even pretend to be good for regular photography, still from the water of course:

Castle Chillon, water view

And because it’s small and easy to use, you don’t scare the wildlife, for example:

Hello!

So overall I’m very happy with this camera, for its intended purpose.

One funny thing regarding taking pictures on the water was that, as ships were passing nearby—which is a good thing, because big ship means many large waves, which are awesome fun!—I kept taking photos. And at one point, I got a very funny sensation, and a feeling first of “I’ve done this before” and then “good angle, launch all torpedoes!”. When I realised what was happening—flashbacks from computer games from ages ago—I started laughing out loud. Funny how the brain works and how it makes connections… So here’s one such picture:

Just imagine a periscope view for this one…

Thoughts on the future

Kayaking is fun, but it’s a much less accessible sport than biking. I still don’t know if I can actually pursue this long term beyond the “relaxation” level, because the time overhead of getting on the water is prohibitive for all the solutions I’ve investigated so far. It seems that what you can do while living in an apartment is one of inflatable, foldable, or modular kayaks, and they all come with their downsides.

A “real” sea kayak is a beautiful thing, and makes one dream of expeditions; dangerous, beautiful, real expeditions out on the ocean (coast). But that’s out of the reach of normal people living in a land-locked country…

Well, we’ll see what the future brings. At least, getting out on the water once in a while is a nice thing, and one that I’m looking forward to when I’ll be able to.

The non-kayaking side

While the weekend was mostly about kayaking, the nice location did lend itself to other “normal” activities, like enjoying the local food, a bit of walking around, and taking regular land-based photos. Not the first time, and likely not the last time we’ll visit this area.

And with that, a last picture (but you can see more here):

Good night!

06 September, 2018 12:07AM

September 05, 2018

hackergotchi for Gunnar Wolf

Gunnar Wolf

Letter to UNAM's Rector regarding the facts of September 3rd; omission, complicity and impunity are also violence

Our university, among the largest in the world and among the most important in Latin America, had an unexpected and traumatic event this past September 3rd: a group of students from one of the high schools our university operates, peacefully protesting, demanding mostly proper study conditions and better security for their area, were violently attacked by a large, organized group. Things are still very much in flux, and we have yet to see what this really meant, and what its consequences are.

But in the meantime, I cannot but take as mine the following words by Comité Cerezo, rendered here in English; the original Spanish is linked below.

Original here: Carta al Rector de la UNAM por los hechos sucedidos el 3 de septiembre: la omisión, complicidad e impunidad también son violencia

Ciudad Universitaria, September 4, 2018

Enrique Luis Graue Wiechers
Rector of the Universidad Nacional Autónoma de México

In light of the events of September 3rd on the esplanade of the UNAM Rectory and its surroundings, Comité Cerezo México, most of whose members are part of the university community as alumni, current students, academics and workers, address you in order to state that, like the great majority of those who have spoken out, we repudiate the acts of violence by which a group of individuals violently attacked students who were demonstrating peacefully, exercising their human right to protest. However, we consider that repudiating the violence and promising an investigation falls short of what these events require. Therefore, we state that:

1. We repudiate with the same force the negligent and indolent attitude that the various videos and images show on the part of the Auxilio UNAM security corps in the face of the violence. We even wonder why members of this security corps approached the groups of young men attacking the demonstrators, and even shook their hands, instead of preventing them from assaulting the students.

2. We repudiate the fact that, a priori, some statements by the authorities claimed the attackers were people from outside the academic community. According to information circulating on social networks (which of course must be verified), some of the attackers are part of the student community and of groups that operate at least in CCH Azcapotzalco, CCH Naucalpan and CCH Vallejo. Condemning the violence while hastily asserting that the attackers are not members of the community is inconsistent with the promise to investigate the events. In the same vein, claiming that these events seek to muddy the atmosphere, without a clear investigation of which group operated, and without clarity about the chain of command and the involvement of certain authorities, contributes nothing to resolving the conflict.

3. We express our astonishment that, even though the authorities' statements claim they are open to dialogue, there has been no mention of whether, and how, the demands for which the students were demonstrating at the Rectory will be addressed.
In view of this, we demand that the responsible authorities, as soon as possible:

a) Explain to the university community why the Auxilio UNAM corps, as in other already public cases, neither stopped the attackers nor tried to contain them. They must also explain to the community why a member of Auxilio UNAM stated on video to a media outlet that they “had orders from above not to act”. The university community demands clear accountability for how and why things were handled this way. Likewise, they must clarify who the officials were who, in the various videos, stand near or greet the group of attackers, and why, instead of stopping the assault, they limited themselves to watching and, in some cases, interacting with these groups.

b) Make the investigation of the events, as well as its progress, public. Such an investigation requires great thoroughness and clarity. The authorities must explain to everyone: who were the young, and many not so young, attackers? Which group or groups do they belong to? How did they get to the Rectory? But clarifying the facts of the attack is not enough; it is also necessary to investigate who ordered or orchestrated it, the chain of omissions that made it possible, and whether or not authorities were involved, so that not only the perpetrators of the assaults are investigated but also the entire chain of command that planned or ordered them.

c) Attend to and provide all necessary support to the students who were attacked, as well as their families and friends, in a comprehensive way, supporting them in every action they may need: not only medical and psychological care, but also legal accompaniment should they decide to proceed against their attackers.

d) Immediately appoint a representative of the Rectory responsible for receiving a commission that will present the students' list of petitions or demands, and immediately account for how those demands will be addressed. Otherwise, saying that dialogue and openness are the solution, without establishing concrete and clear mechanisms for how the students' demands will be addressed, is merely a declaration that does not solve the problem.

e) Ensure that under no circumstances are the students who have decided to stop activities, and those who are marching and/or gathering on the Rectory esplanade in exercise of their human right to protest over the grave events of September 3rd at the Rectory, intimidated, harassed, threatened or assaulted by porro groups (whether or not from outside the university community), nor by authorities or members of the community itself.

05 September, 2018 06:09PM by gwolf

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in August 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Really good news this month as Yavor Doganov provided patches for  gamazons (#885735), gnomekiss (#885740) and teg (#885751) which all depended on obsolete GNOME 2 libraries. He succeeded in porting them to GooCanvas and GNOME 3. We are currently aware of some issues in Teg (#907834) and would appreciate more feedback from game testers. In any case this was a non-trivial feat and many thanks go to Yavor who prevented the removal of three games from Debian.
  • I applied a patch from Adrian Bunk which made FreeOrion (#906746) more portable and packaged the latest and greatest release 0.4.8 later.
  • I fixed a broken start script in FreeCol due to OpenJDK 10 changes. (#907661)
  • The Spring RTS engine was affected by a GCC-8 RC bug. (#906409)
  • I backported FreeCiv 2.6.0 to Stretch.
  • I updated some games to the latest standards in Debian, made some minor changes and applied patches to fix FTCBFS bugs or build failures due to a missing libm library. Those issues were solved in tenmado, supertransball2 (#902537), seahorse-adventures, empire (#900197), phlipple (#907207) and ace-of-penguins (#900200).
  • I sponsored mupen64plus-qt for Dan Hastings.

Debian Java

Misc

Debian LTS

This was my thirtieth month as a paid contributor and I have been paid to work 23,75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 13.08.2018 until 19.08.2018 and from 27.08.2018 until 02.09.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in intel-microcode, bind9, confuse, libykneomgr, mp4v2, gdm3, wesnoth-1.10, ruby-zip, otrs2, mathjax, mono, tcpflow, bluez, openssh, mariadb-10.0, tomcat-native, wordpress, thunderbird, spice, spice-gtk, libextractor, postgresql-9.1, libcgroup, zutils, soundtouch, squirrelmail, git-annex, ghostscript, libpgjava, elfutils, libpodofo, libtirpc, libxkbcommon, libtasn1-6, cinder, 389-ds-base, wireshark, php5, libzypp, imagemagick, kfreebsd-10, tiff, discount and polarssl.
  • DLA-1467-1.  Issued a security update for ruby-zip fixing 1 CVE.
  • I worked on gdm3 to fix CVE-2018-14424.  I backported the patch to Jessie but could still trigger a session restart with the POC. Since there is no crash and the session is completely restored, we believe now that this is the intended behavior.  I also tried to contact Chris Coulson, the original bug reporter, for further advice but have not received a reply yet. If we don’t discover another issue we will release a DLA for gdm3 in September.
  • DLA-1472-1. Issued a security update for libcgroup fixing 1 CVE.
  • DLA-1473-1. Issued a security update for otrs2 fixing 1 CVE.
  • DLA-1482-1. Issued a security update for libx11 fixing 3 CVE.
  • DLA-1475-1. Issued a security update for tomcat-native fixing 2 CVE.
  • I am still working on a security update for ghostscript. I have already backported the majority of patches to Jessie to fix a serious sandboxing issue with the -dSAFER mode.  More patches are required to fix the problem and only yesterday more CVE were assigned to them.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my third month and I have been paid to work 12  hours on ELTS.

  • I was in charge of our ELTS frontdesk from 13.08.2018 until 19.08.2018 and I triaged CVE in intel-microcode, azureus, gdm3, couchdb, lxc, squirrelmail, wordpress, wpa, xen, tomcat7, firmware-nonfree, postgresql-9.1, apache2, bluez, dojo, libcommons-compress-java, spice, spice-gtk, tomcat-native, libcgroup, libx11 and samba.
  • ELA-21-1. Issued a security update for openssl fixing 1 CVE.
  • ELA-27-1. Issued a security update for tomcat7 fixing 1 CVE.
  • ELA-28-1. Issued a security update for tomcat-native fixing 2 CVE.
  • ELA-20-2. Issued a regression update for busybox.
  • ELA-29-1. Issued a security update for postgresql-9.1 fixing 1 CVE.
  • ELA-30-1. Issued a security update for libx11 fixing 3 CVE.

Thanks for reading and see you next time.

05 September, 2018 04:16PM by Apo

hackergotchi for Gunnar Wolf

Gunnar Wolf

As for useless keys...

After a long rant with a nice and most useful set of replies regarding my keyboard, yesterday I made the mistake –and I am sure it was the first time in five years– of touching my Power key.

Of course, my computer (which I never shut down) obliged and proceeded to shut itself down, no questions asked – probably because I don't use a desktopesque WM, so it exhibits the same behavior as the system's actual power switch. I was limited to powerlessly watching it cleanly shut down...

It didn't make me very happy. That key should not exist on a keyboard!
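
For anyone in the same boat: on a systemd-based system the immediate shutdown usually comes from systemd-logind handling the power key when no desktop session inhibits it, so a minimal sketch to defang the key (assuming that setup) would be:

# /etc/systemd/logind.conf
[Login]
HandlePowerKey=ignore

# apply the change
systemctl restart systemd-logind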

05 September, 2018 03:27PM by gwolf

hackergotchi for Martín Ferrari

Martín Ferrari

Creating containers with systemd

This post is mostly a note to myself, as I can never remember the exact steps. But I think it would be useful for other people.

As much as I dislike systemd, it has to be said that their implementation of a container manager is neat. Simple, lightweight, and well integrated across the systemd tools.

As usual, the Arch wiki has excellent documentation on the matter. I will write another post on how to set up the host, but not today (as I can't remember the details!). Suffice to say that a pre-requisite is to have the systemd-container package installed and configured.

Then, to spin a new instance that will have its own init system and will be started at boot, you only need the following commands:

# Sample values.
DISTRO=stretch
MACH=my_container
# Install base system.
$ debootstrap $DISTRO /var/lib/machines/$MACH
# You need dbus for proper integration from the host.
$ chroot /var/lib/machines/$MACH apt install dbus
# Now it is the time to do any other customisation directly on the filesystem (see below).
# Place your customisations in the override file.
$ mkdir -p /etc/systemd/system/systemd-nspawn@$MACH.service.d/
$ echo $CUSTOM > /etc/systemd/system/systemd-nspawn@$MACH.service.d/override.conf
# Reload systemd, and enable and start the container.
$ systemctl daemon-reload 
$ systemctl enable systemd-nspawn@$MACH
$ systemctl start systemd-nspawn@$MACH
# Profit!
$ machinectl shell $MACH /bin/bash

The override file is important if you -like me- want to use the containers in the same network namespace as the host.

The default configuration for systemd-nspawn uses the following arguments:

$ grep ExecStart /lib/systemd/system/systemd-nspawn@.service
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth -U --settings=override --machine=%i

Note the --network-veth parameter. This creates a virtual ethernet connection between host and guest, which you must then route and/or masquerade properly. It is a sane setting for isolation, but in my case, I usually use these containers as glorified chroots, so I want to share the network. So, in the override file, I clear the ExecStart setting, and then put a modified command line:

$ cat /etc/systemd/system/systemd-nspawn@$MACH.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest -U --settings=override --machine=%i

If you are using this configuration, you should also copy network-related files into the container, before the first start (see below why):

$ cp /etc/hosts /etc/resolv.conf /var/lib/machines/$MACH/etc/

It is important to know that during the first execution with the -U flag (which is the default), systemd-nspawn will chown all the files in the container to use private user IDs. So, you should never modify these files directly, or log in with the chroot command, as you will make them inaccessible for the processes inside the container. The libnss-mymachines package does a nice job of mapping these private users in the host environment.
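
To actually make use of libnss-mymachines, the module also has to be listed in the host's /etc/nsswitch.conf; a minimal sketch (merge the extra entry into whatever lines are already there on your system):

# /etc/nsswitch.conf (excerpt)
passwd: compat mymachines
group:  compat mymachines
hosts:  files mymachines dns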


05 September, 2018 03:21PM