April 28, 2015

Raphael Geissert

Updates to the Debian sources editor

Since the announcement of the chrome/chromium and firefox/iceweasel extensions that add in-browser editing of Debian sources, there have been a few changes:

  • Update to Ace 1.1.9, which brings quite a few fixes
  • Downloads with proper file names! This should work with recent browsers. E.g. for a patch it will propose package-version_path_file.patch
  • Changing the order of additions and removals in the generated patches (it was strange to see additions first)
  • Improvements to the compatibility with debsources' URL parameters
  • Allow the extension to be enabled in private browsing mode (firefox/iceweasel)
  • Loading of the extension when browsing debsources via https
  • Internal extension packaging cleanup, including the generation of a versioned copy of the web (remote) files, so that users of version 0.0.x actually use the remote code for version 0.0.x

If your browser performs automatic updates of the extensions (the default), you should soon be upgraded to version 0.0.10 or later, bringing all those changes to your browser.

Want to see more? multi-file editing? emacs and vim editing modes? in-browser storage of the modified files? that and more can be done, so feel free to join me and contribute to the Debian sources editor!

28 April, 2015 06:00AM by Raphael Geissert (noreply@blogger.com)

Gunnar Wolf

Bestest birthday ever

That's all I need to enjoy the bestest party ever.

Oh! Shall I mention that we got a beautiful present for the kids from our very dear DebConf official Laminatrix! Photos not yet available, but will provide soon.

28 April, 2015 03:24AM by gwolf

April 27, 2015

Matthew Garrett

Reducing power consumption on Haswell and Broadwell systems

Haswell and Broadwell (Intel's previous and current generations of x86) both introduced a range of new power saving states that promised significant improvements in battery life. Unfortunately, the typical experience on Linux was an increase in power consumption. The reasons why are kind of complicated and distinctly unfortunate, and I'm at something of a loss as to why none of the companies who get paid to care about this kind of thing seemed to actually be caring until I got a Broadwell and looked unhappy, but here we are so let's make things better.

Recent Intel mobile parts have the Platform Controller Hub (Intel's term for the Southbridge, the chipset component responsible for most system I/O like SATA and USB) integrated onto the same package as the CPU. This makes it easier to implement aggressive power saving - the CPU package already has a bunch of hardware for turning various clock and power domains on and off, and these can be shared between the CPU, the GPU and the PCH. But that also introduces additional constraints, since if any component within a power management domain is active then the entire domain has to be enabled. We've pretty much been ignoring that.

The tldr is that Haswell and Broadwell are only able to get into deeper package power saving states if several different components are in their own power saving states. If the CPU is active, you'll stay in a higher-power state. If the GPU is active, you'll stay in a higher-power state. And if the PCH is active, you'll stay in a higher-power state. The last one is the killer here. Having a SATA link in a full-power state is sufficient to keep the PCH active, and that constrains the deepest package power savings state you can enter.

SATA power management on Linux is in a kind of odd state. We support it, but we don't enable it by default. In fact, right now we even remove any existing SATA power management configuration that the firmware has initialised. Distributions don't enable it by default because there are horror stories about some combinations of disk and controller and power management configuration resulting in corruption and data loss and apparently nobody had time to investigate the problem.
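
The policy is exposed per SATA host via sysfs, so you can already experiment by hand; a sketch (the host number varies per system, and the data-corruption caveats above apply):

# Query the current SATA link power management policy
cat /sys/class/scsi_host/host0/link_power_management_policy
# Request aggressive link power management on that port
echo min_power > /sys/class/scsi_host/host0/link_power_management_policy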

I did some digging and it turns out that our approach isn't entirely inconsistent with the industry. The default behaviour on Windows is pretty much the same as ours. But vendors don't tend to ship with the Windows AHCI driver; they replace it with the Intel Rapid Storage Technology driver - and it turns out that that has a default-on policy. But to make things even more awkward, the policy implemented by Intel doesn't match any of the policies that Linux provides.

In an attempt to address this, I've written some patches. The aim here is to provide two new policies. The first simply inherits whichever configuration the firmware has provided, on the assumption that the system vendor probably didn't configure their system to corrupt data out of the box[1]. The second implements the policy that Intel use in IRST. With luck we'll be able to use the firmware settings by default and switch to the IRST settings on Intel mobile devices.

This change alone drops my idle power consumption from around 8.5W to about 5W. One reason we'd pretty much ignored this in the past was that SATA power management simply wasn't that big a win. Even at its most aggressive, we'd struggle to see 0.5W of saving. But on these new parts, the SATA link state is the difference between going to PC2 and going to PC7, and the difference between those states is a large part of the CPU package being powered up.

But this isn't the full story. There's still work to be done on other components, especially the GPU. Keeping the link between the GPU and an internal display panel active is both a power suck and requires additional chipset components to be powered up. Embedded DisplayPort 1.3 introduced a new feature called Panel Self-Refresh that permits the GPU and the screen to negotiate dropping the link, leaving it up to the screen to maintain its contents. There are patches to enable this on Intel systems, but it's still not turned on by default. Doing so increases the amount of time spent in PC7 and brings corresponding improvements to battery life.
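
On kernels carrying those patches, Panel Self-Refresh can be switched on for testing via an i915 module parameter; a sketch, with availability depending on your kernel version:

# On the kernel command line (or as a modprobe option)
i915.enable_psr=1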

This trend is likely to continue. As systems become more integrated we're going to have to pay more attention to the interdependencies in order to obtain the best possible power consumption, and that means that distribution vendors are going to have to spend some time figuring out what these dependencies are and what the appropriate default policy is for their users. Intel's done the work to add kernel support for most of these features, but they're not the ones shipping it to end-users. Let's figure out how to make this right out of the box.

[1] This is not necessarily a good assumption, but hey, let's see

27 April, 2015 06:33PM

Scott Kitterman

Enabling DNSSEC Support For OpenDKIM

If you are using DNSSEC you can now use it to verify DKIM keys with opendkim.

This does require a bit of configuration.

Opendkim uses unbound for DNSSEC support.

You have to:

  • Install the unbound package (not just the library, which is already pulled in as an opendkim dependency)
  • Configure the DNSSEC trust anchor for unbound ( either in /etc/unbound/unbound.conf or by adding a configuration snippet to /etc/unbound/unbound.conf.d – the latter makes it much less likely you’ll have to resolve conflicts in the configuration file if the default file is changed on later package upgrades)
  • Update /etc/opendkim.conf and add:

ResolverConfiguration     /etc/unbound/unbound.conf
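
For the trust anchor step, a minimal snippet might look like the following (the snippet file name is illustrative; on Debian the unbound-anchor tool maintains the root key in /var/lib/unbound/root.key):

# /etc/unbound/unbound.conf.d/trust-anchor.conf
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"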

Once that’s done, restart opendkim and your DKIM key queries are DNSSEC-protected (you can verify this in your mail logs, since opendkim annotates unprotected keys when it logs).

Note:  This should also apply to Ubuntu 14.04, 14.10, and 15.04.

Update: In Wheezy (and Squeeze, at least the version in backports, I didn’t check the release version) and Ubuntu 10.04 (similarly with backports) this was possible too.  The opendkim.conf parameter was called UnboundConfigFile.  You may have to update your local configuration to use the new name when you upgrade.

27 April, 2015 02:43PM by skitterman

Gunnar Wolf

Worthy weekend

(No, I'm not talking about a future Ubuntu release... After all, what kind of weird animal would a weekend be?)

This weekend we took the kids outside for the first time (not counting, of course, visits to the pediatrician). We were quite anxious... Of course, they were born somewhat underweight and at 7½ months of gestation. But this Saturday we felt adventurous, and took the kids out for a day among people!

It might not sound like a big deal, but... Well, we took a not especially beautiful or scenic route: we took them to the supermarket, and had a small lunch out. For the first time in the two months they have already been with us.

Dinner with friends at home, having a very good time, and –as expected– a... Very hard night for us. All that excitement had the babies very nervous.

Today –again, for the first time– we took the children out to visit some friends of ours. Again, it was great, they behaved very nicely, and were lovely all around.

Let's see what this night holds for us.

Anyway, with them growing slowly but steadily... We are very happy, thankful parents. For the first time since Regina has been with me in Mexico, we decided we would not have a birthday party (yes, I'm 30 minutes away from being 39 years old). I cannot imagine a fuller, better celebration than what we are having. These two babies are the real event in our lives.

Oh... And by the way, this weekend also saw the release of a great new Debian release: Debian 8, codenamed Jessie. Thanks, folks, for such a great birthday present ;-) For reasons that should by now be obvious, I wasn't able to go to either of the release parties I knew of in Mexico City (even though one of them was ~500m from home!)

27 April, 2015 04:34AM by gwolf

Russ Allbery

lbcd 3.5.2

This is a fairly minor patch release of lbcd, a daemon that listens for and responds to a simple UDP protocol to request information about system load. It's used in conjunction with lbnamed for dynamic DNS, and can also be used as a lightweight way to remotely query load.

The only real change in this version is to support linking with libsystemd instead of libsystemd-daemon, since systemd upstream has merged the various small support libraries into one. I also did my normal merge of changes from C TAP Harness and rra-c-util.

NOTE: This package is actually orphaned. No one else has picked it up, and I still maintain the Debian package, so I went ahead and did a new release with this fix. But I'm not planning on doing any significant work on it, and am happy to hand it off to another maintainer.

You can get the latest release from the lbcd distribution page.

27 April, 2015 12:56AM

rra-c-util 5.7

This release has a somewhat random collection of accumulated fixes.

A couple of them are for the PAM libraries: new support for the Mac OS X PAM implementation, which doesn't use the same options and error codes as the rest of the world. It also uses a different pattern of const declaration, which required some additional Autoconf probing for the fake PAM library for testing.

There are also a few fixes for the systemd probe framework: libsystemd-daemon has been rolled into libsystemd in current versions, and the probe was using $(), which doesn't work on Solaris 10.
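
The portable fix for the latter is the old backtick form of command substitution, which even Solaris 10's /bin/sh understands; a sketch (the actual probe commands differ):

# Not understood by Solaris 10's /bin/sh:
flags=$(pkg-config --cflags libsystemd)
# Portable to such old shells:
flags=`pkg-config --cflags libsystemd`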

The Kerberos Autoconf macros should now hopefully work with the version of Kerberos bundled with Solaris 10.

Finally, this release supports checking for Clang as a compiler and choosing compiler warning flags accordingly, although rra-c-util isn't warning-free with Clang -Weverything yet.

You can get the latest release from the rra-c-util distribution page.

27 April, 2015 12:44AM

C TAP Harness 3.3

The primary purpose of this release is to merge work by D. Brashear on making it easier to show verbose test success and failure output. Now, if runtests is run with the -v option, or the C_TAP_VERBOSE environment variable is set, the full output of each test case will be printed. The output is currently pretty messy, and I hope to improve it in the future, but it's a start.
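
Usage would look something like this (the test list argument is illustrative):

# Verbose output via the new command-line flag
./runtests -v TESTS
# ...or via the environment variable
C_TAP_VERBOSE=1 ./runtests TESTS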

This release also supports compilation with Clang with all warnings enabled, and resolves all warnings shown by -Weverything (except the ones about padding, because that's not actually a useful warning). It includes new machinery to detect the compiler and switch warning flags. I will hopefully get a chance to go through my other projects and make them build cleanly with Clang, but it requires adding casts for any conversion from unsigned to signed or vice versa, so that may take a while.

You can get the latest version from the C TAP Harness distribution page.

27 April, 2015 12:32AM

Steve Kemp

Validating puppet manifests via git hooks.

It looks like I'll be spending a lot of time working with puppet over the coming weeks.

I've set up some toy deployments on virtual machines, and have converted several of my own hosts to using it, rather than my own slaughter system.

When it comes to puppet some things are good and some things are bad, as expected, and as with any similar tool (even my own). At the moment I'm just aiming for consistency and making sure I can control all the systems - BSD, Debian GNU/Linux, Ubuntu, Microsoft Windows, etc.

Little changes are making me happy though - rather than using a local git pre-commit hook to validate puppet manifests I'm now doing that checking on the server-side via a git pre-receive hook.

Doing it on the server-side means that I can never forget to add the local hook and future-colleagues can similarly never make this mistake, and commit malformed puppetry.
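
A minimal sketch of such a pre-receive hook, assuming puppet is installed on the server and ignoring edge cases like branch creation and deletion:

#!/bin/sh
# Reject any push containing a puppet manifest that fails validation.
while read oldrev newrev refname; do
    for f in $(git diff --name-only "$oldrev" "$newrev" -- '*.pp'); do
        tmp=$(mktemp)
        git show "$newrev:$f" > "$tmp"
        if ! puppet parser validate "$tmp"; then
            rm -f "$tmp"
            echo "Rejecting push: $f fails 'puppet parser validate'" >&2
            exit 1
        fi
        rm -f "$tmp"
    done
done
exit 0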

It is almost a shame there isn't a decent collection of example git-hooks, for doing things like this puppet-validation. Maybe there is and I've missed it.

It only crossed my mind because I've had to write several of these recently - a hook to rebuild a static website when the repository has a new markdown file pushed to it, a hook to validate syntax when pushes are attempted, and another hook to deny updates if the C-code fails to compile.

27 April, 2015 12:00AM

April 26, 2015

Erich Schubert

Your big data toolchain is a big security risk!

This post is a follow-up to my earlier post on the "sad state of sysadmin in the age of containers". While I was drafting this post, that story got picked up by HackerNews, Reddit and Twitter, sending a lot of comments and emails my way. Surprisingly many of the comments are supportive of my impression - I would have expected to see much more insults along the lines "you just don't like my-favorite-tool, so you rant against using it". But a lot of people seem to share my concerns. Thanks, you surprised me!
Here is the new rant post, in the slightly different context of big data:

Everybody is doing "big data" these days. Or at least, pretending to do so to upper management. A lot of the time, there is no big data. People do more data analysis than before, and therefore stick the "big data" label on it to promote themselves and get a green light from management.
"Big data" is not a technical term. It is a business term, referring to any attempt to get more value out of your business by analyzing data you did not use before. From this point of view, most of such projects are indeed "big data" as in "data-driven revenue generation" projects. It may be unsatisfactory to those interested in the challenges of volume and the other "V's", but this is the reality how the term is used.
But even in those cases where the volume and complexity of the data would warrant the use of all the new toys, people overlook a major problem: the security of their systems and of their data.

The currently offered "big data technology stack" is anything but secure. Sure, companies try to earn money with security add-ons such as Kerberos authentication to sell multi-tenancy, and with offering their version of Hadoop (their "Hadoop distribution").
The security problem is deep inside the "stack". It comes from the way this world ticks: the world of people that constantly follow the latest tool-of-the-day. In many of the projects, you no longer have mostly Linux developers that co-function as system administrators, but you see a lot of Apple iFanboys now. They live in a world where technology is outdated after half a year, so you will not need to support a product for longer than that. They love reinstalling their development environment frequently - because each time, they get to change something. They also live in a world where you would simply get a new model if your machine breaks down at some point. (Note that this will not work well for your big data project, restarting it from scratch every half year...)
And while Mac users have recently been surprisingly unaffected by various attacks (and unconcerned about e.g. GoToFail, or the failure to fix the rootpipe exploit), the operating system is not considered to be very secure. Combining this with users who do not care is an explosive mixture...
This type of developer - good at getting a prototype website for a startup kicking in a short amount of time, rolling out new features every day to beta test on the live users - is what currently makes the Dotcom 2.0 bubble grow. It's also this type of user that mainstream products aim at: he has already forgotten what happened half a year ago, but is looking for the next tech product to be announced soon, and is willing to buy it as soon as it is available...
This attitude causes a problem at the very heart of the stack: in the way packages are built, upgrades (and safety updates) are handled, etc. - nobody is interested in consistency or reproducibility anymore.
Someone commented on my blog that all these tools "seem to be written by 20 year old kids". He is probably right. It wouldn't be so bad if we had some experienced sysadmins with a cluebat around - people who have experience of how to build systems that can be maintained for 10 years, and securely deployed automatically, instead of relying on puppet hacks, wget and unzipping of unsigned binary code.
I know that a lot of people don't want to hear this, but:
Your Hadoop system contains unsigned binary code in a number of places - code that people downloaded, uploaded and redownloaded countless times. There is no guarantee that a .jar ever was what people think it is.
Hadoop has a huge set of dependencies, and little of this has been seriously audited for security - and in particular not in a way that would allow you to check that your binaries are built from this audited code anyway.
There might be functionality hidden in the code that just sits there and waits for a system with a hostname somewhat like "yourcompany.com" to start looking for its command and control server to steal some key data from your company. The way your systems are built, they probably do not have much of a firewall guarding against this. Much of the software may be constantly calling home, and your DevOps would not notice (nor would they care, anyway).
The mentality of "big data stacks" these days is that of Windows Shareware in the 90s. People downloading random binaries from the Internet, not adequately checked for security (ever heard of anybody running an AntiVirus on his Hadoop cluster?) and installing them everywhere.
And worse: not even keeping track of what they installed over time, or how. Because the tools change every year. But what if that developer leaves? You may never be able to get his stuff running properly again!
Fire-and-forget.
I predict that within the next 5 years, we will have a number of security incidents in various major companies. This is industrial espionage heaven. A lot of companies will cover it up, but some leaks will reach mass media, and there will be a major backlash against this hipster way of stringing together random components.
There is a big "Hadoop bubble" growing, that will eventually burst.
In order to get into a trustworthy state, the big data toolchain needs to:
  • Consolidate. There are too many tools for every job. There are even too many tools to manage your too many tools, and frontends for your frontends.
  • Lose weight. Every project depends on way too many other projects, each of which only contributes a tiny fragment for a very specific use case. Get rid of most dependencies!
  • Modularize. If you can't get rid of a dependency, but it is still only of interest to a small group of users, make it an optional extension module that the user only has to install if he needs this particular functionality.
  • Be buildable. Make sure that everybody can build everything from scratch, without having to rely on Maven or Ivy or SBT downloading something automagically in the background. Test your builds offline, with a clean build directory, and document them! Everything must be rebuildable by any sysadmin in a reproducible way, so he can ensure a bug fix is really applied.
  • Distribute. Do not rely on binary downloads from your CDN as sole distribution channel. Instead, encourage and support alternate means of distribution, such as the proper integration in existing and trusted Linux distributions.
  • Maintain compatibility. Successful big data projects will not be fire-and-forget. Eventually, they will need to go into production and then it will be necessary to run them over years. It will be necessary to migrate them to newer, larger clusters. And you must not lose all the data while doing so.
  • Sign. Code needs to be signed, end-of-story.
  • Authenticate. All downloads need to come with a way of checking that the downloaded files agree with what you uploaded (see the sketch after this list).
  • Integrate. The key feature that makes Linux systems so very good as servers is the all-round integrated software management. When you tell the system to update, it covers almost all software on your system - and you can have different update channels available, such as a more conservative "stable/LTS" channel, a channel that gets you the latest version after basic QA, and a channel that gives you the latest versions shortly after their upload to help with QA. It does not matter whether the security fix is in your kernel, web server, library, auxiliary service, extension module or scripting language - it will pull the fix and update you in no time.
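
Concretely, the signing and authentication points amount to publishing signatures and checksums that any downloader can verify with standard tools; a sketch, with illustrative file names:

# Verify a detached GnuPG signature on a release artifact
gpg --verify sometool-1.2.3.tar.gz.asc sometool-1.2.3.tar.gz
# Verify the published checksums
sha256sum -c sometool-1.2.3.tar.gz.sha256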
Now you may argue that Hortonworks, Cloudera, Bigtop etc. already provide packages. Well ... they provide crap. They have something they call a "package", but it fails by any quality standards. Technically, a Wartburg is a car; but not one that would pass today's safety regulations...
For example, they only support Ubuntu 12.04 - a three-year-old Ubuntu is the latest version they support... Furthermore, these packages are all roughly the same. Cloudera eventually handed over their efforts to "the community" (in other words, they gave up on doing it themselves, and hoped that someone else would clean up their mess); and Hortonworks HDP (and maybe Pivotal HD, too) is derived from these efforts, too. Much of what they do is offer some extra documentation and training for the packages they built using Bigtop with minimal effort.
The "spark" .deb packages of Bigtop, for example, are empty. They forgot to include the .jars in the package. Do I really need to give more examples of bad packaging decisions? All bigtop packages now depend on their own version of groovy - for a single script. Instead of rewriting this script in an already required language - or in a way that it would run on the distribution-provided groovy version - they decided to make yet another package, bigtop-groovy.
When I read about Hortonworks and IBM announcing their "Open Data Platform", I could not care less. As far as I can tell, they are only sticking their label on the existing tools anyway. Thus, I'm also not surprised that Cloudera and MapR do not join this rebranding effort - given the low divergence of Hadoop, who would need such a label anyway?
So why does this matter? Essentially, if anything does not work, you are currently toast. Say there is a bug in Hadoop that makes it fail to process your data. Your business is belly-up because of that: no data is processed anymore, you are a vegetable. Who is going to fix it? All these "distributions" are built from the same, messy branch. There are probably only a dozen people around the world who have figured this out well enough to be able to fully build this toolchain. Apparently, none of the "Hadoop" companies are able to support a newer Ubuntu than 12.04 - are you sure they have really understood what they are selling? I have doubts. All the freelancers out there know how to download and use Hadoop. But can they get that business-critical bug fix into the toolchain to get you up and running again? This is much worse than with Linux distributions. They have build daemons - servers that continuously check that they can compile all the software that is there. You need to type two well-documented lines to rebuild a typical Linux package from scratch on your workstation - any experienced developer can follow the manual and get a fix into the package. There are even people who try to recompile complete distributions with a different compiler, to discover compatibility issues early that may arise in the future.
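Those "two well-documented lines" look, for a typical Debian package, something like this (the package name is illustrative):

# Install the build dependencies, then fetch the source and rebuild it
apt-get build-dep somepackage
apt-get source --compile somepackage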
In other words, the "Hadoop distribution" they are selling you is not code they compiled themselves. It is mostly .jar files they downloaded from unsigned, unencrypted, unverified sources on the internet. They have no idea how to rebuild these parts, who compiled that, and how it was built. At most, they know for the very last layer. You can figure out how to recompile the Hadoop .jar. But when doing so, your computer will download a lot of binaries. It will not warn you of that, and they are included in the Hadoop distributions, too.
As is, I cannot recommend entrusting your business data to Hadoop.
It is probably okay to copy the data into HDFS and play with it - in particular if you keep your cluster and development machines isolated with strong firewalls - but be prepared to toss everything and restart from scratch. It's not ready yet for prime time, and as they keep on adding more and more unneeded cruft, it does not look like it will be ready anytime soon.

One more example of the immaturity of the toolchain:
The scala package from scala-lang.org cannot be cleanly installed as an upgrade to the old scala package that already exists in Ubuntu and Debian (and the distributions seem to have given up on compiling a newer Scala due to a stupid Catch-22 build process, making it very hacky to bootstrap scala and sbt compilation).
And the "upstream" package also cannot be easily fixed, because it is not built with standard packaging tools, but with an automagic sbt helper that lacks important functionality (in particular, access to the Replaces: field, or even cleaner: a way of splitting the package properly into components) instead - obviously written by someone with 0 experience in packaging for Ubuntu or Debian; and instead of using the proven tools, he decided to hack some wrapper that tries to automatically do things the wrong way...

I'm convinced that most "big data" projects will turn out to be miserable failures - either due to overmanagement or undermanagement, and due to lack of experience with the data, the tools, and project management... Except that - of course - nobody will be willing to admit these failures. Since all these projects are political projects, they must by definition be successful, even if they never go into production and never earn a single dollar.

26 April, 2015 02:41PM

Joachim Breitner

Fifth place in Codingame World Cup

Last evening, Codingame held a “Programming World Cup” titled “There is no Spoon”. The format is that within four hours, you get to write a program that solves a given task. Submissions are first rated by completeness (there are 13 test inputs that you can check your code against, plus further hidden tests that are only checked after submission) and then by time of submission. You can only submit your code once.

What I like about Codingame is that they support a great number of programming languages in their Web-“IDE”, including Haskell. I had nothing better to do yesterday, so I joined. I was aiming for a good position in the Haskell-specific ranking.

After nearly two hours my code completed all the visible test cases and I submitted. I figured that this was a reasonable time to do so, as it was half-time and there were supposed to be two challenges. It turned out that the first, quite small task, which felt like a warm-up or qualification puzzle, was the first of those two, and that therefore I was done - and indeed the 5th fastest to complete a 100% solution! With less than 5 minutes’ difference to the 3rd, money-winning place - if I had known I had such a chance, I would have started on time...

Having submitted the highest ranked Haskell code, I will get a T-Shirt. I also defended Haskell’s reputation as an efficient programming language, ranked third in the contest, after C++ (rank 1) and Java (rank 2), but before PHP (9), C# (10) and Python (11), listing only those that had a 100% solution.

The task, solving a Bridges puzzle, did not feel like a great fit for Haskell at first. I was juggling Data.Maps around where otherwise I’d simply attach attributes to objects, and a recursive function simulated nothing but a plain loop. But it paid off the moment I had to implement guessing parts of the solution, trying what happens and backtracking when it did not work: with all state in parameters and pure code, it was very simple to get a complete solution.

My code is of course not very polished, and having the main loop live in the IO monad just to be able to print diagnostic commands is a bit ugly.

The next, Lord of the Rings-themed world cup will be on June 27th. Maybe we will see more than 18 Haskell entries then?

26 April, 2015 01:30PM by Joachim Breitner (mail@joachim-breitner.de)

Petter Reinholdtsen

First Jessie based Debian Edu beta release

I am happy to report that the Debian Edu team sent out this announcement today:

the Debian Edu / Skolelinux project is pleased to announce the first
*beta* release of Debian Edu "Jessie" 8.0+edu0~b1, which for the first
time is composed entirely of packages from the current Debian stable
release, Debian 8 "Jessie".

(As most reading this will know, Debian "Jessie" hasn't actually been
released yet. The release is still in progress but should finish
later today ;)

We expect to make a final release of Debian Edu "Jessie" in the coming
weeks, timed with the first point release of Debian Jessie. Upgrades
from this beta release of Debian Edu Jessie to the final release will
be possible and encouraged!

Please report feedback to debian-edu@lists.debian.org and/or submit
bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

Debian Edu - sometimes also known as "Skolelinux" - is a complete
operating system for schools, universities and other
organisations. Through its pre-prepared installation profiles
administrators can install servers, workstations and laptops which
will work in harmony on the school network.  With Debian Edu, the
teachers themselves or their technical support staff can roll out a
complete multi-user, multi-machine study environment within hours or
days.

Debian Edu is already in use at several hundred schools all over the
world, particularly in Germany, Spain and Norway. Installations come
with hundreds of applications pre-installed, plus the whole Debian
archive of thousands of compatible packages within easy reach.

For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual explaining the first steps, such as setting
up a network or adding users.  Please note that the password for the
user you're prompted for during installation must have a length of at
least 5 characters!

== Where to download ==

A multi-architecture CD / usbstick image (649 MiB) for network booting
can be downloaded at the following locations:

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso . 

The SHA1SUM of this image is: 54a524d16246cddd8d2cfd6ea52f2dd78c47ee0a

Alternatively an extended DVD / usbstick image (4.9 GiB) is also
available, with more software included (saving additional download
time):

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso . 

The SHA1SUM of this image is: fb1f1504a490c077a48653898f9d6a461cb3c636

Sources are available from the Debian archive, see
http://ftp.debian.org/debian-cd/8.0.0/source/ for some download
options.

== Debian Edu Jessie manual in seven languages ==

Please see https://wiki.debian.org/DebianEdu/Documentation/Jessie/ for
the English version of the Debian Edu jessie manual.

This manual has been fully translated to German, French, Italian,
Danish, Dutch and Norwegian Bokmål. A partly translated version exists
for Spanish.  See http://maintainer.skolelinux.org/debian-edu-doc/ for
online version of the translated manual.

More information about Debian 8 "Jessie" itself is provided in the
release notes and the installation manual:
- http://www.debian.org/releases/jessie/releasenotes
- http://www.debian.org/releases/jessie/installmanual


== Errata / known problems ==

    It takes up to 15 minutes for a changed hostname to be updated via
    DHCP (#780461).

    The hostname script fails to update LTSP server hostname (#783087). 

Workaround: run update-hostname-from-ip on the client to update the
hostname immediately.

Check https://wiki.debian.org/DebianEdu/Status/Jessie for a possibly
more current and complete list.

== Some more details about Debian Edu 8.0+edu0~b1 Codename Jessie released 2015-04-25 ==

=== Software updates ===

Everything which is new in Debian 8 Jessie, e.g.:

 * Linux kernel 3.16.7-ctk9; for the i386 architecture, support for
   i486 processors has been dropped; oldest supported ones: i586 (like
   Intel Pentium and AMD K5).

 * Desktop environments KDE Plasma Workspaces 4.11.13, GNOME 3.14,
   Xfce 4.12, LXDE 0.5.6
   * new optional desktop environment: MATE 1.8
   * KDE Plasma Workspaces is installed by default; to choose one of
     the others see the manual.
 * the browsers Iceweasel 31 ESR and Chromium 41
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.12
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.1
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 43000 packages available for installation.
 * More information about Debian 8 Jessie is provided in its release
   notes and the installation manual, see the link above.

=== Installation changes ===

    Installations done via PXE now also install firmware automatically
    for the hardware present.

=== Fixed bugs ===

A number of bugs have been fixed in this release; the most noticeable
from a user perspective:

 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (710362)

 * shutdown-at-night now shuts the system down if gdm3 is used (775608). 

=== Sugar desktop removed ===

As the Sugar desktop was removed from Debian Jessie, it is also not
available in Debian Edu jessie.


== About Debian Edu / Skolelinux ==

Debian Edu, also known as Skolelinux, is a Linux distribution based on
Debian providing an out-of-the box environment of a completely
configured school network. Directly after installation a school server
running all services needed for a school network is set up just
waiting for users and machines being added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network. The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages and more are available from the Debian archive, and schools
can choose between KDE, GNOME, LXDE, Xfce and MATE desktop
environment.

== About Debian ==

The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.

== Thanks ==

Thanks to everyone making Debian and Debian Edu / Skolelinux happen!
You rock.

26 April, 2015 12:10PM

Russell Coker

Anti-Systemd People

For the Technical People

This post isn’t really about technology, I’ll cover the technology briefly skip to the next section if you aren’t interested in Linux programming or system administration.

I’ve been using the Systemd init system for a long time, I first tested it in 2010 [1]. I use Systemd on most of my systems that run Debian/Wheezy (which means most of the Linux systems I run which aren’t embedded systems). Currently the only systems where I’m not running Systemd are some systems on which I don’t have console access, while Systemd works reasonably well it wasn’t a standard init system for Debian/Wheezy so I don’t run it everywhere. That said I haven’t had any problems with Systemd in Wheezy, so I might have been too paranoid.

I recently wrote a blog post about systemd, just some basic information on how to use it and why it’s not a big deal [2]. I’ve been playing with Systemd for almost 5 years and using it in production for almost 2 years and it’s performed well. The most serious bug I’ve found in systemd is Bug #774153 which causes a Wheezy->Jessie upgrade to hang until you run “systemctl daemon-reexec” [3].

I know that some people have had problems with systemd, but any piece of significant software will cause problems for some people, there are bugs in all software that is complex enough to be useful. However the fact that it has worked so well for me on so many systems suggests that it’s not going to cause huge problems, it should be covered in the routine testing that is needed for a significant deployment of any new version of a distribution.

I’ve been using Debian for a long time. The transitions from libc4 to libc5 and then libc6 were complex but didn’t break much. The use of devfs in Debian caused some issues and then the removal of devfs caused other issues. The introduction of udev probably caused problems for some people too. Doing major updates to Debian systems isn’t something that is new or which will necessarily cause significant problems, I don’t think that the change to systemd by default compares to changing from a.out binaries to ELF binaries (which required replacing all shared objects and executables).

The Social Issue of the Default Init

Recently the Debian technical committee determined that Systemd was the best choice for the default init system in Debian/Jessie (the next release of Debian which will come out soon). Decisions about which programs should be in the default install are made periodically and it’s usually not a big deal. Even when the choice is between options that directly involve the user (such as the KDE and GNOME desktop environments) it’s not really a big deal because you can just install a non-default option.

One of the strengths of Debian has always been the fact that any Debian Developer (DD) can just add any new package to the archive if they maintain it to a suitable technical standard and if copyright and all other relevant laws are respected. Any DD who doesn’t like any of the current init systems can just package a new one and upload it. Obviously the default option will get more testing, so the non-default options will need more testing by the maintainer. This is particularly difficult for programs that have significant interaction with other parts of the system, I’ve had difficulties with this over the course of 14 years of SE Linux development but I’ve also found that it’s not an impossible problem to solve.

It’s generally accepted that making demands of other people’s volunteer work is a bad thing, which to some extent is a reasonable position. There is a problem when this is taken to extremes, Debian has over 1000 developers who have to work together so sometimes it’s a question of who gets to do the extra work to make the parts of the distribution fit together. The issue of who gets to do the work is often based on what parts are the defaults or most commonly used options. For my work on SE Linux I often have to do a lot of extra work because it’s not part of the default install and I have to make my requests for changes to other packages be as small and simple as possible.

So part of the decision to make Systemd be the default init is essentially a decision to impose slightly more development effort on the people who maintain SysVInit if they are to provide the same level of support – of course given the lack of overall development on SysVInit the level of support provided may decrease. It also means slightly less development effort for the people who maintain Systemd as developers of daemon packages MUST make them work with it. Another part of this issue is the fact that DDs who maintain daemon packages need to maintain init.d scripts (for SysVInit) and systemd scripts, presumably most DDs will have a preference for one init system and do less testing for the other one. Therefore the choice of systemd as the default means that slightly less developer effort will go into init.d scripts. On average this will slightly increase the amount of sysadmin effort that will be required to run systems with SysVInit as the scripts will on average be less well tested. This isn’t going to be a problem in the short term as the current scripts are working reasonably well, but over the course of years bugs may creep in and a proposed solution to this is to have SysVInit scripts generated from systemd config files.

We did have a long debate within Debian about the issue of default init systems and many Debian Developers disagree about this. But there is a big difference between volunteers debating about their work and external people who don’t contribute but believe that they are entitled to tell us what to do. Especially when the non-contributors abuse the people who do the work.

The Crowd Reaction

In a world filled with reasonable people who aren’t assholes there wouldn’t be any more reaction to this than there has been to decisions such as which desktop environment should be the default (which has caused some debate but nothing serious). The issue of which desktop environment (or which version of a desktop environment) to support has a significant effect on users that can’t be avoided, so I could understand people being a little upset about that. But the init system isn’t something that most users will notice - apart from the boot time.

For some reason the men in the Linux community who hate women the most seem to have taken a dislike to systemd. I understand that being “conservative” might mean not wanting changes to software as well as not wanting changes to inequality in society but even so this surprised me. My last blog post about systemd has probably set a personal record for the amount of misogynistic and homophobic abuse I received in the comments. More gender and sexuality related abuse than I usually receive when posting about the issues of gender and sexuality in the context of the FOSS community! For the record this doesn’t bother me, when I get such abuse I’m just going to write more about the topic in question.

While the issue of which init system to use by default in Debian was being discussed we had a lot of hostility from unimportant people who for some reason thought that they might get their way by being abusive and threatening people. As expected that didn’t give the result they desired, but it did result in a small trend towards people who are less concerned about the reactions of users taking on development work related to init systems.

The next thing that they did was to announce a “fork” of Debian. Forking software means maintaining a separate version due to a serious disagreement about how it should be maintained. Doing that requires a significant amount of work in compiling all the source code and testing the results. The sensible option would be to just maintain a separate repository of modified packages as has been done many times before. One of the most well known repositories was the Debian Multimedia repository, it was controversial due to flouting legal issues (the developer produced code that was legal where they lived) and due to confusion among users. But it demonstrated that you can make a repository containing many modified packages. In my work on SE Linux I’ve always had a repository of packages containing changes that haven’t been accepted into Debian, which included changes to SysVInit in about 2001.

The latest news on the fork-Debian front seems to be the call for donations [4]. Apparently most of the money that was spent went to accounting fees and buying a laptop for a developer. The amount of money involved is fairly small, Forbes has an article about how awful people can use “controversy” to get crowd-funding windfalls [5].

MikeeUSA is an evil person who hates systemd [6]. This isn’t any sort of evidence that systemd is great (I’m sure that evil people make reasonable choices about software on occasion). But it is a significant factor in support for non-systemd variants of Debian (and other Linux distributions). Decent people don’t want to be associated with people like MikeeUSA, the fact that the anti-systemd people seem happy to associate with him isn’t going to help their cause.

Conclusion

Forking Debian is not the correct technical solution to any problem you might have with a few packages. Filing bug reports and possibly forking those packages in an external repository is the right thing to do.

Sending homophobic and sexist abuse is going to make you as popular as the GamerGate and GodHatesAmerica.com people. It’s not going to convince anyone to change their mind about technical decisions.

Abusing volunteers who might consider donating some of their time to projects that you like is generally a bad idea. If you abuse them enough you might get them to volunteer less of their time, but the most likely result is that they just don’t volunteer on anything associated with you.

Abusing people who write technical blog posts isn’t going to convince them that they made an error. Abuse is evidence of the absence of technical errors.

26 April, 2015 11:59AM by etbe

Patrick Schoenfeld

Sharing code between puppet providers

So you’ve written that custom puppet type for something and start working on another puppet type in the same module. What if you needed to share some code between these types? Is there a way of code-reuse that works with the plugin sync mechanism?

Yes, there is.

Puppet even has two possible ways of sharing code between types.

Option #1: a shared base provider

A provider in puppet is basically a class associated with a certain (puppet) type, and there can be a lot of providers for a single type (just look at the package type!). It seems quite natural that it’s possible to define a parent class for those providers. So natural, that even the official puppet documentation writes about it.
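
A minimal sketch of that pattern, with hypothetical type, file and helper names (not taken from the original post):

# lib/puppet/provider/mything.rb - the shared parent class
class Puppet::Provider::Mything < Puppet::Provider
  def self.shared_helper
    # ... code used by every provider of the mything type ...
  end
end

# lib/puppet/provider/mything/default.rb - a concrete provider
require 'puppet/provider/mything'

Puppet::Type.type(:mything).provide(:default,
    :parent => Puppet::Provider::Mything) do
  desc "Default provider; inherits shared_helper from the base class."
end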

Option #2: Shared libraries

The second option is a shared library, shipped in a certain namespace in the lib directory of the module; the idea is mostly sketched in feature ticket #14149. Basically, one defines a class in the special Puppetx namespace, using the author and module name in the class name, in order to avoid conflicts with other modules.

require 'puppetx'

module Puppetx::<Author>
  module Puppetx::<Author>::<Modulename>

   ... your helper functions go here ...
  end
end

This example would be saved to

lib/puppet_x/<author>/<modulename>.rb

in your module’s folder and be included in your provider with something like the following:

require File.expand_path(File.join(File.dirname(__FILE__), "..", "..", "..",
  "puppet_x", "<author>", "<modulename>.rb"))

Compatibility with Puppet 4:

In puppet 4 the name of the namespace has changed slightly. It’s now called ‘PuppetX’ instead of ‘Puppetx’ and is stored in a file ‘puppet_x.rb’, which means that the require and the module name itself need to be changed:

require 'puppet_x'

module PuppetX::<authorname>
  module PuppetX::<authorname>::<modulename>

For backward compatibility with puppet 3 you could instead add something like this, according to my co-worker mxey, who knows way more about ruby than I do:

module PuppetX
  module <Author>
    <Modulename> = Puppetx::<Author>::<Modulename>
  end
end

Apart from this you’d need to change the require to be conditional on the puppet version and refer to the module by the aliased version (which is left as an exercise for the reader ;))
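
One possible shape for that conditional require, as an untested sketch:

# Load the namespace helper matching the running puppet version
if Puppet.version.to_f >= 4.0
  require 'puppet_x'
else
  require 'puppetx'
end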

26 April, 2015 08:26AM by Patrick Schönfeld

Bits from Debian

Debian 8.0 Jessie has been released!

There's a new sheriff in town. And her name is Jessie. We're happy to announce the release of Debian 8.0, codenamed Jessie.

Want to install it? Choose your favourite installation media among Blu-ray Discs, DVDs, CDs and USB sticks. Then read the installation manual. For cloud users Debian also offers pre-built OpenStack images ready to use.

Already a happy Debian user and you only want to upgrade? You are just an apt-get dist-upgrade away from Jessie! Find out how by reading the installation guide and the release notes.
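
In its shortest form the upgrade looks like this (the release notes describe the full recommended procedure, including adjusting your APT sources first):

apt-get update
apt-get dist-upgrade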

Do you want to celebrate the release? Share the banner from this blog in your blog or your website!

26 April, 2015 01:15AM by Ana Guerrero Lopez

April 25, 2015

Andrew Cater

Jessie - very nearly released as I write :) :)

I've spent a chunk of the day testing installs of Jessie on various hardware I have here.

About 8 AMD64 installs later, I'm almost done testing many of the variants.

One i586 install done - that's a really old laptop :)

I'm about to re-install the Cubietruck to test an armhf install :)

It's been a really long day - but we're almost there for the final mirror push :)

This is what it's all about - time on IRC, lots of work, lots of fellow feeling: it may only be once every couple of years, but the value is immense.

Also see https://identi.ca/debian  :)

25 April, 2015 11:32PM by Andrew Cater (noreply@blogger.com)

Neil Williams

ARMMP Jessie

Whilst the rest of Debian was very busy actually making the Jessie release, I was trying to prepare the ground to test the armhf installer images.

Cubietruck is fine; it’s a nice install and the documentation on the wiki works nicely. I tried to automate it with preseeding (as it would be something I’d like to test in LAVA - which is new in Jessie), but although I have a jessie-x86 preseed file which works in libvirt, the cubietruck installer doesn’t seem to want to listen to all of the preseeding. Some of the options (notably the wifi firmware screen, hostname and user passwords) don’t go away. x86 wasn’t completely as expected either, though - it works nicely in libvirt and Virtual Machine Manager, but trying to do it manually on the command line with qemu (again, to automate it) raised an awkward grub2 bug where it couldn’t work out where to install, and it then failed to reboot after being installed manually.

iMX.53 was the next target – got a few steps along the road with that one, except that the kernel in the installer isn’t able to see the SATA even though the bootloader can, nor can it see the USB stick. This means that the only media the installer can see is the SD card from which the installer was booted – trying to use that ends badly.

Beaglebone-black required a little bit of tweaking. Don’t forget that the firmware & partition.img together don’t give a lot of space, so creating a second partition on the SD card upon which to put the CD image itself is useful. The Debian ARMMP kernel in Jessie doesn’t see USB, so the option here is actually what I wanted: install from SD card onto eMMC. It’s a 2GB space, but it’s ideal for a default location, leaving the SD card for other testing. I ran out of time to see about preseeding BBB installs to eMMC, but one word of warning: installing to the eMMC doesn’t mean that you have a bootloader on the eMMC - the SD card is still needed to reboot, unless that is fixed manually post-install. (Haven’t got that working yet either.)

Speaking of time, this post is more brief on the detail than I would like for a couple of reasons:

  1. I haven’t actually got things working well enough (clearly from some of the end states above)
  2. the new code to test these in LAVA isn’t ready yet either

I did mean to test on an iMX6.quad Wandboard – will see about that, hopefully, later.

In general, the armhf SD card support is the best start for installing Jessie on a range of ARMv7 hardware. (I haven’t got as far as ARMv8 yet either.) However, when I describe that as “the best start”, it is lacking in many areas – typically due to restrictions in the u-boot support, mainline kernel support and general board variability. It’s a lot easier than it used to be but ARMv7 boards are some distance from a “single installer setup” or even a single set of installer documentation, for any distribution.

25 April, 2015 10:31PM by Neil Williams

Dominique Dumont

The #newinjessie game: automatic configuration upgrade and other stuff in Debian/Jessie

Here are my contributions for the #newinjessie game, i.e. the new stuff I’ve contributed to the Jessie release of Debian.

See you in 2 years for the Stretch release.

All the best


25 April, 2015 07:21PM by dod

Patrick Schoenfeld

WordPress(ed) again.

I just migrated my blog(s) to WordPress.

Just recently I decided to put more time into blogging again. I wasn’t entirely happy with Movable Type anymore, especially since the last update broke my customized theme and I struggled with the installation of another theme, which basically was just never found, no matter which path I put it into.

What I wanted is just more blogging, not all that technical stuff. And since the Movable Type makers also seem to have gone crazy (their “Getting started” site tells users to head over to movabletype.com to get a $999 license), I decided to get back to WordPress.

There were reasons why I hadn’t chosen WordPress back when I migrated from Blogger to MT, but one has to say that things have moved a lot since then. And WordPress is as easy as it can be and has a prospering community, something I cannot say about Movable Type.

The migration went okay, although there were some oddities in the blog entries exported by MT (e.g. datetime strings with EM and FM at the EOL) and I needed to figure out how the multisite feature in WordPress works. But now I have exactly what I want.

25 April, 2015 02:55PM by Patrick Schönfeld

Jonathan McCrohan

New packages in Debian 8.0 Jessie

[I liked the #newinwheezy effort for the last Debian release, so I tipped Mika off about it again this time]

A short post about what is #newinjessie, Debian's new 8.0 Jessie release. See Mika's debian-devel post for more information.

For this release cycle, I have uploaded two new packages:

  • dtv-scan-tables - Digital Video Broadcasting (DVB) initial scan files
    • This is a new package which splits out the DVB initial scan files from linux-dvb-apps (see below). This is designed to make updating of the scan files easier.
  • git-remote-hg - bidirectional bridge between Git and Mercurial
    • This is a new package which allows a Git client to read and write to Mercurial repositories. Previously provided as an example in the Git package until removed by upstream.

I've also have taken over as (co-)maintainer of some existing packages during this release cycle:

  • linuxtv-dvb-apps - Digital Video Broadcasting (DVB) applications
    • New upstream snapshot, split out DVB initial scan files into the dtv-scan-tables package (see above), enable hardened buildflags.

I've updated nearly all of my existing packages during this release cycle:

  • dhex - ncurses based hex editor with diff mode
    • New upstream release, various packaging fixes.
  • lcd4linux - Grabs information and displays it on an external lcd
    • New upstream snapshot, add systemd service file, install dummy config file which resolves issues on certain upgrades.
  • libconfig - Parsing and manipulation of structured configuration files
    • New upstream release, various packaging fixes.
  • nyancat - nyancat is a program to display an animated poptart cat in your terminal
    • New upstream release; Nyancat now resizes to fit the terminal!
  • transmission-remote-cli - ncurses interface for the Transmission BitTorrent daemon
    • New upstream release, support new versions of Transmission, various bug and packaging fixes.
  • wavemon - Wireless Device Monitoring Application
    • New upstream release, various bug and packaging fixes, fix FTBFS on arm64.

The following package received no updates during this release cycle due to a combination of no upstream releases and the existing package already being in good shape:

  • figlet - Make large character ASCII banners out of ordinary text

I hope you find them useful. Enjoy!

25 April, 2015 02:02PM by jmccrohan

April 24, 2015

Jonathan Wiltshire

What to expect on Jessie release day

Release day is a nerve-wracking time for several teams. Happily we’ve done it a few times now*, so we have a rough idea of how the process should go.

There have been some preparations going on in advance:

  1. Last week we imposed a “quiet period” on migrations. That’s about as frozen as we can get; it means that even RC bugs need to be exceptional if they aren’t to be deferred to the first point release. Only late-breaking documentation (like the install guide) was accepted.
  2. The security team opened jessie-updates for business, carried out a test upload, and have since released several packages so that those updates are available immediately after release.
  3. The debian-installer team made a final release.
  4. Final debtags data was updated.
  5. Yesterday the testing migration script britney and other automatic maintenance scripts that the release team run were disabled for the duration.
  6. We made final preparations of things that can be done in advance, such as drafting the publicity announcements. These have to be done in advance so translators get a chance to do their work overnight (the announcement was signed off at 15:30 UTC today; translations are starting to arrive right now!).

The following checklist makes the release actually happen:

  1. Once dinstall is completed at 07:52, archive maintenance is suspended – the FTP masters will do manual work for now.
  2. Very large quantities of coffee will be prepared all across Europe.
  3. Release managers carry out consistency checks of the Jessie index files, and confirm to FTP masters that there are no last-minute changes to be made. RMs get a break to make more coffee.
  4. While they’re away, FTP masters begin the process of relabelling Wheezy as oldstable and Jessie as stable. If an installer needs to be, er, installed as well, that happens at this point. Old builds of the installer are removed.
  5. A new suite for Stretch (Debian 9) is initialised and populated, and labelled testing.
  6. Release managers check that the newly-generated suite index files look correct and consistent with the checks made earlier in the day. Everything is signed off – both in logistical and cryptographic terms.
  7. FTP masters trigger a push of all the changes to the CD-building mirror so that production of images can begin. As each image is completed, several volunteers download and test it in as many ways as they can dream up (booting and choosing different paths through the installer to check integrity).
  8. Finally a full mirror push is triggered by FTP masters, and the finished CD images are published.
  9. Announcements are sent by the publicity team to various places, and by the release team to the developers at large.
  10. Archive maintenance scripts are re-enabled.
  11. The release team take a break for a couple of weeks before getting back into the next cycle.

During the day much of the coordination happens in the #debian-release, #debian-ftp and #debian-cd IRC channels. You’re welcome to follow along if you’re interested in the process, although we ask that you are read-only while people are still concentrating (during the Squeeze release, a steady stream of people turned up to say “congratulations!” at the most critical junctures; it’s not particularly helpful while the process is still going on). The publicity team will be tweeting and denting progress as it happens, so that makes a good overview too.

If everything goes to plan, enjoy the parties!

(Disclaimer: inaccuracies are possible since so many people are involved and there’s a lot to happen in each step; all errors and omissions are entirely mine.)

* yes yes, I am being particularly English today.


What to expect on Jessie release day is a post from: jwiltshire.org.uk | Flattr

24 April, 2015 09:38PM by Jon

hackergotchi for Michael Prokop

Michael Prokop

The #newinjessie game: new forensic packages in Debian/jessie

Repeating what I did for the last Debian release with the #newinwheezy game it’s time for the #newinjessie game:

Debian/jessie AKA Debian 8.0 includes a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team which are new in the Debian/jessie stable release compared to Debian/wheezy (ignoring wheezy-backports):

  • ext4magic: recover deleted files from ext3 or ext4 partitions
  • libbfio1: Library to provide basic input/output abstraction
  • lime-forensics-dkms: kernel module for acquiring memory dumps
  • mac-robber: collects data about allocated files in mounted filesystems
  • pff-tools: library and tools to access various MS Outlook file formats and to export PAB, PST and OST files
  • ssdeep: Recursive piecewise hashing tool (note: was present in squeeze but not in wheezy)
  • volatility: advanced memory forensics framework
  • yara: helps to identify and classify malware

Join the #newinjessie game and present packages which are new in Debian/jessie.

24 April, 2015 02:54PM by mika

Richard Hartmann

Release Critical Bug report for Week 17

At the current rate, Jessie should release... tomorrow! :)

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1040 (Including 150 bugs affecting key packages)
    • Affecting Jessie: 66 (key packages: 49) That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting Jessie and unstable: 57 (key packages: 43) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 14 bugs are tagged 'patch'. (key packages: 10) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 3 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 40 bugs are neither tagged patch, nor marked done. (key packages: 33) Help make a first step towards resolution!
      • Affecting Jessie only: 9 (key packages: 6) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team. (key packages: 0)
        • 9 bugs are in packages that are not unblocked. (key packages: 6)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze       Wheezy         Jessie
43    284 (213+71)  468 (332+136)  319 (240+79)
44    261 (201+60)  408 (265+143)  274 (224+50)
45    261 (205+56)  425 (291+134)  295 (229+66)
46    271 (200+71)  401 (258+143)  427 (313+114)
47    283 (209+74)  366 (221+145)  342 (260+82)
48    256 (177+79)  378 (230+148)  274 (189+85)
49    256 (180+76)  360 (216+155)  226 (147+79)
50    204 (148+56)  339 (195+144)  ???
51    178 (124+54)  323 (190+133)  189 (134+55)
52    115 (78+37)   289 (190+99)   147 (112+35)
1     93 (60+33)    287 (171+116)  140 (104+36)
2     82 (46+36)    271 (162+109)  157 (124+33)
3     25 (15+10)    249 (165+84)   172 (128+44)
4     14 (8+6)      244 (176+68)   187 (132+55)
5     2 (0+2)       224 (132+92)   175 (124+51)
6     release!      212 (129+83)   161 (109+52)
7     release+1     194 (128+66)   147 (106+41)
8     release+2     206 (144+62)   147 (96+51)
9     release+3     174 (105+69)   152 (101+51)
10    release+4     120 (72+48)    112 (82+30)
11    release+5     115 (74+41)    97 (68+29)
12    release+6     93 (47+46)     87 (71+16)
13    release+7     50 (24+26)     97 (77+20)
14    release+8     51 (32+19)     ???
15    release+9     39 (32+7)      82 (49+17)
16    release+10    20 (12+8)      53 (49+4)
17    release+11    24 (19+5)      66 (57+9)
18    release+12    2 (2+0)

Graphical overview of bug stats thanks to azhag:

24 April, 2015 01:08PM by Richard 'RichiH' Hartmann

Mike Gabriel

Arctica Project - New Remote (Desktop) Computing Project

This is to announce a new upcoming FLOSS project addressing the remote (desktop) computing realm in the GNU/Linux (and possibly other *nixes) server world.

The new project's name will be The Arctica Project [1, 2].

In the Arctica Project, 5-6 developers from all over the world have come together to revisit the field of remote (desktop) computing and write a remote computing framework from scratch.

At the moment, there are not many solutions around that (a) are 100% Free Software, (b) work acceptably for most users and (c) also address large-scale deployments and enterprise-grade customers. To be honest, IMHO there is actually no such solution at all.

The Arctica Project attempts to change this sustainably; and we are starting with it NOW.

If anyone reads this and gets curious, please join us on IRC and get in touch! If you feel like a potential contributor, we happily invite you to become one. We are open to your input. Please share it. (Thanks!)

light+love,
Mike

[1] https://github.com/ArcticaProject
[2] IRC channel #arctica on Freenode

24 April, 2015 12:30PM by sunweaver

Jonathan Wiltshire

Jessie Countdown: 1

One further contributor became a non-uploading Debian Developer in 2015, joining 11 others who have achieved that status since its introduction in 2010.

Non-uploading developers are really important to our project. They’re a recognition of the invaluable work that contributors do which doesn’t involve packaging, for which they may not have the skill or inclination. Nevertheless we would be much weaker without those tireless contributions in the community, translations, bug handling – the list is much longer and covers every aspect of Debian life.

If you’re reading this and thinking “I have a sustained involvement in some aspect of Debian, and make regular contributions”, please consider chatting to those you work with and see if you’re ready to be recognised too.

(source: https://nm.debian.org/public/people#dd)


Jessie Countdown: 1 is a post from: jwiltshire.org.uk | Flattr

24 April, 2015 09:00AM by Jon

April 23, 2015

Zlatan Todorić

Internet freedom

Why did the internet succeed, and why is it growing at an incredible pace? Because there were no politicians, no lawyers, no economists or managers. Period.

23 April, 2015 09:48PM by Zlatan Todorić

hackergotchi for Jonathan Dowland

Jonathan Dowland

Deterministic Doom

What happens if you remove randomness from Doom?

For some reason, recently I have been thinking about Doom. This evening I wanted to explore some behaviour in an old version of Doom, and to do so I hex-edited the binary and replaced the random number lookup table with static values.

Rather than consume system randomness, Doom has a fixed 256-value random number table from which numbers are pulled by aspects of the game logic. By replacing the whole table with a constant value, you essentially make the game entirely deterministic.

What does it play like? I tried two values, 0x00 and 0xFF. With either value, the screen "melt" effect that is used at the end of levels is replaced with a flat vertical wipe: the randomness was used to offset each column. Monsters do not make different death noises at different times; only one is played for each category of monster. The bullet-based (hitscan) weapons have no spread at all: the shotgun becomes like a sniper rifle, and the chain-gun is likewise perfectly accurate. You'd think this would make the super-shotgun a pretty lethal weapon, but it seems to have been nerfed: the spread pattern is integral to its function.

With 0x00, monsters never make their idle noises (breathing etc.). On the other hand, with 0xFF, they always do: so often that each sample collides with the previous one, and you just get a sort of monster drone. This is quite overwhelming with even a small pack of monsters.

With 0xFF, any strobing sectors (flickering lights etc.) are static. However, with 0x00, they strobe like crazy.

With 0x00, monsters seem to choose to attack much more frequently than usual. Damage seems to be worst-case. The most damaging floor type ("super hellslime"/20%) can hurt you even if you are wearing a radiation suit: There was a very low chance of it hurting whilst wearing the suit (~2.6%) each time the game checked; this is rounded up to 100%.

Various other aspects of the game become weird. A monster may always choose to use a ranged attack, regardless of how close you are. They might give up pursuing you. I've seen them walk aimlessly in circles if they are obstructed by another thing. The chance of monster in-fighting becomes either zero or a certainty. The player is either mute, or cries in pain whenever he's hurt.

If you want to try this yourself, the easiest way is to hack the m_random.c file in the source, but you can also hex-edit a binary. Look for a 256-byte sequence beginning ['0x0', '0x8', '0x6d', '0xdc', '0xde', '0xf1'] and ending ['0x78', '0xa3', '0xec', '0xf9'].
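
For the binary route, here is a minimal Python sketch of the idea (the file names are placeholders; the byte signatures are the ones quoted above):

start = b'\x00\x08\x6d\xdc\xde\xf1'  # first bytes of the RNG table
end = b'\x78\xa3\xec\xf9'            # last bytes of the RNG table

data = bytearray(open('doom', 'rb').read())
i = data.find(start)
assert i != -1 and data[i+252:i+256] == end, 'RNG table not found'
data[i:i+256] = b'\xff' * 256        # or b'\x00' * 256 for the other behaviour
open('doom-deterministic', 'wb').write(data)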

23 April, 2015 08:38PM

Jonathan Wiltshire

Jessie Countdown: 2

Two years, give or take a few weeks, is roughly the time required to prepare a new stable release since Sarge in 2005 (source: http://en.wikipedia.org/wiki/Debian#Release_timeline). A Spring release has also been a pretty stable pattern during that time.

Debian often has a reputation for being very slow to release (in terms of cadence), or for freezes to hold up development for an excessive length of time, but the current pattern does lend itself well to two attributes: predictability and reliability (in terms of high-quality releases). It’s true that this means shipping with slightly older versions of major software, but you can sleep well knowing they’re thoroughly tested – and that’s really important in Debian’s core market, which tends to be highly-available services rather than desktop machines.

There’s always backports.


Jessie Countdown: 2 is a post from: jwiltshire.org.uk | Flattr

23 April, 2015 09:31AM by Jon

hackergotchi for Steve McIntyre

Steve McIntyre

Ready for Jessie! (aka bits from the debian-cd team)

I'm happy with the progress we've made for debian-installer and related packages for the Jessie release. We're going to end up with a release that's better in a number of ways than what we've had before.

1. Big EFI enhancements

I've already blogged a lot about the stuff I've worked on here, so I'll just summarise for now some of the improvements we've got over Wheezy.

  1. A fix for systems that (badly) dual-boot in EFI and BIOS mode such that after installing Debian you wouldn't get a sensible choice of which OS to boot (#763127).
  2. A workaround for broken EFI implementations: an option to install the grub-efi bootloader to the removable media path in case the system firmware does not load grub-efi from the correctly registered boot path. (#746662).
  3. Addition of 32-bit EFI to our i386 installation images, to support both some older systems and some brand new systems that need it. This has unfortunately stopped those i386 images from working on some of the oldest Intel-based Apple Mac machines, so we've added an extra Mac-only flavour of i386 netinst without EFI in case people need it.
  4. Significantly better support for Intel-based Apple Macs in general, to the point that installing Debian on lots of these machines should now be much easier and doesn't depend on extra third-party software such as rEFIt or rEFInd. I've massively updated the Debian wiki page at https://wiki.debian.org/MacMiniIntel with more details for specific models of Mac Mini. I'm hoping to provide similarly updated information for Mac laptops too - see below!
    Massive thanks to the lovely folks at Mythic Beasts for providing me with a range of machines to test with here!
  5. Support for mixed-mode EFI systems like the Intel Bay Trail: a 64-bit platform crippled with a 32-bit EFI firmware. I believe Jessie will be the first release of a Linux distribution to support these machines fully!

2. Openstack images

In collaboration with Thomas Goirand, we now have amd64 Openstack Jessie image builds being produced every week, and there will be an official image made to go with the Jessie release too. See http://cdimage.debian.org/cdimage/openstack/testing/ for the current image.

3. Debian-live images

As of a few weeks ago, we've also started doing weekly builds of live Debian images for amd64 and i386, using software and configuration from the Live Systems Project. See http://cdimage.debian.org/cdimage/weekly-live-builds/ for the current weekly images. These will be produced in sync with the Jessie release too.

4. New architectures

We've added installation media for the two new architectures added in Jessie: arm64 and ppc64el.

I'm particularly proud of the arm64 images. With help from Ian Campbell, Leif Lindholm and Thomas Schmitt I've managed to make EFI-compatible CD images in an isohybrid design that means they should also work when copied directly to a USB stick. Hopefully this will help this new platform to become just as easy to install as any x86 PC is today.

Hopefully post-Jessie we'll even be able to start providing live images and openstack images for more architectures too.

More help needed yet!

First of all, we're planning to release Jessie as Debian 8 this coming Saturday (25th April). Help with testing the installation and live images as they're produced would be lovely - please join us on the #debian-cd channel on irc.debian.org and we'll co-ordinate there.

Secondly, there's an almost endless variety of machines out there. I've updated information about how Debian installation works on some of the more awkward Mac Mini machines, but we don't yet cover all the bases even there. It would be great to update the information about other machines such as the Macbook range as well - currently a lot of these pages are well out of date and won't be helpful for new users. Please test on machines if you have them, and help improve Debian's documentation here.

23 April, 2015 12:10AM

April 22, 2015

Jonathan Wiltshire

Jessie Countdown: 3

Three is the new number for continuing oldstable security support: thanks to the efforts of those behind the Long-Term Support initiative, the full stable lifecycle of a release has therefore become five years (source: https://www.debian.org/News/2014/20140424).

Where do those numbers come from? 2 years as stable, 1 year overlap support as oldstable, plus 2 new years under LTS = 5 years.

Having been successful for Squeeze (6), LTS is widely anticipated to follow a similar pattern for Wheezy (7) and Jessie (8) for the same lifetimes. Some reduction in coverage is unavoidable (for example, Squeeze only covers the most popular amd64 and i386 architectures), but for a majority of users this reduces the need to follow a three-year upgrade cycle – invaluable when you are managing hundreds or thousands of servers and can’t achieve that in a single year.


Jessie Countdown: 3 is a post from: jwiltshire.org.uk | Flattr

22 April, 2015 05:54PM by Jon

hackergotchi for Lars Wirzenius

Lars Wirzenius

Jessie release party in Helsinki

There will be a gathering of Debian people to celebrate the release of jessie this Saturday in Helsinki. For details, see the wiki page. Welcome, everyone.

22 April, 2015 04:49PM

hackergotchi for Riku Voipio

Riku Voipio

Fastest way to change running dtb

Tollef posted about using BeagleBone Black for temperature monitoring. There was a passage about patching the DTB (device tree) file:
... This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then running make dtbs.
There are easier ways. For example, you can get the current device tree file generated from /proc:

apt-get install device-tree-compiler
dtc -I fs -O dts -o current.dts /proc/device-tree/
(Why /proc and not /sys? Because the device tree interface predates /sys.) Now you can just modify the file and build the dtb again, and install it back to wherever the bootloader reads the dtb from:

vim current.dts
dtc -I dts -O dtb -o new.dtb current.dts
The alternative, of course, is to build a brand new mainline kernel and use the dynamic device tree code now available.

22 April, 2015 12:51PM by Riku Voipio (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.5.000.0

A new major version 5.000 of Armadillo was released by Conrad a couple of days ago. Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

This version brings several new functions for sparse matrices, and automagically switches to 64-bit matrix indices in C++11 mode. See below for a short description of all the major changes based on the NEWS.Rd file.

This version is now on CRAN, as well as in Debian. The integration into CRAN was delayed by a few days as my testing had a shortcoming. We run full reverse-dependency checks against all 115 CRAN packages depending on RcppArmadillo, and we even shipped full pre-releases 0.4.999.1 and 0.5.000.0 to the drat repo of the RcppCore GitHub organization (which was described in the previous release post). But a minor flaw in my setup made it miss how the change in indexing affected the packages dfcomb and dfmta. My thanks to their maintainer Marie-Karelle Riviere for providing updates to her packages, permitting this release to get onto CRAN. The testing process has been tightened and this should not happen again.

Changes in RcppArmadillo version 0.5.000.0 (2015-04-12)

  • Upgraded to Armadillo release Version 5.000 ("Ankle Biter")

    • added spsolve() for solving sparse systems of linear equations

    • added svds() for singular value decomposition of sparse matrices

    • added nonzeros() for extracting non-zero values from matrices

    • added handling of diagonal views by sparse matrices

    • expanded repmat() to handle sparse matrices

    • expanded join_rows() and join_cols() to handle sparse matrices

    • sort_index() and stable_sort_index() have been placed in the delayed operations framework for increased efficiency

    • use of 64 bit integers is automatically enabled when using a C++11 compiler

    • workaround for a bug in recent releases of Apple Xcode

    • workaround for a bug in LAPACK 3.5

Changes in RcppArmadillo version 0.4.999.1.0 (2015-04-04)

  • Upgraded to Armadillo release preview 4.999.1

  • Non-CRAN test release

Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 April, 2015 11:46AM

hackergotchi for Tollef Fog Heen

Tollef Fog Heen

Temperature monitoring using a Beaglebone Black and 1-wire

I've had a half-broken temperature monitoring setup at home for quite some time. It started out with a Atom-based NAS, a USB-serial adapter and a passive 1-wire adapter. It sometimes worked, then stopped working, then started when poked with a stick. Later, the NAS was moved under the stairs and I put a Beaglebone Black in its old place. The temperature monitoring thereafter never really worked, but I didn't have the time to fix it. Over the last few days, I've managed to get it working again, of course by replacing nearly all the existing components.

I'm using the DS18B20 sensors. They're about USD 1 a piece on Ebay (when buying small quantities) and seems to work quite ok.

My first task was to address the reliability problems: Dropouts and really poor performance. I thought the passive adapter was problematic, in particular with the wire lengths I'm using and I therefore wanted to replace it with something else. The BBB has GPIO support, and various blog posts suggested using that. However, I'm running Debian on my BBB which doesn't have support for DTB overrides, so I needed to patch the kernel DTB. (Apparently, DTB overrides are landing upstream, but obviously not in time for Jessie.)

I've never even looked at Device Tree before, but the structure was reasonably simple and with a sample override from bonebrews it was easy enough to come up with my patch. This uses pin 11 (yes, 11, not 13, read the bonebrews article for explanation on the numbering) on the P8 block. This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then running make dtbs.

Once this works, you need to compile the w1-gpio kernel module, since Debian hasn't yet enabled that. Run make menuconfig, find it under "Device drivers", "1-wire", "1-wire bus master", build it as a module. I then had to build a full kernel to get the symversions right, then build the modules. I think there is or should be an easier way to do that, but as I cross-built it on a fast AMD64 machine, I didn't investigate too much.

Insmod-ing w1-gpio then works, but for me it failed to detect any sensors. Reading the data sheet, it looked like a pull-up resistor on the data line was needed. I had enabled the internal pull-up, but apparently that wasn't enough, so I added a 4.7 kOhm resistor between pin 3 (VDD_3V3) on P9 and pin 11 (GPIO_45) on P8. With that in place, my sensors showed up in /sys/bus/w1/devices and you can read the values using cat.
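
Reading them programmatically is simple too; here is a rough Python sketch (assuming the usual w1_slave layout of a CRC status line ending in YES, followed by a line containing t=<millidegrees>; 28- is the DS18B20 family code):

import glob

# each DS18B20 appears as /sys/bus/w1/devices/28-<serial>/w1_slave
for path in glob.glob('/sys/bus/w1/devices/28-*/w1_slave'):
    lines = open(path).read().splitlines()
    if lines[0].endswith('YES'):               # CRC check passed
        millic = int(lines[1].split('t=')[1])  # milli-degrees Celsius
        print(path, millic / 1000.0)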

In my case, I wanted the data to go into collectd and then on to graphite. I first tried using an Exec plugin, but never got it to work properly. Using a Python plugin worked much better, and my graphite installation is now showing me temperatures.

Now I just need to add more probes around the house.

The most useful references were

In addition, various searches for DS18B20 pinout and similar, of course.

22 April, 2015 08:15AM

John Goerzen

Today I FLEW A PLANE

DJI00694

“For once you have tasted flight,
You will walk the earth with your eyes turned skyward;
For there you have been,
And there you long to return.”

– Leonardo da Vinci

There is something of a magic to flight, to piloting. I remember the first flight I ever took, after years of dreaming of flying in a plane: my grandma had bought me a plane ticket. In one of the early morning flights, I witnessed a sunrise above cumulus clouds. Although I was just 10 or so at the time, that still is a most beautiful image seared into my memory.

I have become “meh” about commercial flight over the years. The drive to the airport, the security lines, the lack of scenery at 35,000 feet. And yet, there is much more to flight than that. When I purchased what was essentially a flying camera, I saw a whole new dimension of the earth’s amazing beauty. All the photos in this post, in fact, are ones I took. I then got a RC airplane, because flying the quadcopter was really way too easy.

DJI00620

“It’s wonderful to climb the liquid mountains of the sky.
Behind me and before me is God, and I have no fears.”

– Helen Keller

Start talking to pilots, and you notice a remarkable thing: this group of people that tends to be cool and logical, methodical and precise, suddenly finds themselves using language almost spiritual. Many have told me that being a pilot brings home how much all humans have in common, the unifying fact of sharing this beautiful planet together. Many volunteer with organizations such as Angel Flight. And having been up in small planes a few times, I start to glimpse this. Flying over my home at 1000′ up, or from lake to lake in Seattle with a better view than the Space Needle, seeing places familiar and new, but from a new perspective, drives home again and again the beauty of our world, the sheer goodness of it, and the wonderful color of the humanity that inhabits it.

DJI00120

“The air up there in the clouds is very pure and fine, bracing and delicious.

And why shouldn’t it be?

It is the same the angels breathe.”

– Mark Twain

The view from 1000 feet, or 3000, is often so much more spectacular than the view from 35,000 ft as you get on a commercial flight. The flexibility is too; there are airports all over the country that smaller planes can use which the airlines never touch.

Here is one incredible video from a guy that is slightly crazy but does ground-skimming, flying just a few feet off the ground: (try skipping to 9:36)

So what comes next is something I blame slightly on my dad and younger brother. My dad helped get me interested in photography as a child, and that interest has stuck. It’s what caused me to get into quadcopters (“a flying camera for less than the price of a nice lens!”). And my younger brother started mentioning airplanes to me last year for some reason, as if he was just trying to get me interested. Eventually, it worked. I started talking to the pilots I know (I know quite a few; there seems to be a substantial overlap between amateur radio and pilots). I started researching planes, flight, and especially safety — the most important factor.

And eventually I decided I wanted to be a pilot. I’ve been studying feverishly, carrying around textbooks and notebooks in the car, around the house, and even on a plane. There is a lot to learn.

And today, I took my first flight with a flight instructor. Today I actually flew a plane for a while. Wow! There is nothing quite like that experience. Seeing a part of the world I am familiar with from a new perspective, and then actually controlling this amazing machine — I really fail to find the words to describe it. I have put in many hours of study already, and there will be many more studying and flying, but it is absolutely worth it.

Here is one final video about one of the most unique places you can fly to in Kansas.

And a blog with lots of photos of a flight to Beaumont called “Horse on the runway”.

22 April, 2015 02:53AM by John Goerzen

April 21, 2015

Jonathan Wiltshire

Tube in a Day

For some reason, I’ve decided that gallivanting around the London Underground for the day one Saturday is a fine way to raise money for a local children’s hospice. You’d make my day by supporting us – we aren’t deducting expenses from pledges, so there’s no penalty to the charity for our travel.

We’re going to run a modified version of the Guinness-recognised Tube Challenge starting about 05:15 (modified to allow for unavoidable maintenance works; we don’t have the luxury of being able to pick a day when that’s not going to be a problem) and likely finishing about midnight.

I’m also interested to hear ideas for some kind of micro-blogging platform that we can update on the move, preferably presenting in stream format and with an Android-friendly site/app that can cope with uploading a photo smoothly. Not Twitter or Facebook; it’ll probably be a short-lived account. I don’t want to part with personal information and I want to be able to throw it away afterwards. Suggestions?


Tube in a Day is a post from: jwiltshire.org.uk | Flattr

21 April, 2015 09:27PM by Jon

hackergotchi for Julien Danjou

Julien Danjou

Gnocchi 1.0: storing metrics and resources at scale

A few months ago, I wrote a long post about what I called back then the "Gnocchi experiment". Time passed and we – me and the rest of the Gnocchi team – continued to work on that project, finalizing it.

It's with great pleasure that we are going to release our first 1.0 version this month, roughly at the same time as the integrated OpenStack projects release their Kilo milestone. The first release candidate, numbered 1.0.0rc1, has been released this morning!

The problem to solve

Before I dive into Gnocchi details, it's important to have a good view of what problems Gnocchi is trying to solve.

Most IT infrastructures out there consist of a set of resources. These resources have properties: some of them are simple attributes, whereas others are measurable quantities (also known as metrics).

And in this context, cloud infrastructures are no exception. We talk about instances, volumes, networks… which are all different kinds of resources. The problem arising with the cloud trend is storing all this data at scale and being able to query it later, for whatever usage.

What Gnocchi provides is a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes.

Gnocchi is fully documented and the documentation is available online. We are the first OpenStack project to require that patches include documentation. We want to raise the bar, so we took a stand on that. That's part of our policy, the same way it's part of the OpenStack policy to require unit tests.

I'm not going to paraphrase the whole Gnocchi documentation, which covers things like installation (super easy), but I'll guide you through some basics of the features provided by the REST API. I will show you some examples so you can have a better understanding of what you could leverage using Gnocchi!

Handling metrics

Gnocchi provides a full REST API to manipulate time-series that are called metrics. You can easily create a metric using a simple HTTP request:

POST /v1/metric HTTP/1.1
Content-Type: application/json

{
  "archive_policy_name": "low"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a
Content-Type: application/json; charset=UTF-8

{
  "archive_policy": {
    "aggregation_methods": [
      "std", "sum", "mean", "count", "max", "median", "min", "95pct"
    ],
    "back_window": 0,
    "definition": [
      {
        "granularity": "0:00:01",
        "points": 3600,
        "timespan": "1:00:00"
      },
      {
        "granularity": "0:30:00",
        "points": 48,
        "timespan": "1 day, 0:00:00"
      }
    ],
    "name": "low"
  },
  "created_by_project_id": "e8afeeb3-4ae6-4888-96f8-2fae69d24c01",
  "created_by_user_id": "c10829c6-48e2-4d14-ac2b-bfba3b17216a",
  "id": "387101dc-e4b1-4602-8f40-e7be9f0ed46a",
  "name": null,
  "resource_id": null
}


The archive_policy_name parameter defines how the measures that are being sent are going to be aggregated. You can also define archive policies using the API and specify what kind of aggregation period and granularity you want. In that case, the low archive policy keeps 1 hour of data aggregated over 1 second and 1 day of data aggregated to 30 minutes. The functions used for aggregation are the usual mathematical functions: standard deviation, minimum, maximum… and even the 95th percentile. All of that is obviously customizable and you can create your own archive policies.

If you don't want to specify the archive policy manually for each metric, you can also create archive policy rules, which will apply a specific archive policy based on the metric name, e.g. metrics matching disk.* will be high-resolution metrics so they will use the high archive policy.

It's also worth noting that Gnocchi is precise up to the nanosecond and is not tied to the current time. You can manipulate and inject measures that are years old and precise to the nanosecond. You can also inject points with old timestamps (i.e. old compared to the most recent one in the timeseries) with an archive policy allowing it (see the back_window parameter).

It's then possible to send measures to this metric:

POST /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
Content-Type: application/json

[
  {"timestamp": "2014-10-06T14:33:57", "value": 43.1},
  {"timestamp": "2014-10-06T14:34:12", "value": 12},
  {"timestamp": "2014-10-06T14:34:20", "value": 2}
]

HTTP/1.1 204 No Content


These measures are synchronously aggregated and stored into the configured storage backend. Our most scalable storage drivers for now are based on either Swift or Ceph, which are both scalable object storage systems.

It's then possible to retrieve these values:

GET /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8

[
  ["2014-10-06T14:30:00.000000Z", 1800.0, 19.033333333333335],
  ["2014-10-06T14:33:57.000000Z", 1.0, 43.1],
  ["2014-10-06T14:34:12.000000Z", 1.0, 12.0],
  ["2014-10-06T14:34:20.000000Z", 1.0, 2.0]
]


As older Ceilometer users might notice here, metrics only store points and values, nothing fancy such as metadata anymore.

By default, values eagerly aggregated using mean are returned for all supported granularities. You can obviously specify a time range or a different aggregation function using the aggregation, start and stop query parameters.

Gnocchi also supports doing aggregation across aggregated metrics:

GET /v1/aggregation/metric?metric=65071775-52a8-4d2e-abb3-1377c2fe5c55&metric=9ccdd0d6-f56a-4bba-93dc-154980b6e69a&start=2014-10-06T14:34&aggregation=mean HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8

[
  ["2014-10-06T14:34:12.000000Z", 1.0, 12.25],
  ["2014-10-06T14:34:20.000000Z", 1.0, 11.6]
]


This computes the mean of the means for the metrics 65071775-52a8-4d2e-abb3-1377c2fe5c55 and 9ccdd0d6-f56a-4bba-93dc-154980b6e69a, starting on 6th October 2014 at 14:34 UTC.
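
Translated into code, the metric workflow above boils down to a few calls; here is a minimal sketch using the python-requests library (the endpoint URL and port are assumptions, and Keystone authentication is omitted):

import requests

BASE = 'http://localhost:8041/v1'  # assumed local Gnocchi endpoint

# create a metric governed by the "low" archive policy
metric = requests.post(BASE + '/metric',
                       json={'archive_policy_name': 'low'}).json()

# push some measures; they are aggregated synchronously on ingestion
requests.post(BASE + '/metric/%s/measures' % metric['id'],
              json=[{'timestamp': '2014-10-06T14:33:57', 'value': 43.1},
                    {'timestamp': '2014-10-06T14:34:12', 'value': 12}])

# read back the aggregated values (mean by default)
print(requests.get(BASE + '/metric/%s/measures' % metric['id']).json())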

Indexing your resources

Another object and concept that Gnocchi provides is the ability to manipulate resources. There is a basic type of resource, called generic, which has very few attributes. You can extend this type to specialize it, and that's what Gnocchi does by default by providing resource types known to OpenStack such as instance, volume, network or even image.

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json

{
  "id": "75C44741-CC60-4033-804E-2D3098C7D2E9",
  "project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
  "user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9
ETag: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
Last-Modified: Fri, 17 Apr 2015 11:18:48 GMT
Content-Type: application/json; charset=UTF-8

{
  "created_by_project_id": "ec181da1-25dd-4a55-aa18-109b19e7df3a",
  "created_by_user_id": "4543aa2a-6ebf-4edd-9ee0-f81abe6bb742",
  "ended_at": null,
  "id": "75c44741-cc60-4033-804e-2d3098c7d2e9",
  "metrics": {},
  "project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
  "revision_end": null,
  "revision_start": "2015-04-17T11:18:48.696288Z",
  "started_at": "2015-04-17T11:18:48.696275Z",
  "type": "generic",
  "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


The resource is created with the UUID provided by the user. Gnocchi handles the history of the resource, and that's what the revision_start and revision_end fields are for. They indicate the lifetime of this revision of the resource. The ETag and Last-Modified headers are also unique to this resource revision and can be used in a subsequent request using the If-Match or If-Not-Match headers, for example:

GET /v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9 HTTP/1.1
If-Not-Match: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
 
HTTP/1.1 304 Not Modified


Which is useful to synchronize and update any view of the resources you might have in your application.

You can use the PATCH HTTP method to modify properties of the resource, which will create a new revision of the resource. The history of the resources is, obviously, available via the REST API.

The metrics property of the resource allows you to link metrics to a resource. You can link existing metrics or create new ones dynamically:

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json

{
  "id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
  "metrics": {
    "temperature": {
      "archive_policy_name": "low"
    }
  },
  "project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
  "user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}

HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/ab68da77-fa82-4e67-aba9-270c5a98cbcb
ETag: "9f64c8890989565514eb50c5517ff01816d12ff6"
Last-Modified: Fri, 17 Apr 2015 14:39:22 GMT
Content-Type: application/json; charset=UTF-8

{
  "created_by_project_id": "cfa2ebb5-bbf9-448f-8b65-2087fbecf6ad",
  "created_by_user_id": "6aadfc0a-da22-4e69-b614-4e1699d9e8eb",
  "ended_at": null,
  "id": "ab68da77-fa82-4e67-aba9-270c5a98cbcb",
  "metrics": {
    "temperature": "ad53cf29-6d23-48c5-87c1-f3bf5e8bb4a0"
  },
  "project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
  "revision_end": null,
  "revision_start": "2015-04-17T14:39:22.181615Z",
  "started_at": "2015-04-17T14:39:22.181601Z",
  "type": "generic",
  "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


Haystack, needle? Find!

With such a system, it becomes very easy to index all your resources, meter them and retrieve this data. What's even more interesting is to query the system to find and list the resources you are interested in!

You can search for a resource based on any field, for example:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json

{
  "=": {
    "user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
  }
}


That query will return a list of all resources owned by the user_id bd3a1e52-1c62-44cb-bf04-660bd88cd74d.
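
In Python, the same search is a couple of lines with requests (again assuming the endpoint and imports from the earlier sketch):

query = {'=': {'user_id': 'bd3a1e52-1c62-44cb-bf04-660bd88cd74d'}}
print(requests.post(BASE + '/search/resource/instance', json=query).json())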

You can do fancier queries such as retrieving all the instances started by a user this month:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Content-Length: 113

{
  "and": [
    {"=": {"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"}},
    {">=": {"started_at": "2015-04-01"}}
  ]
}


And you can even do fancier queries than the fancier ones (still following?). What if we wanted to retrieve all the instances that were on host foobar on the 15th of April and that already had at least an hour of uptime? Let's ask Gnocchi to look in the history!

POST /v1/search/resource/instance?history=true HTTP/1.1
Content-Type: application/json
Content-Length: 113

{
  "and": [
    {"=": {"host": "foobar"}},
    {">=": {"lifespan": "1 hour"}},
    {"<=": {"revision_start": "2015-04-15"}}
  ]
}


I could also mention the fact that you can search for values in metrics. One feature that I will very likely include in Gnocchi 1.1 is the ability to search for resources whose specific metrics match some value, for example the ability to search for instances whose CPU consumption was over 80% during a month.

Cherries on the cake

While Gnocchi is well integrated with and based on common OpenStack technology, please do note that it is completely able to function without any other OpenStack component and is pretty straightforward to deploy.

Gnocchi also implements a full RBAC system based on the OpenStack standard oslo.policy, which allows pretty fine-grained control of permissions.

There is also some work ongoing to have HTML rendering when browsing the API using a Web browser. While still simple, we'd like to have a minimal Web interface served on top of the API for the same price!

The Ceilometer alarm subsystem supports Gnocchi with the Kilo release, meaning you can use it to trigger actions when a metric value crosses some threshold. And OpenStack Heat also supports auto-scaling your instances based on Ceilometer+Gnocchi alarms.

And there are a few more API calls that I didn't talk about here, so don't hesitate to take a peek at the full documentation!

Towards Gnocchi 1.1!

Gnocchi is a different beast in the OpenStack community. It is under the umbrella of the Ceilometer program, but it's one of the first projects that is not part of the (old) integrated release. Therefore we decided to have a release schedule not directly tied to OpenStack's, and we'll release more often than the rest of the old OpenStack components – probably once every 2 months or so.

What's coming next is a close integration with Ceilometer (e.g. moving the dispatcher code from Gnocchi to Ceilometer) and probably more features as we have more requests from our users. We are also exploring different backends such as InfluxDB (storage) or MongoDB (indexer).

Stay tuned, and happy hacking!

21 April, 2015 03:00PM by Julien Danjou

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Introducing ghrr: GitHub Hosted R Repository

ghrr logo

Background

R relies on package repositories for initial installation of a package via install.packages(). A crucial second step is update.packages(): for all currently installed packages, a list of available updates is constructed and offered for either one-by-one or bulk updates. This keeps the local packages in sync with upstream, and provides a very convenient way to obtain new features, bug fixes and other improvements. So by installing from a repository, we automatically gain the ability to track that repository for updates.

Enter drat

Fairly recently, the drat package was added to the R ecosystem. It makes both aspects of package distribution easy: providing a package (if you are an author) as well as installing it (if you are a user). Now, because drat is at the same time source code (as it is also a package providing the functionality) and a repository (using the features that drat provides), the "namespace" becomes a little cluttered.

But because a key feature of drat is the "one variable" unique identification via GitHub, I opted to create a drat repository in the name of a new organisation: ghrr. This is a simple acronym for GitHub Hosted R Repository.

Use cases

We can outline several use cases for packages in ghrr:

  • packages not published in a repo by their authors: I already use two like that:
    • fasttime, an impeccably fast parser for ISO datetimes by Simon Urbanek which was however never released into a repo by Simon, and
    • RcppR6, a very nice extension to both R6 (by Winston) and Rcpp, by Rich FitzJohn; similarly never released beyond GitHub;
  • packages possibly unsuitable for mainline repos:
    • Rblpapi is a great package by Whit Armstrong and John Laing to which I have been contributing quite a bit of late. As it requires a free-to-use but not open source library and headers from Bloomberg, it will never make it to the mainline repository for R, but hosting it in ghrr is perfect as I can easily update several machines at work once I cut a new development release;
    • winsorize is a small package I needed a few weeks ago; it is spun out of robustHD but does not yet contain new code so Andreas and I are content to keep it in this drat for now;
  • packages in pre-relase mode:
    • RcppArmadillo, where I announced both a release candidate before Armadillo 5.000 came out, and the actual RcppArmadillo 0.5.000.0, which is not (yet) on the mainline repository as two affected packages need a small update first. Users, however, can already get RcppArmadillo from the sibling Rcpp drat repo.
    • RcppToml is a new package in which I am implementing a TOML parser based on cpptoml. It works, but is not quite ready for public announcement yet, and hence is perfect for ghrr.

Going forward

ghrr is meant to be open. While anybody can open a drat repository, particularly on GitHub, it may be beneficial to somehow group packages. This is however not something that can be planned ex-ante: it may just happen if others who see similar benefits in this can in fact contribute. In that spirit, I strongly encourage pull requests.

Early on, I made my commit messages conform to a pattern of package version sha1 repourl to make code provenance of every commit very clear. Ideally, subsequent commits would conform to such a scheme, or replace it with a better one.

Some Resources

A few links to learn more about drat and ghrr:

Comments and questions via email or issue tickets are more than welcome. We hope that others find ghrr to be a useful tool for easy repository management and use via GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 April, 2015 02:34PM

hackergotchi for Olivier Berger

Olivier Berger

How to publish an HTML5+RDFa Web site from org-mode

I’m a big fan of org-mode (see previous posts), and I’ve started maintaining (sic) my professional webpage(s) with it.

But I’ve also recently tried and publish some more Semantic/Linked Data aware documents too (again, previous posts).

Ideally, I think my preferred workflow for publishing articles or documents of some importance would be to author them in org-mode, and then publish them as HTML5 including RDFa meta-data and annotations. Instead, I’ve more frequently been doing conversions of org-mode to LaTeX in order to submit a printable version, and later on decided to convert the LaTeX to HTML5+RDFa…

But one of the issues is how to properly embed the RDF meta-data inside the org-mode documents, so that the syntax is both compact and expressive enough.

I doubt there’s a universal solution, given that RDF tends to be complex, and graphs may not project easily along the mainly linear structure of an org-mode document, but anyway, there seem to be possible middle grounds that are practically good enough.

I’ve tried and implement a solution, which reuses the principles set by John Kitchin in Extending the org-mode link syntax with attributes, i.e. implementing an HTML exporter for a particular custom link type, which will convert the plist-like syntax to some RDFa constructs.

Here’s a description of the whole solution : http://www-public.telecom-sudparis.eu/~berger_o/test-org-publishing-rdfa.html

The nice thing about org-mode, and its literate programming babel environment, is that it allows embedding the code of the links exporter inside the org document, avoiding dissociating the converter from the document’s source and making the document self-contained.

The next step will probably be to author a paper (or convert back a “preprint” of mine) with org-mode, in order to provide Linked Research meta-data.

Stay tuned for more details, and in the meantime, I welcome any improvement to the org/babel/elisp setup.

Edit: I’ve recorded a webcast to provide a bit more details, available on YouTube : https://youtu.be/OyI3DVqllx4

21 April, 2015 11:49AM by Olivier Berger

hackergotchi for Jonathan Dowland

Jonathan Dowland

Useful script

Here's a useful shell procedure:

vigg ()
{
  git grep --color=auto -lz "$1" | xargs -r0 sh -c "vi +/\"$1\" \"\$@\" < /dev/tty"
}

git grep is a very effective way to run a recursive grep over a git repository, or part of one (by default, it limits its search to the sub-tree you are currently sitting in). I quite often find myself wanting to edit every file that matched a search, and so wrote this snippet. For example, vigg FIXME opens every file under the current sub-tree containing FIXME, with the search pattern primed so that n jumps to the next match.

The /dev/tty ugliness is to work around vi complaining that its standard input was set to /dev/null by xargs. BSD xargs has an option, -o, which sorts this out; GNU xargs doesn't, but the manual suggests the above portable workaround. This marks the first time a BSD tool has had a feature I've wanted that the GNU equivalent lacks.

21 April, 2015 09:19AM

Manuel A. Fernandez Montecelo

About the Debian GNU/Linux port for OpenRISC or1k

In my previous post I mentioned my involvement with the OpenRISC or1k port. It was the technical activity in which I spent most time during 2014 (Debian and otherwise, day job aside).

I thought that it would be nice to talk a bit about the port for people who don't know about it, and give an update for those who do know and care. So this post explains a bit how it came to be, details about its development, and finally the current status. It is going to be written as a rather personal account, for that matter, since I did not get involved enough in the OpenRISC community at large to learn much about its internal workings and aspects that I was not directly involved with.

There is not much information about all of this elsewhere, only bits and pieces scattered here and there, but specially not much public information at all about the development of the Debian port. There is an OpenRISC entry in the Debian wiki, but it does not contain much information yet. Hopefully, this piece will help a bit to preserve history and give an insight for future porters.

First Things First

I imagine that most people reading this post will be familiar with the terminology, but just in case: to create a new Debian port means to get a Debian system (the GNU/Linux variant, in this case) to run on the OpenRISC or1k computer architecture.

Setting to one side all differences between hardware and software, and as described in their site:

“The aim of the OpenRISC project is to create free and open source computing platforms”

It is therefore a good match for the purposes of Debian and Free Software world in general.

The processor has not been produced in silicon, or at least is not available to the masses. People with the necessary know-how can download the hardware description (Verilog) and synthesise it in an FPGA, or otherwise use simulators. It is not a piece of hardware that people can purchase yet, and there are no plans to mass-produce it in the near future either.

The two people (including me) involved in this Debian port did not have the hardware, so we created the port entirely by cross-compiling from other architectures, and then compiling inside Qemu. In a sense, we were creating a Debian port for hardware that "does not [physically] exist". The software that we built was tested by people who had the hardware available in FPGAs, though, so it was at least usable. I understand that people working on the arm64 port had to work similarly in the initial phases, working in the dark without access to real hardware to compile or test on.

The Spark

The first time that I heard about the initiative to create the port was in late February of 2014, in a post which appeared in Linux Weekly News (sent by Paul Wise) and Slashdot. The original post announcing it was actually from late January, from Christian Svensson (blueCmd):

“Some people know that I've been working on porting Glibc and doing some toolchain work. My evil master plan was to make a Debian port, and today I'm a happy hacker indeed!

Below is a link to a screencast of me installing Debian for OpenRISC, installing python2.7 via apt-get (which you shouldn't do in or1ksim, it takes ages! (but it works!)) and running a small Python script. http://asciinema.org/a/7362

So, now, what can a Debian Hacker do when reading this? (Even if one's Hackery Level is not that high, as in my case). And well, How Hard Can It Be? I mean, Really?

Well, in my own defence, I knew that the answer to the last two questions would be a resounding Very. But for some reason the idea grabbed me and I couldn't help but think that it would be a Really Exciting Project, and that somehow I would like to get involved. So, after considering it for a few days, I wrote to Christian around mid-March offering my help, and he welcomed me aboard.

The Ball Was Already Rolling

Christian had already been in contact with the people behind DebianBootstrap, and he had already created the repository http://openrisc.debian.net/ with many packages of the base system and beyond (read: packages name_version_or1k.deb available to download and install). The packages are still not signed with proper keys, though, so use your judgement if you want to try them.

After a few weeks, I got up to speed with the status of the project and got my system working with the necessary tools. This basically meant sbuild/schroot to compile new packages, with the base system that Christian had already got working installed in a chroot (probably with the help of debootstrap), and qemu-system-or1k to simulate the system.

Only a few of the packages were different from the versions in Debian, like gcc, binutils or glibc -- they had not been upstreamed yet. sbuild ran through qemu-system-or1k, so the compilation of new packages could happen "natively" (running inside Qemu) rather than by cross-compiling, pulling _or1k.deb packages for dependencies from the repository that Christian had prepared, and _all.deb packages from snapshots.debian.org.

I started by trying to get the packages that I [co-]maintain in Debian compiled for this architecture, creating the corresponding _or1k.deb. For most of them, though, I needed many dependencies compiled before I could even compile my packages.

The GNU autotools / autoreconf Problem

From very early on, many of the packages failed to build with messages such as:

Invalid configuration 'or1k-linux-gnu': machine 'or1k' not recognized
configure: error: /bin/bash ../config.sub or1k-linux-gnu failed

This means that software packages based on GNU autotools, which use configure scripts, need recent versions of the files config.sub and config.guess (shipped in their root directory) to be able to detect the architecture and generate the code accordingly.

This is counter-intuitive, taking into account that GNU autotools were designed to help with portability. Yet, when creating new Debian ports, it meant that unless upstream shipped very recent versions of config.{guess,sub}, the package would not compile straight away on the new architectures -- even though invoking gcc directly would have worked without problems in most cases for native compilation.

Of course this did not only affect or1k, and there was already the autoreconf effort underway as a way to update these files automatically when building Debian packages, pushed by people porting Debian to the new architectures added in 2013/2014 (mips64el, arm64, ppc64el), which encountered the same roadblock. This affected around a thousand source packages in unstable. A Royal Pain. Also, all of their reverse dependencies (packages that depended on those to be built) could not be compiled straight away.

The bugs were not Release Critical, though (none of these architectures were officially accepted at the time), so for people not concerned with the new ports there was no big incentive to get them fixed. This problem, which conceptually is easily solvable, prevented new ports from even attempting to compile vast portions of the archive straight away (cleanly, without modifications to the package or to the host system) for weeks or months.

The GNU autotools / autoreconf Solution

We tackled this problem mainly in two ways.

First, and more useful for Debian in general, was to do as other porters were doing and submit bug reports and patches to Debian packages requesting them to use autoreconf, and to NMU packages (uploading changes to the archive without the official maintainers' intervention). A few NMUs were made for packages which had had bug reports with patches available for a while, were in the critical path to get many other packages compiled, and were orphaned or had almost no maintainer activity.

The people working on the other new ports, and mainly Ubuntu people who helped with some of those ports and wanted to support them, had submitted a large number of requests since late 2013, so there was no shortage of NMUs to be made. Some porters, not being Debian Developers, could not easily get their changes applied; so I also helped the porters of other architectures a bit, especially later on before the freeze of Jessie, to get as many packages as possible compiled on those architectures.

The second way was to create dpkg-buildpackage hooks that unconditionally updated config.{guess,sub} before attempting to build the package on the local build system. This local and temporary solution allowed us in the or1k port to get many _or1k.deb packages into the experimental repository, which in turn would allow many more packages to compile. After a few weeks, I set up several sbuilds on a multi-core machine, attempting to build uninterruptedly packages that had not previously been built and which had their dependencies available. Every now and then (typically several times per day in peak times) I pushed the resulting _or1k.deb files to the repository, so more packages would have the necessary dependencies ready to attempt to build.
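
Such a hook can be very small. Here is a minimal sketch (hypothetical, not the exact script we used), relying on the fact that Debian's autotools-dev package ships current copies of both files in /usr/share/misc:

#!/bin/sh
# Hypothetical pre-build hook: overwrite any stale autotools helper
# files in the unpacked source tree with the current copies shipped
# by the autotools-dev package.
set -e
for f in config.guess config.sub; do
    find . -name "$f" -exec cp -v "/usr/share/misc/$f" {} \;
done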

Christian was doing something similar, and by April, at peak times, between the two of us we were compiling more than a hundred packages on some days -- a huge number of packages did not need any change other than up-to-date config.{guess,sub} files. At some point in late April, Christian set up wanna-build on a few hosts to do this more elegantly and smartly than my method, and more effectively as well.

Ugly Hacks, Bugs and Shortcomings in the Toolchain and Qemu

Some packages are extremely important because many other packages need them to compile (like cmake, Qt or GTK+), and they are themselves very complex and have dependency loops. They had deeper problems than the autoreconf issue and needed some seriously dirty hacking to get them built.

To try to get as many packages compiled as possible, we sometimes compiled these important packages with some functionality disabled, disabling some binary packages (e.g. Java bindings) or especially disabling documentation (using DEB_BUILD_OPTIONS=nodoc when possible, and more aggressively when needed by removing chunks of debian/rules). I tried to use the more aggressive methods in as few packages as possible, though -- about a dozen in total. We also used DEB_BUILD_OPTIONS=nocheck to speed up compilation and avoid build failures -- many packages' tests failed due to qemu-system-or1k not supporting multi-threading, which we could do nothing about at the time, but otherwise the packages mostly passed their tests fine.

Due to bugs and shortcomings in Qemu and the toolchain -- like the compiler lacking atomics, missing functionality in glibc, Qemu entering endless loops, or programs segfaulting (especially gettext, used by many packages and causing them to fail to build) -- we had to resort to some very creative workarounds or time-consuming dull work: editing debian/rules, or creating wrappers around the real programs to avoid or force certain options (like gcc -O0, since -O2 produced buggy binaries too often).
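
As an illustration of the wrapper trick, something along these lines can be put earlier in the PATH than the real compiler (a hypothetical sketch, assuming the real compiler has been renamed to gcc.real; not our exact script):

#!/bin/sh
# Hypothetical gcc wrapper: drop all -O flags from the command line and
# force -O0, working around miscompilation at -O2 in the early or1k
# toolchain.
n=$#
i=0
while [ "$i" -lt "$n" ]; do
    a=$1; shift
    case "$a" in
        -O*) ;;                  # drop optimisation flags
        *) set -- "$@" "$a" ;;   # keep everything else
    esac
    i=$((i + 1))
done
exec gcc.real "$@" -O0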

To avoid having a mix of cleanly compiled and hacked packages in the same repository, Christian set up a two-tiered repository system -- the clean one and the dirty one. In the dirty one we dumped all of the packages that we got built, no matter how. The packages in the clean one could use packages from the dirty one to build, but they themselves were compiled without any hackery. Of course this was not a completely airtight solution, since they could contain code injected at build time from the "dirty repository" (e.g. by static linking), and perhaps other quirks. We hoped to get rid of these problems later by rebuilding all packages against clean builds of all their dependencies.

In addition, Christian also spent significant amounts of time working inside the OpenRISC community, debugging problems, testing and recompiling special versions of the toolchain that we could use to advance in our compilation of the whole archive. There were other people in the OpenRISC community implementing the necessary bits in the toolchain, but I don't know the details.

Good Progress

Olof Kindgren wrote the OpenRISC health report April 2014 (actually posted in May), explaining the status at the time of projects in the broad OpenRISC community, and talking about the software side, Debian port included. Sadly, I think that there have been no more "health reports" since then. There was also a new post in Slashdot entitled OpenRISC Gains Atomic Operations and Multicore Support shortly thereafter.

On the Debian port side, from time to time new versions of packages entered unstable and we started to use those newer versions. Some of them had nice fixes, like the autoreconf updates, so they did not require local modifications. In other cases the new versions failed to build where old ones had worked (e.g. because the newer versions added support for, and dependencies on, new versions of gnutls, systemd or other packages not yet available for or1k), and we had to repeat or create more nasty hacks to get the packages built again.

But in general, progress was very good. There were about 10k arch-dependent packages in Debian at the time, and we got about half of them compiled by the beginning of May, counting clean and dirty. And, if I recall correctly, there was around the same number of arch=all packages (which can be installed on any architecture, after the package is built on one of them). Counting both, it meant that systems using or1k had about 15k packages available, or 75% of the whole Debian archive (at least of "main"; we excluded "contrib" and "non-free"). Not bad.

By the middle to end of May, we had about 6k arch-dependent packages compiled, and 4k to go. The count of packages eventually reached ~6.6k at its peak (I think in June/July). Many had been built with hacks and not yet rebuilt cleanly, but everything was fine until the number of packages built plateaued.

Plateauing

There were multiple reasons for that. One of them is that, after we had fixed the autoreconf issue locally in some packages, new versions were uploaded to Debian without fixing that problem (in many cases there was no bug report or patch yet, so it was understandable; in other cases the requests were ignored). The wanna-build for the clean repository set up by Christian rightly considered the packages out of date and prepared to build the more recent versions, which failed. Then, other packages entering the unstable archive and build-depending on newer versions of those could not be built ("BD-Uninstallable") until we built the newer versions of the dependencies in the dirty repository with local hacks. Consequently, the count of cleanly built packages went back and forth, when not backwards.

More challenging was the fact that, when creating a new port, language compilers which are written in that same language need to be built for that architecture first. Sometimes it is not the compiler itself, but the compile-time or run-time support for a language's modules that has not been ported yet. Obviously, as with other dependencies, large numbers of packages written in those languages are bound to remain uncompiled for a long time. As Colin Watson explained when porting Haskell's GHC to arm64 and ppc64el, untangling some of the chicken-and-egg problems of language compilers for new ports is extremely challenging.

Perl and Python are pretty much a pre-requisite of the base Debian system, and Christian got them working early on. But, for example, in May 247 packages build-depended on r-base-dev (GNU R) and 736 on ghc, and we did not have those dependencies compiled. Just counting those two, 1k of the remaining 4k to 5k source packages to be compiled for the new architecture would have to wait for a long time. Then there was Java, Mono, etc...

Even more worrying were the pending issues with the toolchain, like atomics in glibc, or make check failing for some packages in the clean repository built with wanna-build. Christian continued to work on the toolchain and liaising with the rest of the OpenRISC community; I continued to request more changes to the Debian archive through a few requests to use autoreconf, and by pushing a few more NMUs. Though many requests were attended to, I soon got negative replies/reactions and backed off a bit. In the meantime, the porters of the other new architectures at the time were mostly submitting requests to support them and not NMUing much either.

Upstreaming

Things continued more or less in the same state until the end of the summer.

The new ports needed as many packages built as possible before the evaluation of which official ports to accept (in early September, I think, with the final decision around the time of the freeze). Porters of the other new architectures (and maintainers, and other helpful Debian Developers) were by then more active in pushing for changes, especially the remaining autoreconf issues, many of which benefited or1k. As I said before, I also kept pushing NMUs now and then, especially during the summer, for packages which were not of immediate benefit for our port but helped the others (e.g. ppc64el needed updates to libtool's ltmain.sh which were not necessary for or1k, in addition to config.{guess,sub}).

In parallel, in the or1k camp, there were patches that needed to be sent upstream, like for example Python's NumPy, which I submitted in May to the Debian package and upstream, and which was uploaded to Debian in September with a new upstream release. Similar paths were followed between May and September for packages such as jemalloc, ocaml, gstreamer0.10, libgc, mesa, X.org's cf module and cmake (patch created by Christian).

In April, Christian had reached the amazing milestone of tracking down all of the contributors of the port of GNU binutils and getting them to assign copyright to the Free Software Foundation (FSF); all of the work was refreshed and upstreamed. In July or August, he started to gather information about the contributors of the GCC port, which had been started more than a decade ago.

After that, nothing much happened (from the outside) until the end of the year, when Christian sent a message about the status of upstreaming GCC to the OpenRISC community. There was only one remaining person who had to assign copyright to the FSF, but it was a blocker. In addition, there was the need to find one or more maintainers to liaise with upstream, review the patches, fix the remaining failures in the test suite and keep the port in good shape. A few months later, from what I can gather, the status remains the same.

Current Status, and The Future?

In terms of the Debian port, there have not been huge visible changes since the end of the summer, and not only because of the Jessie freeze.

It seems that for this effort to keep going forward and be sustainable, sorting out the issues with GCC and glibc is essential. That means having the toolchain completely pushed upstream and in good shape, and in particular completing the copyright assignment. Debian will not accept private forks of those essential packages without a very good reason, even in unofficially supported ports; and from the point of view of porters, working on the remaining not-yet-built packages with continuing problems in the toolchain is very frustrating and time-consuming.

Other than that, there is already a significant amount of software available that could run on an or1k system, so I think that overall the project has achieved a significant amount of success. Granted, KDE and LibreOffice are not available yet, and neither are the tools depending on Haskell or Java. But a lot of software is available (including things high in the stack, like XFCE), and in many aspects it should provide a much more functional system than the one available on Linux (or other free software) systems in the late 1990s. If the blocking issues are sorted out in the near future, the effort needed to get a very functional port, on par with the unofficial Debian ports, should not be that big.

In my opinion, and looking at the big picture, that is not bad at all for an architecture whose hardware implementation is not easy to come by, and whose port was created almost solely with simulators. That it has been possible to get this far with such meagre resources is an amazing feat of Free Software, and of Debian in particular.

As for the future, time will tell, as usual. I will try to keep you posted if there are any significant changes.

21 April, 2015 12:16AM by Manuel A. Fernandez Montecelo

April 20, 2015

Mark Brown

Flashing an AT91SAM9G20-EK from bare metal

Since I just had cause to do this, and it was harder than it needed to be due to bitrot in the public documentation I could find, I thought I'd write up how to get a modern bootloader onto older Atmel boards. These instructions are written for the AT91SAM9G20-EK, though they should also apply to other Atmel boards of a similar generation.

These instructions are for booting from NAND, since it's the default for the board; for this, J34 should be fitted to enable the chip select and J33 disconnected to disable the dataflash. If something broken has been programmed into flash then booting while holding down BP4 should cause the second stage bootloader to trash itself and ensure the ROM bootloader puts itself into recovery mode; alternatively, removing both J33 and J34 during power on will also ensure no second stage bootloader is found.

There is a ROM bootloader, but it just loads a small region from the boot media and jumps into it, which isn't enough for u-boot, so there is a second stage bootloader called AT91Bootstrap. Download sources for current versions from GitHub. If it (or a more sensibly written equivalent) is not yet merged upstream you'll need to apply this patch to get it to build with a modern compiler, or you could use an old toolchain (which you'll need in the next step anyway):

diff --git a/board/at91sam9g20ek/board.mk b/board/at91sam9g20ek/board.mk
index 45f59b1822a6..b8251ca2fbad 100644
--- a/board/at91sam9g20ek/board.mk
+++ b/board/at91sam9g20ek/board.mk
@@ -1,7 +1,7 @@
 CPPFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft
 
 ASFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft

Once that’s done you can build with:

make at91sam9g20eknf_uboot_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

producing binaries/at91sam9g20ek-nandflashboot-uboot-${VERSION}.bin. This configuration will look for u-boot at 0x40000 in the flash, so we need a u-boot binary. Unfortunately modern compilers seem to produce binaries that fail with no output. This is normally a sign that the ABI needs specifying more clearly, as above, but I got fed up trying to spot what was missing so I used an old CodeSourcery 2013.05 release instead; hopefully future versions of u-boot will be able to build for this target with modern toolchains. Grab a recent release (I used 2015.01) and build with:

cd ${UBOOT}
make at91sam9g20ek_nandflash_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

to get u-boot.bin.

These can then be flashed using the Atmel flashing tool SAM-BA. There is a Linux version, though it appears to rely on old versions of TCL/TK, so if you have trouble starting it the easiest thing is to use the sacrificial Windows laptop you've obtained in order to run the "entertaining" flashing tools companies sometimes provide without risking a real system (or, in my case, your shiny new laptop that you've not yet installed Linux on). Start it, then:

  1. Connect SAM-BA to the device following the dialog on start.
  2. Make sure you’ve selected “NandFlash” in the memory type tabs in the center of the window.
  3. Run the “Enable NandFlash” script.
  4. Run the “Erase All” script.
  5. Run the “Send Boot File” script and provide the at91bootstrap binary.
  6. Set “Send File Name” to be the u-boot binary you built earlier and “Address” to be 0x40000.
  7. Click “Send File”
  8. Press the reset button

which should result in at91bootstrap output followed by u-boot output on the serial console. A similar process works for the AT91SAM9263; there the jumper you need is J19. (Sadly u-boot does not flash pictures of cute animals or forested shorelines on the screen as the default "Basic LCD Project 1.4" firmware does; I'm not sure this "full operating system" thing is really delivering improved functionality.)

20 April, 2015 04:03PM by Mark Brown

hackergotchi for Daniel Pocock

Daniel Pocock

WebRTC video from mini-DebConf Lyon, France

Thanks to the Debian France community for putting on another very successful mini-DebConf in Lyon recently.

It was a great opportunity to meet more people for the first time and share ideas.

On the first day, I gave a talk about the current status of WebRTC and the Debian WebRTC portal. You can now watch the video of the talk online.

Unfortunately, we had some wifi problems that delayed the demonstration but we did eventually see it work successfully towards the end of the talk.

20 April, 2015 12:15PM by Daniel.Pocock

Bálint Réczey

Hot upgrading Erlang applications using tools in Debian

Erlang lets you write applications supporting zero downtime by switching a live system over to a different application version, converting the application's state on the fly to the new representation. Debian packages, however, can have only one version installed on a system, which makes using Erlang's hot-upgrade feature difficult.

Engineers at Yakaz (Jean-Sébastien Pédron) came up with a nice solution by creating separate directories for each application release and creating .deb packages for managing the transitions. I had to solve the same problem recently and found that the erlsvc Perl application they created needed a few patches to be usable with the latest Erlang and other packages; with those changes it worked perfectly. Yakaz was not interested in accepting the patches and developing it further, but let me continue the maintenance. Please find the updated erlsvc application under my GitHub account, and feel free to submit patches if you find something to fix in it.

I have also packaged erlsvc as an official Debian package, and it is waiting in the NEW queue to be accepted. When it enters unstable it will take very little effort to make your applications hot-upgradeable on Debian!

20 April, 2015 09:58AM by Réczey Bálint

Russ Allbery

Review: The Girls from Alcyone

Review: The Girls from Alcyone, by Cary Caffrey

Series: Girls from Alcyone #1
Publisher: Tealy
Copyright: 2011
ISBN: 1-105-33727-8
Format: Kindle
Pages: 315

Sigrid is a very special genetic match born to not particularly special parents, deeply in debt in the slums of Earth. That's how she finds herself being purchased by a mercenary corporation at the age of nine, destined for a secret training program involving everything from physical conditioning to computer implants, designed to make her a weapon. Sigrid, her friend Suko, and the rest of their class are a special project of the leader of the Kimura corporation, one that's controversial even among the corporate board, and when the other mercenary companies unite against Kimura's plans, they become wanted contraband.

This sounds like it could be a tense SF thriller, but I'll make my confession at the start of the review: I had great difficulty taking this book seriously. Initially, it had me wondering what horrible alterations and mind control Kimura was going to impose on the girls, but it very quickly turned into, well, boarding school drama, with little of the menace I was expecting. Not that bullying, or the adults who ignore it to see how the girls will handle it themselves, are light-hearted material, but it was very predictable. As was the teenage crush that grows into something deeper, the revenge on the nastiest bully that the protagonist manages to not be responsible for, and the conflict between unexpectedly competent girls and an invasion of hostile mercenaries.

I'm not particularly well-read or informed about the genre, so I'm not the best person to make this comparison, but the main thing The Girls from Alcyone reminded me of was anime or manga. The mix of boarding-school interpersonal relationships, crushes and passionate love, and hypercompetent female action heroes who wear high heels and have constant narrative attention on their beauty had that feel to it. Add in the lesbian romance and the mechs (of sorts) that show up near the end of the story, and it's hard to shake the feeling that one is reading SF yuri as imagined by a North American author.

The other reason why I had a hard time taking this seriously is that it's over-the-top action sequences (it's the Empire Strikes Back rescue scene!) mixed with rather superficial characterization, with one amusing twist: female characters almost always end up being on the side of the angels. Lady Kimura, when she appears, turns into exactly the sort of mentor figure that one would expect given the rest of the story (and the immediate deference she got felt like it was lifted from anime). The villains, meanwhile, are hissable and motivated by greed or control. While there's a board showdown, there's no subtle political maneuvering, just a variety of more or less effective temper tantrums.

I found The Girls from Alcyone amusing, and even fun to read in places, but that was mostly from analyzing how closely it matched anime and laughing at how reliably it delivered characteristic tropes. It thoroughly embraces its action-hero story full of beautiful, deadly women, but it felt more like a novelization of a B-grade sci-fi TV show than serious drama. It's just not well-written or deep enough for me to enjoy it as a novel. None of the characters were particularly engaging, partly because they were so predictable. And the deeper we got into the politics behind the plot, the less believable I found any of it.

I picked this up, along with several other SFF lesbian romances, because sometimes it's nice to read a story with SFF trappings, a positive ending, and a lack of traditional gender roles. The Girls from Alcyone does have most of those things (the gender roles are tweaked but still involve a lot of men looking at beautiful women). But unless you really love anime-style high-tech mercenary boarding-school yuri, want to read it in book form, and don't mind a lot of cliches, I can't recommend it.

Followed by The Machines of Bellatrix.

Rating: 3 out of 10

20 April, 2015 04:28AM

April 19, 2015

Richard Hartmann

Release Critical Bug report for Week 16

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1031 (Including 146 bugs affecting key packages)
    • Affecting Jessie: 53 (key packages: 42) That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting Jessie and unstable: 49 (key packages: 42) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 12 bugs are tagged 'patch'. (key packages: 9) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 3 bugs are marked as done, but still affect unstable. (key packages: 2) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 34 bugs are neither tagged patch, nor marked done. (key packages: 31) Help make a first step towards resolution!
      • Affecting Jessie only: 4 (key packages: 0) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 1 bug is in a package that has been unblocked by the release team. (key packages: 0)
        • 3 bugs are in packages that are not unblocked. (key packages: 0)

How do we compare to the Squeeze and Wheezy release cycles?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99) 147 (112+35)
1 93 (60+33) 287 (171+116) 140 (104+36)
2 82 (46+36) 271 (162+109) 157 (124+33)
3 25 (15+10) 249 (165+84) 172 (128+44)
4 14 (8+6) 244 (176+68) 187 (132+55)
5 2 (0+2) 224 (132+92) 175 (124+51)
6 release! 212 (129+83) 161 (109+52)
7 release+1 194 (128+66) 147 (106+41)
8 release+2 206 (144+62) 147 (96+51)
9 release+3 174 (105+69) 152 (101+51)
10 release+4 120 (72+48) 112 (82+30)
11 release+5 115 (74+41) 97 (68+29)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

19 April, 2015 09:35PM by Richard 'RichiH' Hartmann

Patrick Schoenfeld

Resources about writing puppet types and providers

When doing a lot of devops work with Puppet, you might get to a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex to achieve with the Puppet DSL. One example of such a case could be if you need to interact with a system binary a lot. In this case, writing your own puppet type might be handy.

Now where to start, if you want to write your own type?

Overview: modeling and providing types

First thing that you should know about puppet types (if you do not already): a puppet resource type consists of a type and one or more providers.

The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It's a good idea to start with a rough idea of what properties you'll be managing with your resource and what values they will accept, since the type also does the job of validation.

What actually needs to be done on the target system is the provider's job. There can be different providers for different implementations (e.g. a native Ruby implementation or an implementation using a certain utility), different operating systems, and other conditions.

A combination of a type and a matching provider is what forms a (custom) resource type.
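
On disk, inside a module, the two halves live in conventional places. A sketch of the layout (module and type names are hypothetical):

# Hypothetical module layout: where a custom type and its provider live.
mkdir -p mymodule/lib/puppet/type
mkdir -p mymodule/lib/puppet/provider/mytype
# the model: properties, parameters, validation
touch mymodule/lib/puppet/type/mytype.rb
# one implementation of the type
touch mymodule/lib/puppet/provider/mytype/ruby.rb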

Resources

Next I’ll show you some resources about puppet provider development, that I found useful:

Official documentation:

Types and providers are actually quite well documented in the official documentation, although it does not go too much into the details:

Blog posts:
A hands-on tutorial in multiple parts, with good explanations, can be found in the blog posts by Gary Larizza:

Books:
Probably the most complete information, including explanations of the puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it's always worth a peek at how others did it. The puppet source contains all providers of the official puppet release, as well as the base libraries for puppet types and providers with their API documentation: https://github.com/puppetlabs/puppet/

19 April, 2015 11:29AM by Patrick Schönfeld

hackergotchi for Wouter Verhelst

Wouter Verhelst

Youn Sun Nah 5tet: Light For The People

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the first world war, it has a museum dedicated to that subject that I remembered going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with jazz before, but it had always been something to be used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years, in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it got lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I haven't listened to it anymore in quite a while.

But that time came to an end today. The record which I thought I'd lost wasn't, it was just in a weird place, and while cleaning yesterday, I found it sitting among a bunch of old stuff that I was going to throw out. Putting on the record today made me realize again how good it really is, and I thought that I might want to see if she was still active, and if she might perhaps have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she'd also become a lot more known in the Jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

19 April, 2015 09:25AM

Laura Arjona

Six months selfhosting: my userop experiences

Note: In this post I mention some problems and ask questions (to myself, thinking aloud). The goal is not to get answers to those questions (I suppose I will find them sooner or later on the internet, in manuals and so on), but to show the kind of problems and questions that arise in my selfhosting adventures, which I suppose are common to other people trying to administer a home server with some web services.

Am I a userop? Well, I'm somewhere in between a (GNU/Linux) user and a sysadmin: I have studied computer technical engineering, but most of my experience has been in helpdesk, providing support for Windows users. I've been running Debian on some LAMP boxes at work (without GUI) since 2008 or so, and on my desktops (with GUI) since 2010. I don't code or package, but I don't mind trying to read code and understand it (or not). I know a bit of C, a bit of Python, a bit of PHP, and enough Perl to open a Perl file and close it after two minutes, understanding that it's great, but too much for me :) I translate software, so I'm not scared of cloning a repository, editing files, committing or submitting a patch. I'm not scared of compiling a program (except if it's an Android app: I try to avoid setting up the development environment just to try some translation that I made… but I built my Puma before the binary was available for download or in F-Droid).

In conclusion, I feel more like a "GNU/Linux power user" than a "sysadmin". Sometimes just a "user", or even a "newbie" (for example, I don't know the Unix/Linux folder tree very well… where are the wallpapers stored? Does it depend on the desktop that I use?).

Anyway. I won’t stop my free software + free networks digital life because I don’t know many things. I bought a small server for home last September, and I wanted to try to selfhost some services, for me and for my family. I want to be a “home sysadmin” or something like that, so I joined the “userops” mailing list :)

Here you have my experiences on selfhosting/being an userop until now.

Mail

I even didn’t try to setup my mail server, because many people say it’s a pain (although nice articles were published about how to do it, for example this series in ArsTechnica) and I need a static IP which is 14€/month more to my ISP, and Gandi, the place where I rented my domain name, provides mail, and they use Debian and Roundcube, and sponsor Debian too, so I decided to trust on them.

So this is my strategy now, to try to keep mail under my control:

  • Trust my domain provider.
  • Backup my mail and keep local copies, removing sensitive stuff from the server.
  • Use and spread the word about GPG encryption.
  • Try not to send photos or videos by mail, just send the link to my MediaGoblin instance (see below).

MediaGoblin

I’ve setup two MediaGoblin instances (yes, two!). I managed to do it in Debian 7 stable (I think NodeJS’ npm was not needed then), but soon later I upgraded to Jessie so now it’s even better.

I installed Nginx and PostgreSQL via apt, to use them for both instances (and probably some more software later).

One instance is public; it uses a Debian user and a PostgreSQL database, and it's running at http://media.larjona.net
I have requested an SSL cert from Gandi but I still haven't deployed it (lazy LArjona!!).

The other instance is private, for family photos. I didn't know very well how much of my existing setup I could reuse, nor how to keep both instances isolated in case of downtimes or attacks… I know more or less the concept of a "chroot", but I don't know how to deploy one on my machine. So I decided to use another Debian user, another PostgreSQL database, deploy MediaGoblin in a different folder, and create another virtual server in my Nginx to serve it. I managed to set up that virtual server to http-authenticate, serve content on a different port, and use a self-signed SSL certificate (it's only for family, so it does not matter). I created another (unprivileged) Debian user with a password for the nginx authentication, and gave my family the URL in the form https://mediaprivate.larjona.net:PortNumber (mediaprivate is a string, and PortNumber is a number), plus the user and password. I think they don't use the instance much, but at least I upload photos there from time to time and email the link instead of emailing the photos themselves (they don't use GPG either…).
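
For what it's worth, the supporting bits (the password file for the nginx basic auth, and the self-signed certificate) can be created with standard tools; a sketch, with file paths that are just examples and not necessarily what I used:

# Sketch: password file for the nginx basic auth (openssl prompts for
# the password) and a long-lived self-signed certificate.
sudo sh -c 'printf "%s:%s\n" mediaprivate "$(openssl passwd -apr1)" \
    > /etc/nginx/.htpasswd-private'
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout /etc/ssl/private/mediaprivate.key \
    -out /etc/ssl/certs/mediaprivate.crt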

Upgrades

I upgraded MediaGoblin from 0.7.1 to 0.8.0 successfully, and sent a report about how I did it to the mailing list. First I upgraded the public instance; once I had figured out the process, I upgraded the second instance to test my instructions, and then sent the report with the instructions to the mailing list.

Static site and LimeSurvey: the power of free software (with instructions)

I wanted to act as a mirror of floss2013.libresoft.es and surveys.libresoft.es, since they suffered a downtime and I had participated in that project (not in the sysadmin part, but in the research and content creation).

The static site floss2013.libresoft.es offered a zip with the whole website tree (since the website was licensed under the AGPL), and I had access to the git repo holding the development copy of the website. So I just cloned the repository, set up another nginx virtual server on my machine, and tuned my DNS zone in the Gandi website to serve floss2013.larjona.net from home. A 10-minute setup, YAY! #inGitWeTrust #FreeSoftwareFTW :)

For surveys.larjona.net I had to install a LimeSurvey instance. I knew how to do it because we use LimeSurvey at work, but at home I had Nginx instead of Apache, and PostgreSQL instead of MySQL. And no PHP… I searched for how to install PHP with Nginx (I can use apt-get, nice!) and how to install LimeSurvey with Nginx and PostgreSQL (I had documentation about that, so I followed it, and it worked).
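
For the record, the PHP part on Jessie boils down to something like this (a sketch; package and socket names as in Jessie's defaults, your setup may differ):

# PHP via FastCGI for nginx, plus the PostgreSQL driver LimeSurvey needs.
sudo apt-get install php5-fpm php5-pgsql
# nginx then passes *.php requests to the php5-fpm socket, e.g. with
# "fastcgi_pass unix:/var/run/php5-fpm.sock;" in the location block.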

To make the data available (one survey and its results, so people can log in as a visitor to query it and get statistics), I downloaded the LimeSurvey export dataset that we had been providing on the static website, followed the replication instructions (hey, I wrote them!), and they worked #oleole! (And here, dear researchers, it is demonstrated that free software and free culture really empower your research and help spread your results.)

Etherpad: not so easy, it seems!

I’m trying to install Etherpad-Lite, but I’m suffering a bit. I think I did everything ok according to some guides but I get “Bad Gateway” and these kind of errors when trying to browse with Lynx in the host:

[error] 3615#0: *24 upstream timed out (110: Connection timed out) 
while reading response header from upstream, 
client: 127.0.0.1, 
server: pad.larjona.net, 
request: "GET / HTTP/1.0", 
upstream: "http://127.0.0.1:9001/", 
host: "pad.larjona.net"

2015/04/17 20:52:56 [error] 3615#0: *24 connect() failed 
(111: Connection refused) while connecting to upstream, 
client: 127.0.0.1, 
server: pad.larjona.net, 
request: "GET / HTTP/1.0", 
upstream: "http://[::1]:9001/", 
host: "pad.larjona.net"

I’m not sure if I need to open some port in iptables, my router, or change my nginx configuration because the guides assume you’re only serving one website in the port 80 (and I have several of them, now…), or what… I’ve spent three chunks of time (maybe ~2h each?) on this, in different days, and couldn’t figure it out, so I decided to round-robin in my TODO list.

Userops thoughts

Debian brings peace of mind (for me)

On one side, maintaining a Debian box is quite easy, and the more software that is packaged, the less time I spend installing or upgrading. I like being on stable; I'm on Jessie now (I migrated when it was frozen), and I'll stay on stable as much as I can.

I like that I can use the software I installed via apt-get for several services (nginx, PostgreSQL… and probably some more software later). About the software that is not packaged (MediaGoblin, LimeSurvey, Etherpad, maybe others later), I wonder how dependencies and updates are handled. And maybe (probably) I have installed some components several times, once for each service (this sounds like a Windows box #grr).

For example, MediaGoblin uses PyPump. PyPump 0.5 is packaged in Debian Jessie; MediaGoblin uses PyPump 0.7+. What if PyPump 0.7+ gets, let's say, into jessie-backports? Can I benefit from that?

I know that MediaGoblin's upgrade instructions include upgrading the dependencies, but what about a security patch in one dependency? Should I upgrade the pip modules periodically? How do I know if an upgrade is recommended because it patches a vulnerability, or if it just brings new features (and maybe breaks my setup)?

This kind of thing is the "peace of mind" that Debian packaging brings me: when a piece of software is packaged, I know I may need to take care of proper setup and configuration, but afterwards it's kind of easy to maintain (since the Debian maintainers take care of the rest). I don't mind cloning a repo and compiling; I mind what comes later, and coexistence with other programs/services. I trust the MediaGoblin community and I'm an active member (I'm not a developer, but I hang out on IRC, follow the mailing list, etc.), but for example I don't know anything about the Etherpad project. And I don't feel like joining the community (I'm already an active member of Debian, MediaGoblin, F-Droid and Pump.io, a translator for LimeSurvey and many other small apps that I use, and in the future I will use more services, like OwnCloud and XMPP…); joining the community of every piece of software that I use is becoming unsustainable :s

Free software is more than software

I follow the userops mailing list, and it's becoming very technical. I mostly understand the problems (which are similar to the problems I face: how to isolate different services, how to configure them easily, how to make them installable by an average user…), but I don't understand most of the solutions proposed. I think we probably need technical solutions, but in the meantime some issues can be addressed not with software but with other means: good documentation, community support, translations, beta-testers…

This is my conclusion so far. When a project is well documented, I think I can find my way to selfhost it, no matter whether the software is packaged (or "contained") or not. MediaGoblin and LimeSurvey are well documented, and their user support channels are very responsive.

I find lots of instructions that assume you will dedicate a whole machine to their service (and nothing else). And lots of documentation for the LAMP stack, but not for Nginx + PostgreSQL with Node instead of PHP… So, for each "particularity" of my setup, I search the internet and try to pick good sources to help me do what I want to do.

I’m kind of privileged

Some elements, not software related, to take into account as “pre-requisites for succeed” selfhosting services:

  • I knew what to search for.
  • I knew which sites to visit from the results (arch wiki, debian wiki, stack overflow, etc.; some of them were not the top hit in the results).
  • I had time to read several sources and make up my mind about what to do and how.
  • I can read, understand, and write in English.
  • I have no fear of my broken English.
  • I have no impostor syndrome.
  • I felt welcome in the FLOSS communities where I hung out.

These aspects are not present in a lot of people. If I look around at the "computer users" that I know (mostly Windows+Android, some GNU/Linux users, some Mac OS X users, some iOS users), I find that they search for things like "X does not work", or they cannot write a proper search query in English. Or they trust some random person writing a recipe on their blog, without first trying to understand what the recipe does. Other people just say "I'm not a professional sysadmin, I'll just do what «everybody» does (aka use Google services or whatever). What if I try and I don't succeed?". Things like that.

We may need some technical solutions (and hackers are thinking about that, and working on it). But I feel that we need, even more, a huge group of beta-testers, dogfooding people, adventurers who try the half-cooked solutions and report successful and unsuccessful experiences, to guide the research and make software technologies advance. I'm not sure if I am a userop, but I feel part of that "vanguard force"; I want to be part of the future of free software and free networks.

Comments?

You can comment about this post on this pump.io thread.


Filed under: My experiences and opinion Tagged: Communities, Contributing to libre software, Debian, Developer motivations, English, free networks, Free Software, Freedom, innovation, MediaGoblin, Moving into free software, Project Management, selfhosting, sysadmin

19 April, 2015 12:06AM by larjona

April 18, 2015

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2015/11-16

only one week left until the jessie release. yay!

in the last weeks I didn't find many RC bugs that I could fix; still, here's the short list. nice feature: I mostly helped others or could build on work done by others.

  • #669735 – dpkg-www: "dpkg-www: transition towards Apache 2.4"
    sponsor QA upload from Jean-Michel Nirgal Vourgère
  • #669777 – yocto-reader: "yocto-reader: transition towards Apache 2.4"
    sponsor NMU from Jean-Michel Nirgal Vourgère, upload to DELAYED/2
  • #669796 – w3c-linkchecker: "w3c-linkchecker: transition towards Apache 2.4"
    cherry-pick periapt's commit from 2012 (pkg-perl)
  • #764284 – testdisk: "[testdisk] after ntfs-3g upgrade, testdisk cannot be installed (Depends: error)"
    tag + sid to get it out of jessie RC bugs
  • #780629 – libibverbs1: "libibverbs1: please add Breaks: libopenmpi1.3"
    upload NMU prepared by Andreas Beckmann, adding a Breaks, upload to DELAYED/2
  • #780729 – pbuilder: "pbuilder must define PATH as in debian-policy (and as used on buildds)"
    downgrade
  • #782160 – src:chrony: "chrony: Multiple issues: CVE-2015-1821 CVE-2015-1822 CVE-2015-1853"
    sponsor maintainer upload by Joachim Wiedorn

18 April, 2015 10:52PM

hackergotchi for Neil McGovern

Neil McGovern

Taking office

Yesterday, my first term started as the Debian Project Leader. There’s been a large number of emails congratulating me, and thanks to everyone who sent those. I’d also like to thank Mehdi Dogguy and Gergely Nagy for running, and of course Lucas Nussbaum for his service over the past two years.

Lucas also did a great handover, and so (I hope!) I'm aware of most of the issues that are ongoing. As stated previously, I'll keep my daily log of activities in /srv/leader/news/ on master.debian.org.

18 April, 2015 01:36PM by Neil McGovern

Russ Allbery

Review: The Long Way to a Small, Angry Planet

Review: The Long Way to a Small, Angry Planet, by Becky Chambers

Publisher: CreateSpace
Copyright: 2014
ISBN: 1-5004-5330-7
Format: Kindle
Pages: 503

The Wayfarer is a tunneling ship: one of the small, unremarked construction ships that help build the wormhole network used for interstellar transport. It's a working ship with a crew of eight (although most people would count seven and not count the AI). They don't all like each other — particularly not the algaeist, who is remarkably unlikeable — but they're used to each other. It's not a bad life, although a more professional attention to paperwork and procedure might help them land higher-paying jobs.

That's where Rosemary Harper comes in. At the start of the book, she's joining the ship as their clerk: nervous, hopeful, uncertain, and not very experienced. But this is a way to get entirely away from her old life and, unbeknownst to the ship she's joining, her real name, identity, and anyone who would know her.

Given that introduction, I was expecting this book to be primarily about Rosemary. What is she fleeing? Why did she change her identity? How will that past come to haunt her and the crew that she joined? But that's just the first place that Chambers surprised me. This isn't that book at all. It's something much quieter, more human, more expansive, and more joyful.

For one, Chambers doesn't stick with Rosemary as a viewpoint character, either narratively or with the focus of the plot. The book may open with Rosemary and the captain, Ashby, as focal points, but that focus expands to include every member of the crew of the Wayfarer. We see each through others' eyes first, and then usually through their own, either in dialogue or directly. This is a true ensemble cast. Normally, for me, that's a drawback: large viewpoint casts tend to be either jarring or too sprawling, mixing people I want to read about with people I don't particularly care about. But Chambers avoids that almost entirely. I was occasionally a touch disappointed when the narrative focus shifted, but then I found myself engrossed in the backstory, hopes, and dreams of the next crew member, and the complex ways they interweave. Rosemary isn't the center of this story, but only because there's no single center.

It's very hard to capture in a review what makes this book so special. The closest that I can come is that I like these people. They're individual, quirky, human (even the aliens; this is more from the Star Trek tradition of alien worldbuilding), complicated, and interesting, and it's very easy to care about them. Even characters I never expected to like.

The Long Way to a Small, Angry Planet does have a plot, but it's not a fast-moving or completely coherent one. The ship tends to wander, even when the mission that gives rise to the title turns up. And there are a lot of coincidences here, which may bother you if you're reading for plot. At multiple points, the ship ends up in exactly the right place to trigger some revelation about the backstory of one of the crew members, even if the coincidence strains credulity. Similar to the algae-driven fuel system, some things one just has to shrug about and move past.

On other fronts, though, I found The Long Way to be refreshingly willing to take a hard look at SF assumptions. This is not the typical space opera: humans are a relatively minor species in this galaxy, one that made rather a mess of their planet and are now refugees. They are treated with sympathy or pity; they're not somehow more flexible, adaptable, or interesting than the rest of the galaxy. More fascinatingly to me, humans are mostly pacifists, a cultural reaction to the dire path through history that brought them to their current exile. This is set against a backdrop of a vibrant variety of alien species, several of whom are present onboard the Wayfarer. The history and background of the other species are not, sadly, as well fleshed out as the humans', but each comes with at least a few twists that add interest to the story.

But the true magic of this book, the thing that it has in overwhelming abundance, is heart. Not everyone in this book is a good person, but most of them are trying. I've rarely read a book full of so much empathy and willingness to reach out to others with open hands. And, even better, they're all nice in different ways. They bring their own unique personalities and approaches to their relationships, particularly the complex web of relationships that connects the crew. When bad things happen, and, despite the overall light tone, a few very bad things happen, the crew rallies like friends, or like chosen family. I have to say it again: I like these people. Usually, that's not a good sign for a book, since wholly likeable people don't generate enough drama. But this is one of the better-executed "protagonist versus nature" plots I've read. It successfully casts the difficulties of making a living at a hard and lonely and political job as the "nature" that provides the conflict.

This is a rather unusual book. It's probably best classified as space opera, but it doesn't fit the normal pattern of space opera and it doesn't have enough drama. It's not a book about changing the universe; at the end of the book, the universe is in pretty much the same shape as we found it. It's not even about the character introduced in the first pages, or really that much about her dilemma. And it's certainly not a book about winning a cunning victory against your enemies.

What it is, rather, is a book about friendships, about chosen families and how they form, about being on someone else's side, about banding together while still being yourself. It's about people making a living in a hard universe, together. It's full of heart, and I loved it.

I'm unsurprised that The Long Way to a Small, Angry Planet had to be self-published via a Kickstarter campaign to find its audience. I'm also unsurprised that, once it got out there, it proved very popular and has now been picked up by a regular publisher. It's that sort of book. I believe it's currently out of print, at least in the US, as its new publisher spins up that process, but it should be back in print by late 2015. When that happens, I recommend it to your attention. It was the most emotionally satisfying book I've read so far this year.

Rating: 9 out of 10

18 April, 2015 06:24AM

hackergotchi for Steve Kemp

Steve Kemp

skx-www upgraded to jessie

Today I upgraded my main web-host to the Jessie release of Debian GNU/Linux.

I performed the upgraded by changing wheezy to jessie in the sources.list file, then ran:

apt-get update
apt-get dist-upgrade

For some reason this didn't upgrade my kernel, which remained at the 3.2.x version. That failed to boot, due to some udev/systemd issues (lots of "waiting for job: udev /dev/vda", etc., etc.). To fix this I logged into my KVM host, chrooted into the disk image (which I mounted via kpartx), and installed the 3.16.x kernel, before rebooting into that.
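
For the record, the recovery was roughly the following (a sketch; the image path, partition layout, and kernel package are assumptions, not my exact setup):

# On the KVM host: map the guest image's partitions, chroot in,
# install a Jessie kernel, then clean up.
kpartx -av /path/to/guest.img       # creates /dev/mapper/loopXpN entries
mount /dev/mapper/loop0p1 /mnt
chroot /mnt apt-get install linux-image-amd64
umount /mnt
kpartx -dv /path/to/guest.img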

All my websites seemed to be OK, but I made some changes regardless. (This was mostly for "neatness", using Debian packages instead of gems, and installing the attic package rather than keeping the source-install I'd made to /opt/attic.)

The only surprise was the significant upgrade of the Net::DNS perl-module. Nothing that a few minutes work didn't fix.

Now that I've upgraded, the SSL issue I had with redirections is no longer present. So it was a worthwhile thing to do.

18 April, 2015 12:00AM

April 17, 2015

hackergotchi for EvolvisForge blog

EvolvisForge blog

Tricks for using Googlemail at work

For those who similarly suffer from having to use Googlemail at work. If anyone else has more tricks like these, please do share.

Deactivate the spamfilter

The site admins can do that. Otherwise, you will have work-relevant eMails, for example from your own OTRS system, end up in Spam (where you don't see them, as their IMAP sucks) and be deleted without asking 30 days later. (AIUI, that is the only way to get eMails actually deleted from Google…)

Do not use their SMTP service

Use your own outgoing MTA. This brings back the, well, not feature but should-have-been-a-given-but-Google-doesn't-do-it-anyway behaviour that, when you write to a mailing list, you also get your own messages in your own INBOX.

Calendars…

I have no solution for this. I stopped using the Googlemail calendars because they didn't think it a problem that, when I accept an invitation in Kontact (KDEPIM as packaged in Debian sid), the organiser of the calendar item in the sender's calendar (for which I do not have write permission) changes to me (so the actual meeting organiser cannot change anything afterwards), and/or calendar items get doubled. I now run a local uw-imapd (forward-ported to sid by means of a binNMU) for sent-mail folders etc., and a local iCalendar directory for calendars.

17 April, 2015 01:54PM by Thorsten Glaser

April 16, 2015

hackergotchi for Daniel Pocock

Daniel Pocock

Debian Jessie release, 100 year ANZAC anniversary

The date scheduled for the jessie release, 25 April 2015, is also ANZAC day and the 100th anniversary of the Gallipoli landings. ANZAC day is a public holiday in Australia, New Zealand and a few other places, with ceremonies remembering the sacrifices made by the armed services in all the wars.

Gallipoli itself was a great tragedy. Australian forces were not victorious. Nonetheless, it is probably the best-remembered battle of all the wars. There is even a movie, Gallipoli, starring Mel Gibson.

It is also the 97th anniversary of the liberation of Villers-Bretonneux in France. The previous day had seen the world's first tank vs tank battle between three British tanks and three German tanks. The Germans won and captured the town. At that stage, Britain didn't have the advantage of nuclear weapons, so they sent in Australians, and the town was recovered for the French. The town has a rue de Melbourne and rue Victoria and is also the site of the Australian National Memorial for the Western Front.

It's great to see that projects like Debian are able to span political and geographic boundaries and allow everybody to collaborate for the greater good. ANZAC day might be an interesting opportunity to reflect on the fact that the world hasn't always enjoyed such community.

16 April, 2015 05:48PM by Daniel.Pocock

Andrew Shadura

Power button and logind

If you have configured your laptop's power button to act as a sleep button using acpid, then installed systemd or systemd-shim and pressed the button, only to find your laptop shutting down after it wakes up from sleep, set these options in /etc/systemd/logind.conf (logind also acts on the key press, and its HandlePowerKey action defaults to powering off, so both daemons react unless you tell logind to ignore the keys):

[Login]
HandlePowerKey=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
HandleLidSwitch=ignore
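
After editing the file, restarting logind should apply the change without a reboot (a sketch; on systemd-shim setups the exact service name may differ):

systemctl restart systemd-logind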

16 April, 2015 12:05PM

April 15, 2015

hackergotchi for Joachim Breitner

Joachim Breitner

Talk and Article on Monads for Reverse Engineering

In a recent project of mine, a tool to analyze and create files for the Ravensburger Tiptoi pen, I used two interesting Monads with good results:

  • A parser monad that, while parsing, remembers which parts of the file were used for what and provides, for example, an annotated hex dump.
  • A binary writer monad that allows you to reference and write out offsets to positions in the file that are only determined “later” in the monad, using MonadFix.

As that’s quite neat, I wrote a blog post for the German blog funktionale-programmierung.de about it, and also gave a talk at the Karlsruhe functional programmers group. If you know some German, enjoy; if not, wait until I have a reason to hold the talk in English. (As a matter of fact, I did hold the talk in English, but only spontaneously, so the text is in German only so far.)

15 April, 2015 11:32PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

HollySiz

Sometimes one stumbles upon stuff that touches one deeply. Granted, the topic of the first video from the artist I want to present to you now touched me quite naturally. But it made me take a closer look. This is about HollySiz. Yes, yet another French singer, but fortunately (for me) she sings mostly in English. :)

So here are the songs:

  • The Light: At first I wasn't even aware it's a music video. And the story is strong. I'm uncertain whether the story of Nils Pickert inspired the video, but it's lovely to see people getting it right. The parents' job is to support their kid in finding their own identity instead of defining it for them.
  • Better Than Yesterday: In the light of The Light everything else looks antique. So what's better than a video that actually does look antique. ;)
  • Tricky Game (feat. Sianna): I somehow like this version of the song better because it contains rap. But that might be just me. A catchy beat anyway.

Like always, enjoy! And take good care of your kids if you happen to have some.

15 April, 2015 10:57AM by Rhonda

hackergotchi for Michal Čihař

Michal Čihař

Packaging python-gammu

After Monday's release of the separated Gammu and python-gammu, the obvious next task was to get the new packages into distributions.

First I started with the Debian packages, which was quite easy: what used to be a quite complex CMake + Python package is now purely CMake, so it was mostly about removing stuff. Soon the updated Gammu package was uploaded to experimental. Once that was ready, I also updated the backports for Ubuntu, which are available in the Gammu PPA. Creating the new python-gammu package was a bit harder, as it is the first Python 3 compatible package I've created, but it's now ready and sitting in the NEW queue.

While working on the python-gammu package, I realized that some of the data used in the testsuite were missing from the tarball. While not critical, this is definitely not nice, so I decided to release python-gammu 2.1 today. It also includes fixes for some corner cases found by Coverity.
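
If you want to try the standalone module before the distribution packages land, building from the release tarball is straightforward (a sketch, assuming the Gammu library and headers, libgammu-dev on Debian, are already installed):

tar xf python-gammu-2.1.tar.bz2
cd python-gammu-2.1
python setup.py build
python setup.py install --user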

For openSUSE the packaging was quite easy as well: stripping the unneeded parts out of the Gammu package went smoothly, and it's now in the hardware project, with an SR to Factory pending. With python-gammu it turned out to be much harder, as the testsuite failed there with some strange error coming out of libdbi. After looking deeper into it, the problem is a new return type available in the Git snapshot that openSUSE is shipping. Fortunately producing a fix was quite easy, so the next Gammu upstream release will handle it properly, and the package in the hardware project is already patched. You can now use python-python-gammu from devel:languages:python; an SR to Factory is pending there as well.

Filed under: Debian English Gammu python-gammu SUSE Wammu

15 April, 2015 10:00AM by Michal Čihař (michal@cihar.com)