October 01, 2014

Matt Zimmerman

Join me in supporting The Ada Initiative

When I first read that Linux kernel developer Valerie Aurora would be changing careers to work full-time on behalf of women in open source communities, I never imagined it would lead so far so fast. Today, The Ada Initiative is a non-profit organization with global reach, whose programs have helped create positive change for women in a wide range of communities beyond open source. Building on this foundation, imagine how much more they can do in the next four years! That’s why I’m pledging my continuing support, and asking you to join me.

For the next 7 days, I will personally match your donations up to $4,096. My employer, Heroku (Salesforce.com), will match my donations too, so every dollar you contribute will be tripled!

My goal is that together we will raise over $12,000 toward The Ada Initiative’s 2014 fundraising drive.

Donate now

Since about 1999, I have been working in open source communities like Debian and Ubuntu, where women are vastly underrepresented even compared to the professional software industry. Like other men in these communities, I have struggled to learn what I could do to change this. Such a severe imbalance can only be addressed by systemic change, and I hardly knew where to begin. I worked to raise awareness by writing and speaking, and joined groups like Debian Women, Ubuntu Women and Geek Feminism. I worked on my own bias and behavior to avoid being part of the problem myself. But it never felt like enough, and sometimes it felt completely hopeless.

Perhaps worst of all, I saw too many women burning out from trying to change the system. It was often taxing just to participate as a woman in a male-dominated community, and the extra burden of activism seemed overwhelming. They were all volunteers, doing this work in evenings and weekends around work or study, and it took a lot of time, energy and emotional reserve to deal with the backlash they faced for speaking out about sexism. Valerie Aurora and Mary Gardiner helped me to see that an activist organization with full-time staff could be part of the solution. I joined the Ada Initiative advisory board in February 2011, and the board of directors in April.

Today, The Ada Initiative is making a difference not only in my community, but in my workplace as well. When I joined Heroku in 2012, none of the engineers were women, and we clearly had a lot of work to do to change that. In 2013, I attended AdaCamp SF along with my colleague Peter van Hardenberg, joining the first “allies track”, open to participants of any gender, for people who wanted to learn the skills to support the women around them. We’ve gone on to host two ally skills workshops of our own for Heroku employees, one taught by Ada Initiative staff and another by a member of our team, security engineer Leigh Honeywell. These workshops taught interested employees simple, everyday ways to take positive action to challenge sexism and create a better workplace for women. The Ada Initiative also helped us establish a policy for conference sponsorship which supports our gender diversity efforts. Today, Heroku engineering includes about 10% women and growing. The Ada Initiative’s programs are helping us to become the kind of company we want to be.

I attended the workshop with a group of Heroku colleagues, and it was a powerful experience to see my co-workers learning tactics to support women and intervene in sexist situations. Hearing them discuss power and privilege in the workplace, and the various “a-ha!” moments people had, was very encouraging and made me feel heard and supported.
– Leigh Honeywell

If you want to see more of these programs from The Ada Initiative, please contribute now:
Donate now


01 October, 2014 04:30PM by Matt Zimmerman

Holger Levsen

My LTS September

In the beginning of September I spent quite some time fixing bugs in the Debian Security Tracker, which now, thanks to the awesome CSS from Ulrike, looks really good and professional! There are still some bugs to fix and features I'd like to add, e.g. the ability to include and exclude (old)oldstable/lts/backports/nodsa/EOL everywhere. It was fun to squash #742382 #642987 #742855 #762214 #479727 #610220 #611163 and #755800!

And then I also discovered dgit, as in "I've used it for the first time". It was so great that I immediately did a backport of it and uploaded it to wheezy-backports.

So these are the uploads I made to squeeze-lts during the last month:

  • DLA 56-1 for wordpress, fixing CVE-2014-2053 CVE-2014-5204 CVE-2014-5205 CVE-2014-5240 CVE-2014-5265 CVE-2014-5266
  • DLA 57-1 for libstruts1.2-java, fixing CVE-2014-0114
  • DLA 60-1 for icinga, fixing CVE-2013-7108 and CVE-2014-1878
  • DLA 61-1 for libplack-perl, fixing CVE-2014-5269
  • DLA 62-1 for nss, fixing CVE-2014-1568
  • DLA 66-1 for apache2, fixing CVE-2013-6438 CVE-2014-0118 CVE-2014-0226 CVE-2014-0231

Plus I filed #762715, asking the devscripts maintainers to 'add an --lts option to dch', and #763339 against lintian, asking it to 'recognize "squeeze-lts" as suite'.

Here are three things you could do to contribute to Debian LTS:

Thanks to everybody supporting LTS already! :-)

01 October, 2014 09:04AM

Keith Packard

Chromium (the browser) and DRI3

I got a note on IRC a week ago that Chromium was crashing with DRI3.

The Google team working on Chromium eventually sent me a link to the bug report. That's secret Google stuff, so you won't be able to follow the link, even though it's a bug in a free software application when running on free software drivers.

There's a bug report in the freedesktop bugzilla which looks the same to me.

In both cases, the recommended “fix” was to switch from DRI3 back to DRI2. That's not exactly a great plan, given that DRI3 offers better security between GPU-using applications, which seems like a pretty nice thing to have when you're running random GL applications from the web.

Chromium Sandboxing

I'm not entirely sure how it works, but Chromium creates a process separate from the main browser engine to talk to the GPU. That process has very limited access to the operating system via some fancy library adventures. Presumably, the hope is that security bugs in the GL driver would be harder to leverage into a remote system exploit.

Debugging in this environment is a bit tricky, as you can't simply run chromium under gdb and expect to be able to set breakpoints in the GL driver. Instead, you have to run chromium with a magic flag which causes the GPU process to pause before loading the driver so you can connect to it with gdb and debug from there, along with a flag that lets you see crashes within the GPU process, and the usual flag that causes chromium to ignore the GPU blacklist, which seems to always include the Intel driver for one reason or another:

$ chromium --gpu-startup-dialog --disable-gpu-watchdog --ignore-gpu-blacklist

Once Chromium starts up, it will print out a message telling you to attach gdb to the GPU process and send that process a SIGUSR1 to continue it. Now you can happily debug and get a stack trace when the crash occurs.
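The attach-and-resume dance looks roughly like this ($GPU_PID is a placeholder for the process ID Chromium prints; depending on your gdb defaults you may also need to tell it to pass SIGUSR1 through instead of stopping on it):

$ gdb -p $GPU_PID
(gdb) handle SIGUSR1 pass nostop
(gdb) continue

Then, from another terminal, send the signal to resume the GPU process:

$ kill -USR1 $GPU_PID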

Locating the Bug

The bug manifested with a segfault at the first access to a DRI3-allocated buffer within the application. We've seen this problem in the past; whenever buffer allocation fails for some reason, the driver ignores the problem and attempts to de-reference through the (NULL) buffer pointer, causing a segfault. In this case, Chromium called glClear, which tried (and failed) to allocate a back buffer causing the i965 driver to subsequently segfault.

We should probably go fix the i965 driver to not segfault when buffer allocation fails, but that wouldn't provide a lot of additional information. What I have done is add some error messages in the DRI3 buffer allocation path which at least tell you why the buffer allocation failed. That patch has been merged to Mesa master, and should also get merged to the Mesa stable branch for the next stable release.

Once I had added the error messages, it was pretty easy to see what happened:

$ chromium --ignore-gpu-blacklist
[10618:10643:0930/200525:ERROR:nss_util.cc(856)] After loading Root Certs, loaded==false: NSS error code: -8018
libGL: pci id for fd 12: 8086:0a16, driver i965
libGL: OpenDriver: trying /local-miki/src/mesa/mesa/lib/i965_dri.so
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL error: DRI3 Fence object allocation failure Operation not permitted

The first two errors were just the sandbox preventing Mesa from using my GL configuration file. I'm not sure how that's a security problem, but it shouldn't harm the driver much.

The last error is where the problem lies. In Mesa, the DRI3 implementation uses a chunk of shared memory to hold a fence object that lets Mesa know when buffers are idle without using the X connection. That shared memory segment is allocated by creating a temporary file using the O_TMPFILE flag:

fd = open("/dev/shm", O_TMPFILE|O_RDWR|O_CLOEXEC|O_EXCL, 0666);

This call “cannot fail” as /dev/shm is used by glibc for shared memory objects, and must therefore be world writable on any glibc system. However, with the Chromium sandbox enabled, it returns EPERM.

Running Without a Sandbox

Now that the bug appears to be in the sandboxing code, we can re-test with the GPU sandbox disabled:

$ chromium --ignore-gpu-blacklist --disable-gpu-sandbox

And, indeed, without the sandbox getting in the way of allocating a shared memory segment, Chromium appears happy to use the Intel driver with DRI3.

Final Thoughts

I looked briefly at the Chromium sandbox code. It looks like it needs to know intimate details of the OpenGL implementation for every possible driver it runs on; it seems to contain a fixed list of all possible files and modes that the driver will pass to open(2). That seems incredibly fragile to me, especially when used in a general Linux desktop environment. Minor changes in how the GL driver operates can easily cause the browser to stop working.

01 October, 2014 06:51AM

Vincent Sanders

It is a bad plan that admits of no modification

I find it somewhat interesting that, thousands of years later, our society still uses Publilius Syrus' sententiae, though I imagine the tendency to leave well enough alone means such phrases stay in usage.

Marvell ARM system - Photo from Steve McIntyre
One weekend Steve McIntyre asked me if I could find a source of some 40mm fans for some systems, with some pretty strict requirements. They needed to be long-life and shift a lot of air to combat a persistent overheating issue.

I sat with him and went through Farnell's utterly hateful parametric web interface and eventually came up with a couple of options, which were very expensive. Only then did I stop and ask what the actual problem was.

Marvell ARM system Original internal cooling arrangement - Photo from Steve McIntyre
Steve showed me one of the Debian ARM buildd boxes which are Marvell development machines. These systems are powerful quad core machines housed in compact steel enclosures.

There is a single 40mm fan trying to provide cooling for the entire enclosure. When the units are placed horizontally and used intermittently this proves adequate. Unfortunately, when the systems are arranged vertically in a rack and run at full load continuously, they often overheat and have to be restarted. In addition, the small high speed fans need replacing frequently as their bearings wear out quickly.

Debian ARM buildd systems - Photo from Steve McIntyre
This was obviously causing some issues for the ARM Debian ports, which Steve wanted to rectify. After talking the problem through for a while we came to the conclusion that we could use much larger 60mm fans to blow air directly through the top of the case onto the cpu heatsink.

Larger fans can be run much more slowly to move a similar volume of air to the smaller 40mm fans, which gives a much longer service life.

Hole punch and Drilling template
Steve proceeded to order enough parts to allow us to modify all the Debian systems; this worked out cheaper than a single "special" 40mm high volume fan.

I acquired a rather large steel hole punch. I chose this tool because it produces a much superior finish to a hole cutter, and this project demanded a high level of finish (not to mention I loved having a valid excuse to own and use a huge allen key!)

If we had simply been modifying a single case I would have measured and marked up by hand. With the prospect of altering at least eight, I laser-cut a template from plywood, which Andy Simpkins took great glee in excessively annotating.

We also used the opportunity to add bolt holes to securely attach the 2.5 inch SATA drives instead of using sticky pads.

Steve and I modified a single system to begin with, both to check our alignment and to test the efficacy of the change. We were pleasantly surprised to discover that hoiby could now repeatedly do kernel compiles with all four cores flat out, which was not possible before. The measured CPU temperature, which had previously been around 90°C, did not rise above 40°C.

Steve and Andy on the assembly line
Steve, Andy and I then arranged a day where we took all the remaining units out of the rack at ARM, modified them and returned them. We used the facilities at Cambridge Makespace, where I am a member, to do the modifications.

I broke two 3mm drill bits and dulled a 4mm bit drilling all the holes. Roger Smith was good enough to loan us the use of his "Christmas tree bit" to ream the fan hole out to 16mm so we could thread the hole punch and cut the 60mm fan aperture out.

Six modified systems ready to be re-racked.
We managed to get quite an assembly line going and, in my opinion, the results look pretty professional.

It has been several months since we did this work and these systems continue to run without issue. To complete the story we can see some graphs courtesy of the DSA munin instance.

CPU load on arnold.debian.org
You can clearly see the huge drop in temperature at the end of Week 25, despite the continuously high CPU load. There is also only a single gap in the data after the changes (gaps indicate crashes where data was not recorded), whereas before there were frequent and extensive periods when the systems were simply unusable.

CPU Temperature of arnold.debian.org
One reason I continue to enjoy Debian so much is the wide variety of ways in which I can contribute, not only by maintaining my packages. Sometimes this kind of work does not receive the credit it deserves, and hopefully this post highlights a small part of the frantic paddling that goes on under the serene surface of the Debian project to keep things "just working".

01 October, 2014 01:05AM by Vincent Sanders (noreply@blogger.com)

Junichi Uekawa

Start of fourth quarter this year.

Start of fourth quarter this year. How is everything going ?

01 October, 2014 01:04AM by Junichi Uekawa

September 30, 2014

Lisandro Damián Nicanor Pérez Meyer

Qt5 in Jessie: we will release with 5.3.2

Qt 5.3.2 entered testing a few hours ago. This will be the version of Qt we release with Debian Jessie, and it happens to be a nice coincidence, because upstream focused on stability for the 5.3 branch.

I'll now focus on fixing as many bugs as possible and on backporting Qt5 to Wheezy.

Let me warn you: if you are an upstream for a Qt4-based project, be sure to be ready to switch to Qt5. If you are a maintainer of a Qt4-based project, you had better start asking your upstream to be ready for it :)

30 September, 2014 05:08PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

Gunnar Wolf

Diego Gómez: Imprisoned for sharing

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... Not only what I do day to day, but what I promote and believe to be right: Sharing academic articles.

Diego is a Colombian, working towards his Master's degree in conservation and biodiversity in Costa Rica. He is now facing up to eight years of imprisonment for... Sharing a scholarly article he did not author on Scribd.

Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and as the saying goes, they should benefit the public. And the public is everybody, is all of us.

And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early last year... It is exactly the same thing. Thankfully (although only after the fact), thousands of people strongly stood on Aaron's side in that demand. Please sign the EFF petition to help Diego, share this, and try to spread the word on the real-world need for Open Access mandates for academics!

Some links with further information:

30 September, 2014 02:01PM by gwolf

Raphaël Hertzog

My Debian LTS report for September

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

CVE triaging

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said that it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security-wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple java packages.
  • I reviewed many CVE on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t estimate that a security update is needed, usually because the problem is minor and/or because fixing it has more chances to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVE concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look up for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.

30 September, 2014 01:24PM by Raphaël Hertzog

Mario Lang

A simple C++11 concurrent workqueue

For a little toy project of mine (a Wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: in the past, whenever I toyed with threaded code, I ended up having a lot of deadlocks and generally being confused. It appears that I have finally understood enough of this craziness to be able to come up with the small helper class below.

The problem

We want to spread work amongst all available CPU cores. There are no dependencies between items in our work queue. So every thread can just pick up and process an item as soon as it is ready.

The solution

This simple implementation makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: you provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining items are processed and all background threads are joined.

The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9. Earlier tests have shown that std::thread::hardware_concurrency() always returned 1. I don't know when exactly GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9, it is usable. Prerequisite on Linux is a mounted /proc.

The number of maximum items per thread in the queue defaults to 1. If the queue is full, calls to the function operator will block.

So the most basic usage example is probably something like:

int main() {
  typedef std::string item_type;
  distributor<item_type> process([](item_type &item) {
    // do work
  });

  while (/* input */) process(std::move(/* item */));

  return 0;
}

That is about as simple as it can get, IMHO.

The code can be found in the GitHub project mentioned above. However, since the class template is relatively short, here it is.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template<typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor &operator=(distributor &&) = delete;

  ~distributor()
  {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
      notify_all();
    }
    for (auto &&thread: threads) thread.join();
  }

  void operator()(Type &&value)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }

private:
  template <typename Function>
  void consume(Function process)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        notify_one();
        lock.unlock();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};

If you have any comments regarding the implementation, please drop me a mail.

30 September, 2014 12:20PM by Mario Lang

Francois Marier

Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised, since every list email transits through there at some point in plain text.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.
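For reference, a postfix transport map entry is simply an address followed by a transport name. Assuming the transport defined for schleuder in master.cf by the packaged setup is named schleuder (check the README.Debian instructions you followed earlier for the actual name), the new line in /etc/postfix/transports would look like:

list@example.org    schleuder: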

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

30 September, 2014 05:30AM

Dirk Eddelbuettel

Rcpp 0.11.3

A new release 0.11.3 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package has been uploaded too.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 273 packages on CRAN depend on Rcpp for making analyses go faster and further.

This release brings a fairly large number of continued enhancements, fixes and polishing to Rcpp. These were provided by a total of seven different contributors---which is a new record as well.

See below for a detailed list of changes extracted from the NEWS file, but some highlights included in this release are

  • Several API cleanups, polishes and a pre-announced code removal
  • New InternalFunction interface, and new Timer functionality.
  • More robust functionality of Rcpp Attributes as well as a new dryRun option.
  • The Rcpp FAQ was updated, as was the main Description: in the DESCRIPTION file.
  • Rcpp.package.skeleton() can now deploy functionality from pkgKitten to create Rcpp packages that purr.

One sore point, however, is that we missed that packages using Rcpp Modules appear to require a rebuild. We are sorry for the inconvenience; this has highlighted a shortcoming in our fairly robust and extensive tests. While we test our packages against all known CRAN dependents, such tests check for the ability to compile and run freshly and not whether previously built packages still run. We intend to augment our testing in this direction to avoid a repeat occurrence of such a misfeature.

Changes in Rcpp version 0.11.3 (2014-09-27)

  • Changes in Rcpp API:

    • The deprecation of RCPP_FUNCTION_* which was announced with release 0.10.5 last year is proceeding as planned, and the file macros/preprocessor_generated.h has been removed.

    • Timer no longer records time between steps, but times from the origin. It also gains a get_timers(int) method that creates a vector of Timer objects that have the same origin. This is modelled on the Rcpp11 implementation and is more useful for situations where we use timers in several threads. Timer also gains a constructor taking a nanotime_t to use as its origin, and an origin method. This can be useful for situations where the number of threads is not known in advance but we still want to track what goes on in each thread.

    • A cast to bool was removed in the vector proxy code as inconsistent behaviour between clang and g++ compilations was noticed.

    • A missing update(SEXP) method was added thanks to pull request by Omar Andres Zapata Mesa.

    • A proxy for DimNames was added.

    • A no_init option was added for Matrices and Vectors.

    • The InternalFunction class was updated to work with std::function (provided a suitable C++11 compiler is available) via a pull request by Christian Authmann.

    • A new_env() function was added to Environment.h

    • The return value of range eraser for Vectors was fixed in a pull request by Yixuan Qiu.

  • Changes in Rcpp Sugar:

    • In ifelse(), the returned NA type was corrected for operator[].

  • Changes in Rcpp Attributes:

    • Include LinkingTo in DESCRIPTION fields scanned to confirm that C++ dependencies are referenced by package.

    • Add dryRun parameter to sourceCpp.

    • Corrected issue with relative path and R chunk use for sourceCpp.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was updated with respect to OS X issues.

    • A new entry in the Rcpp-FAQ clarifies the use of licenses.

    • Vignette build results are no longer copied to /tmp, to please CRAN.

    • The Description in DESCRIPTION has been shortened.

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function will now use pkgKitten package, if available, to create a package which passes R CMD check without warnings. A new Suggests: has been added for pkgKitten.

    • The modules=TRUE case for Rcpp.package.skeleton() has been improved and now runs without complaints from R CMD check as well.

  • Changes in Rcpp unit test functions:

    • Functions from the RUnit package are now prefixed with RUnit::

    • The testRcppModule and testRcppClass sample packages now pass R CMD check --as-cran cleanly with no NOTES or WARNINGS

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 September, 2014 01:39AM

September 29, 2014

Marco d'Itri

CVE-2014-6271 fix for Debian sarge, etch and lenny

Very old Debian releases like sarge (3.1), etch (4.0) and lenny (5.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug) and Florian Weimer's patch, which restricts the parsing of shell functions to specially named variables:

http://ftp.linux.it/pub/People/md/bash/

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

29 September, 2014 08:51AM

Jonathan Dowland

Letter to Starburst magazine

I recently read a few issues of Starburst magazine, which is good fun, but a brief mention of the Man Booker prize in issue 404 stoked the fires of the age-old SF-versus-mainstream argument, so I wrote the following:

Dear Starburst,

I found it perplexing that, in "Brave New Words", issue 404, whilst covering the Man Booker shortlist, Ed Fortune tried to simultaneously argue that genre readers "read broadly" yet only Howard Jacobson's novel would be of passable interest. Aside from the obvious logical contradiction, he is sadly overlooking David Mitchell's critically lauded and indisputably SF&F novel "The Bone Clocks", which it turned out was also overlooked by the short-listers. Still, Jacobson's novel made it, meaning SF&F represents 16% of the shortlist. Not too bad, I'd say.

All the best & keep up the good work!

As it happens I'm currently struggling through "J". I'm at around the half-way mark.

29 September, 2014 08:51AM

DebConf team

DebConf15 dates are set, come and join us! (Posted by DebConf15 team)

At DebConf14 in Portland, Oregon, USA, next year’s DebConf team presented their conference plans and announced the conference dates: DebConf15 will take place from 15 to 22 August 2015 in Heidelberg, Germany. On the Opening Weekend of 15/16 August, we invite members of the public to participate in our wide offering of content and events, before we dive into the more technical part of the conference during the following week. DebConf15 will also be preceded by DebCamp, a time and place for teams to gather for intensive collaboration.

A set of slides from a quick show-case during the DebConf14 closing ceremony provides a quick overview of what you can expect next year. For more in-depth information, we invite you to watch the video recording of the full session, in which the team provides detailed information on the preparations so far, location and transportation to the venue at Heidelberg, the different rooms and areas at the Youth Hostel (for accommodation, hacking, talks, and social activities), details about the infrastructure that is being worked on, and the plans around the conference schedule.

We invite everyone to join us in organising this conference. There are different areas where your help could be very valuable, and we are always looking forward to your ideas. Have a look at our wiki page, join our IRC channels and subscribe to our mailing lists.

We are also contacting potential sponsors from all around the globe. If you know any organisation that could be interested, please consider handing them our sponsorship brochure or contact the fundraising team with any leads.

Let’s work together, as every year, on making the best DebConf ever!

29 September, 2014 07:40AM by DebConf Organizers

September 28, 2014

Ean Schuessler

RoboJuggy at JavaOne

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and with David’s friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results.

Bruno is an avid puppeteer, a global organizer of java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden’s RSMedia and various versions of David’s hobby servo based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail, and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered, and the new Raspberry Pi B+ seemed like the perfect low power brain to make that happen. I like the Debian-based Raspbian distributions but had lately started using the “netinst” Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don’t need. I’d recommend anyone interested in duplicating our work to start their journey there:

Raspbian UA Net Installer

Robots seem like an embedded application but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions:

Setting up ROS Hydro on the Raspberry Pi

Building from source means that your whole install ends up being “isolated” (in ROS speak) and your file locations and build instructions end up being subtly different from a packaged install. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way, if you make a mess of things in later steps, you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:

sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz
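Restoring the image later is the same pipeline in reverse; double-check the device name first, since dd will happily overwrite whatever you point it at (the filename below stands in for whatever the date stamp produced):

gunzip -c /home/your_username/imageDDMMYY.gz | sudo dd bs=4M of=/dev/sdb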

Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi runs an ARM CPU so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future, because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn’t available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM.

Java 8 JDK for ARM

At this point you are ready to start installing the ROS Java packages. I’m pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for Raspberry Pi. I started by following these directions for ROS Java, with a few exceptions (you have to click the “install from source” link on the page to see the right stuff):

Installing ROS Java on Hydro

Now these instructions are good but this is a Pi running Debian and not an Ubuntu install. You won’t run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently “chain” workspaces in ROS but I didn’t understand this well enough to get it working so what I did was this:

> mkdir -p ~/rosjava 
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
> # Make sure we've got all rosdeps and msg packages.
> rosdep update 
> rosdep install --from-paths src -i -y

and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.
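The copy itself was nothing fancy; assuming the workspace paths used above, something like:

> cp -a ~/rosjava/src/* ~/ros_catkin_ws/src/
> cd ~/ros_catkin_ws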

> catkin_make_isolated --install

Like the main ROS install, this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same “build from source” installs there. This will let you try your steps out on a much faster system before you try them out on the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally shown up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch for that mistake. Here is a picture of the RSMedia when it first came out of the box (sorry that it’s rotated, I need to fix my WordPress install):

Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs, and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan and tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan and tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of eBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy to use and high level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board but that will be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won’t regret buying it.

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since it was in its heyday 5 years ago but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge based website here at http://rsmediadevkit.sourceforge.net.

That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you go, because they don’t take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note is that if all you want to do is solder onto the control board and not replace the head, then it’s feasible to solder the board in place without completely disassembling the unit. Here are some photos of the dis-assembly:

Now I also had to start adjusting the puppet head, building an armature for the motors to control it and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have learned about cardboard is that if you get something going with it and you need it to be a little more production-strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough, because the resin soaks through the fibers of the cardboard and hardens around them. You will want to do this in a well ventilated area, but it’s a great way to build super tough prototypes.

Another prototyping trick I can suggest is using a combination of Velcro and zipties to hook things together. The result is surprisingly strong and still easy to take apart if things aren’t working out. Velcro self-adhesive pads stick to rubber like magic, and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I’m telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.

As I was eating my chicken hearts I was also connecting the pan and tilt armature onto the puppet’s jaw and eye assembly. It took me most of the evening to get all this going but by about one in the morning things were starting to look good!

I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not “built in”. I had tried to follow the provided instructions but was not (and still am not) able to get that working.

Using “unofficial” messages with ROS Java

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there is a set of pins that provides a serial interface to the ARM based embedded Linux computer that controls the robot. To do that I followed these excellent instructions:

Connecting to the RSMedia Linux Console Port

After some sweaty time bent over a magnifying glass I had success:

I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner.

It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn’t get it to respond to commands, but I quickly realized I had flow control turned on. Once it was turned off I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource which let me get all kinds of body movements going:

A collection of useful commands for the RSMedia

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry’s latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going. Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work.

The last day was spent doing things that I wouldn’t be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature. He used a mix of oil paint and rubber cement, which stuck to the mask beautifully.

I bought battery packs for the USB Pi power and the 6v motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily, during all this my beautiful and ever-so-supportive girlfriend Becca had helped me get packed, or I probably wouldn’t have made it out the door.

Welcome to San Francisco

THIS ARTICLE IS STILL BEING WRITTEN

 

28 September, 2014 11:14PM by ean

Jonathan Dowland

Puppet and filesystem mounts

Well, not long after writing my last post I've found some time to write up some of my puppet adventures, sooner than I imagined...

Outside work, I sys-admin a VPS instance that is shared by a few friends. We recently embarked in a project to migrate to a different VPS instance and I took the opportunity to revisit how we managed home directories.

I've got all the disk space allocated to the VM set up as LVM physical volumes. This has proven very useful for later expansion: we can do it all live. Each user on the VM may have one or more UNIX accounts that they use. Therefore, in the old scheme, for the jon user, we mounted an allocation of disk space at /home/jons, put the account home directories under it at e.g. /home/jons/jon, symlinked /home/jon -> /home/jons/jon for brevity, and set that as the home field in the passwd entry. This worked surprisingly well, but I was always uncomfortable with having a symlink in the home path (and some software was, too).

For the new machine, I decided to try bind mounts. Short story: they just work. However, the mtab (and df output) can look a little cluttered, and mount order becomes quite important. To manage the set-up, I wrote a few puppet snippets. First, a convenience definition to make the actual bind-mounts a little less verbose.

define bindmount($device) {
  mount { $name:
    device  => $device,
    ensure  => mounted,
    fstype  => 'none',
    options => 'bind',
    dump    => 0,
    pass    => 2,
    require => File[$device],
  }
}

Once that was in place, we then needed to ensure that the directories to which the LVs were to be mounted, and to which the user's home would be bind-mounted, actually exist; we also needed to mount the underlying LV and set up the bind mount. The dependency chain is actually a graph, but with the majority of dependencies quite linear:

define bindmounthome() {
  file { ["/home/${name}s", "/home/${name}"]:
    ensure  => directory,
  } -> # depended upon by
  mount { "/home/${name}s":
    device  => "LABEL=${name}",
    ensure  => mounted,
    fstype  => 'ext4',
    options => 'defaults',
    dump    => 0,
    pass    => 2,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device  => "/home/${name}s/${name}",
  }
  file { "/home/${name}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${name}s"]],
  }
}

That covers the underlying mounts and the "primary" accounts. However, the point of this exercise was to support the secondary accounts for each user. There's a bit of repetition here, and with some refactoring both this and the preceding bindmounthome definition could be a bit shorter, but I'm not sure whether that would be at the expense of legibility:

define seconduser($parent) {
  file { "/home/${name}":
    ensure => directory,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${parent}s/${name}",
  }
  file { "/home/${parent}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${parent}s"]],
  }
}

I had to re-read the above a couple of times just now to convince myself that I hadn't missed the dependencies between the mount invocations towards the bottom, but they're there: so, puppet will always run the mount for /home/jons before /home/jons/jon. Since puppet is writing to the fstab, this means that the ordering is correct and a sequential start-up will work.
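Concretely, for the jon example the fstab entries that puppet generates would look something like this (a sketch reconstructed from the resources above, not copied from a live host):

LABEL=jon       /home/jons     ext4  defaults  0  2
/home/jons/jon  /home/jon      none  bind      0  2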

If you want anything cleverer than serialised, one-at-a-time mounting at boot, I think one would have to use something other than trusty-old fstab for the job. I'm planning to look at Systemd's mount unit type, but there's no rush as this particular host is still running sysvinit for the time being.

28 September, 2014 06:50PM

Clint Adams

Banana Pi is a real thing

Now that I've almost caught up with life after an extended stint on the West Coast, it's time to play.

Like Gunnar, I acquired a Banana Pi courtesy of LeMaker.

My GuruPlug (courtesy me) and my Excito B3 (courtesy the lovely people at Tor) are giving me a bit of trouble in different ways, so my intent is to decommission and give away the GuruPlug and Excito B3, leaving my DreamPlug and the Banana Pi to provide the services currently performed by the GuruPlug, Excito B3, and DreamPlug.

The Banana Pi is presently running Bananian on a 32G SDHC (Class 10) card. This is close to wheezy, and appears to have a mostly-sane default configuration, but I am not going to trust some random software downloaded off the Internet on my home network, so I need to be able to run Debian on it instead.

My preliminary belief is that the two main obstacles are Linux and U-Boot. Bananian 14.09 comes with Linux 3.4.90+ #1 SMP PREEMPT Fri Sep 12 18:13:45 CEST 2014 armv7l GNU/Linux, whatever that is, and U-Boot SPL 2014.04-10694-g2ae8b32 (Sep 03 2014 - 20:53:14). I don't yet know what the status of mainline/Debian support is.

Someone gave me a wooden cigar box to use as a case, which is not working out quite as hoped. I also found that my hack to power a 3.5" SATA drive does not work, so I'll either need to hammer on that some more or resolve to use a 2.5" drive instead.

memory:

             total       used       free     shared    buffers     cached
Mem:        993700      36632     957068          0       2248      11136
-/+ buffers/cache:      23248     970452
Swap:       524284       1336     522948

cpu:

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1192.96

processor       : 1
BogoMIPS        : 1197.05

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4

Hardware        : sun7i
Revision        : 0000
Serial          : 03c32de75055484880485278165166c9

28 September, 2014 06:13PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

What have I been up to?

It's been a little while since I've written about what I've been up to. The truth is I've been busy with moving house - and I'll write a bit more about that at another time. But aside from that there have been some bits and bobs.

I use a little tool called archivemail to tidy up old listmail (my policy is to retain 30 days of listmail for most lists). If I unsubscribe from a list, then eventually I end up with an empty mail folder corresponding to that list. I decided it would be nice to extend archivemail to delete mailboxes if, after the archiving has taken place, the mailbox is empty. Doing this properly means adding delete routines to Python's "mailbox" library, which is part of the Python standard library. I've therefore started work on a patch for Python.
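The shape of the idea, as a rough sketch (the stdlib mailbox library has no removal routine today, which is exactly the gap the patch addresses; this hand-rolls the deletion for a Maildir):

import mailbox
import os

def remove_maildir_if_empty(path):
    """Delete a Maildir folder, but only when no messages remain."""
    box = mailbox.Maildir(path, factory=None)
    try:
        if len(box) > 0:
            return False
    finally:
        box.close()
    # An empty Maildir is just its three (empty) subdirectories.
    for sub in ('new', 'cur', 'tmp'):
        os.rmdir(os.path.join(path, sub))
    os.rmdir(path)
    return True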

Since this is an enhancement, Python would only accept a patch for Python 3. Therefore, eventually, I would also have to port archivemail from Python 2 to 3. "archivemail" is basically abandonware at the moment, and the principal Debian maintainer is MIA. There was a release critical bug filed against it, so I joined the Debian Python team to co-maintain archivemail in Debian. I've worked around the RC bug but a proper fix is still to come.

In other Debian news, I've been mostly quiet. A small patch for squishyball to get it to build on Hurd, and a temporary fix patch for lhasa to get it to build on the build daemons for all architectures (problems with the test suite). All three of lhasa, squishyball and archivemail need a little bit of love to get them into shape before the jessie freeze.

I've had plans to write up some of the more interesting technical things I've been up to at work, but with the huge successes of the School we've been so busy I haven't had time. Hopefully you can soon look forward to some of our further adventures with puppet, including evaluating Shibboleth modules, some stuff about handling user directories, bind mounts and LVM volumes and actually publishing some of our more useful internal modules; I hope we will also (soon) have some useful data to go with our experiments with Linux LXC containers versus KVM-powered virtual machines in some of our use-cases. I've also got a few bits and pieces on Systemd to write up.

28 September, 2014 05:59PM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Community Data Science Workshops Post-Mortem

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered; unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

28 September, 2014 05:02AM by Benjamin Mako Hill

September 27, 2014

hackergotchi for DebConf team

DebConf team

Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

The annual Debian developer meeting took place in Portland, Oregon, 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured a lot of the main talks and discussions, both for streaming to remote attendees and for the Debian video archive.

Beyond the videos, presentations, and handouts, coverage also came from the attendees in blogs, posts, and project updates. We’ve gathered a few articles for your reading pleasure:

Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive.

Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered.

Noah Meyerhans’ adventuring to DebConf by train (Inter)netted some interesting IPv6 data for future road and rail warriors.

Hideki Yamane sent a gentle reminder for English speakers to speak more slowly.

Daniel Pocock posted of GSoC talks at DebConf14, highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator.

Thomas Goirand gives us some insight into a working task list of accomplishments and projects he was able to complete at DebConf14, from the OpenStack discussion to tasksel talks, and completion of some things started last year at DebConf13.

Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also shares the atmosphere of being able to interact directly with peers once a year.

Stefano Zacchiroli blogged about a talk he did on debsources which now has its own HACKING file.

Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor.

Elizabeth Krumbach Joseph’s in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance.

Lucas Nussbaum by way of a blog post releases the very first version of Debian Trivia modelled after the TCP/IP Drinking Game.

François Marier shares additional information and further discussion on Outsourcing your webapp maintenance to Debian.

Joachim Breitner gave a talk on Haskell and Debian and created a new tool for binNMUs of Haskell packages, which runs via cron job. The output is available for Haskell and for OCaml, and he still had a small amount of time to go dancing.

Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint on what was being shown.

Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. One of the topics at hand involved defining a canonical path where packages must be built and a BOF discussion on reproducible builds from where the conversation moved to discussions in both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were added to BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project, interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on irc.debian.org.

Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. The current jessie d-i needs some help with testing on less common architectures and languages, and release scheduling could be improved. Future plans: switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes and new architecture support. debian-cd: things are working well. Improvement discussions covered selecting which images to make (i.e. netinst, DVD, et al.), work in progress on HTTP download support in debian-cd, and regular live test builds. Other discussions and questions revolved around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help, as the team needs help with testing, bug-handling, and translations.

Holger Levsen reports on feedback about the feedback from his LTS talk at DebConf14. LTS has been well received, fills a demand, and people are expecting it to continue; however, this is not without a few issues, as Holger explains in greater detail: gatekeeper mechanisms are lacking, and contributions are needed for everything from finance to uploads. In other news, the security-tracker is now fixed to know about oldstable. Time is short for that fix, as once jessie is released the tracker will need to support stable, oldstable (which will be wheezy), and oldoldstable.

Jonathan McDowell’s summary of DebConf14 includes a fair perspective of the host city and the benefits of planning a good DebConf location. He also talks about the need for face time in the Debian project, as it correlates with and improves everyone’s ability to work together. DebConf14 also provided the chance to set a hard time frame for removing older 1024-bit keys from Debian keyrings.

Steve McIntyre posted a Summary from the “State of the ARM” BoF at DebConf14 with updates on the 3 current ports: armel, armhf and arm64. armel, which targets the ARM EABI soft-float ARMv4t processor, may eventually be going away, while armhf, which targets the ARM EABI hard-float ARMv7, is doing well as the cross-distro standard. Debian has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive, and it is hoped it will release with jessie, with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned, with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM’s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on irc.debian.org and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK.

Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf.

Joseph Bisch posted about meeting his GSoC mentors, attending and contributing to a keysigning event, and doing some work on debmetrics, which powers metrics.debian.net. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian.

Harlan Lieberman-Berg’s DebConf Retrospective shared the feel of DebConf, and detailed some of the work on debugging a build failure, work with the pkg-perl team on a few uploads, and work on a javascript slowdown issue on codeeditor.

Ana Guerrero López reflected on Ten years contributing to Debian.

27 September, 2014 07:40PM by DebConf Organizers

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Laptop Mode Tools 1.66

I am pleased to announce the release of Laptop Mode Tools at version 1.66.

This release fixes an important bug in the way Laptop Mode Tools is invoked: now, when users disable it in the config file, the tool will actually be disabled. Thanks to bendlas@github for narrowing it down. The GUI configuration tool has been improved, thanks to Juan. And there is a new power saving module for users with ATI Radeon cards. Thanks to M. Ziebell for submitting the patch.

Laptop Mode Tools development can be tracked @ GitHub

27 September, 2014 09:09AM by Ritesh Raj Sarraf

Niels Thykier

Lintian – Upcoming API making it easier to write correct and safe code

The upcoming version of Lintian will feature a new set of APIs that attempt to promote safer code. It is hardly a “ground-breaking discovery”, just a much-needed feature.

The primary reason for this API is that writing safe and correct code is complicated enough that people get it wrong (including yours truly on occasion).  The second reason is that I feel it is a waste having to repeat myself when reviewing patches for Lintian.

Fortunately, the issues these mistakes create are usually minor information leaks, often with no chance of exploiting them remotely before the owner reviews the output first[0].

Part of the complexity of writing correct code originates from the fact that Lintian must assume Debian packages to be hostile until otherwise proven[1]. Consider a simplified case where we want to read a file (e.g. the copyright file):

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  # BAD: This is an example of doing it wrong
  open(my $fd, '<', $info->unpacked($filename));
  ...;
  close($fd);
  return;
}

This has two trivial vulnerabilities[2].

  1. Any part of the path (usr, usr/share, …) can be a symlink to “somewhere else”, like /.
    1. Problem: Access to potentially any file on the system with the credentials of the user running Lintian.  But even then, Lintian generally never writes to those files, and the user has to (usually manually) disclose the report before any information leak can be completed.
  2. The target path can point to a non-file.
    1. Problem: Minor inconvenience by DoS of Lintian.  Examples include a named pipe, where Lintian will get stuck until a signal kills it.


Of course, we can do this right[3]:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
use Lintian::Util qw(is_ancestor_of);
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $root = $info->unpacked;
  my $path = $info->unpacked($filename);
  if ( -f $path and is_ancestor_of($root, $path)) {
    open(my $fd, '<', $path);
    ...;
    close($fd);
  }
  return;
}

Where “is_ancestor_of” is the only available utility to assist you currently.  It hides away some 10-12 lines of code to resolve the two paths and correctly assert that $root is an ancestor of $path.  Prior to Lintian 2.5.12, you would have to do that ancestor check by hand in each and every check[4].

In the new version, the correct code would look something like this:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $path = $info->index_resolved_path($filename);
  if ($path and $path->is_open_ok) {
    my $fd = $path->open;
    ...;
    close($fd);
  }
  return;
}

Now, you may wonder how that promotes safer code.  At first glance, the checking code is not a lot simpler than the previous “correct” example.  However, the new code has the advantage of being safer even if you forget the checks.  The reasons are:

  1. The return value is entirely based on the “file index” of the package (think: tar vtf data.tar.gz).  At no point does it use the file system to resolve the path.  Whether or not your malicious package triggers an undef warning based on the return value of index_resolved_path, it leaks nothing about the host machine.
    1. However, it does take safe symlinks into account and resolves them for you.  If you ask for ‘foo/bar’ and ‘foo’ is a symlink to ‘baz’ and ‘baz/bar’ exists in the package, you will get ‘baz/bar’.  If ‘baz/bar’ happens to be a symlink, then it is resolved as well.
    2. Bonus: You are much more likely to trigger the undef warning during regular testing, since it also happens if the file is simply missing.
  2. If you attempt to call “$path->open” without calling “$path->is_open_ok” first, Lintian can now validate the call for you and stop it on unsafe actions.

It also has the advantage of centralising the code for asserting safe access, so bugs in it only need to be fixed in one place.  Of course, it is still possible to write unsafe code.  But at least, the new API is safer by default and (hopefully) more convenient to use.

 

[0] Lintian.debian.org being the primary exception here.

[1] This is in contrast to e.g. piuparts, which very much trusts its input packages by handing the package root access (albeit chroot’ed, but still).

[2] And also a bug.  Not all binary packages have a copyright – instead some will have a symlink to another package.

[3] The code is hand-typed into the blog without prior testing (not even compile testing it).  The code may be subject to typos, brown-paper-bag bugs etc. which are all disclaimed (of course).

[4] Fun fact, our documented example for doing it “correctly” prior to implementing is_ancestor_of was in fact not correct.  It used the root path in a regex (without quoting the root path) – fortunately, it just broke lintian when your TMPDIR / LINTIAN_LAB contained certain regex meta-characters (which is pretty rare).


27 September, 2014 07:08AM by Niels Thykier

September 26, 2014

Richard Hartmann

Release Critical Bug report for Week 39

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1393
    • Affecting Jessie: 408 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 360 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 50 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 290 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

26 September, 2014 08:45PM by Richard 'RichiH' Hartmann

hackergotchi for Steve Kemp

Steve Kemp

Next week I shall be mostly in Kraków

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently, all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

26 September, 2014 05:20PM

Jakub Wilk

Pet peeves: debhelper build-dependencies (redux)

$ zcat Sources.gz | grep -o -E 'debhelper [(]>= 9[.][0-9]{,7}([^0-9)][^)]*)?[)]' | sort | uniq -c | sort -rn
    338 debhelper (>= 9.0.0)
     70 debhelper (>= 9.0)
     18 debhelper (>= 9.0.0~)
     10 debhelper (>= 9.0~)
      2 debhelper (>= 9.2)
      1 debhelper (>= 9.2~)
      1 debhelper (>= 9.0.50~)

Is it a way to protest against debhelper's current version scheme?

26 September, 2014 12:05PM

hackergotchi for Holger Levsen

Holger Levsen

20140925-reproducible-builds

Reproducible builds? I never did any - manually :)

I've never done a reproducible build attempt of any package, manually, ever. But what I have done now is set up reproducible builds on jenkins.debian.net, which will regularly build hundreds or thousands of packages, hopefully reproducibly, in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment.

So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, and so I gave it a try. Two hours later the basic implementation was working, and then it was an evening and morning of fine-tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working ever since.

What's the result? One job, reproducible_setup, will just create a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job only takes 3.5 minutes to run (debootstrapping from scratch), it's run daily.

And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security + privacy related packages, some build chains we have in Debian, libreoffice and X.org. Most of these jobs run for several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, e.g. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation".

So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one had tried to build d-i reproducibly before.

72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna 
6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox

What's also impressive: all packages for the newly introduced Cinnamon Desktop build reproducibly from the start!

The jenkins setup is configured via just three small files:

That's it, and that's enough to keep several cores busy for days. :-) But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in future (with some heuristics to schedule known-good packages less often...).

I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine jenkins.debian.net is running on since October 2012. I also want to say "many many thanks to Helmut" (Grohne) who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for providing notifications on IRC channels and last but not least thanks to everybody who contributed so that reproducible builds got this far! Keep up the jolly good work!

And if you happen to know failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about those, so they'll get regularly tested and appear on the radar, until finally bugs are filed, fixed and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for jessie+1. And then for jessie+1, hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? ;-) And maybe release jessie+2 with 100%?!? We will see! Even jessie already has quite some packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone...

So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do! :-)

Oh, and last but surely not least in my book: many thanks too to the nice people hosting me so friendly in the last days! Keep on rockin'!

26 September, 2014 10:34AM

Petter Reinholdtsen

How to test Debian Edu Jessie despite some fatal problems with the installer

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, user database, user home directories, central login and PXE boot of both clients without disk and the installer to install Debian Edu on machines with disk (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems break the installer and block testing, but it is possible to work around them and install anyway. Here is a recipe on how to get the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build was broken on Tuesday, so we are not getting a new ISO every 12 hours or so at the moment, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.
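If you prefer a non-interactive version of that tweak, a sed one-liner should have the same effect (assuming the first line of the script is the shebang):

sed -i '1a exit 0' /usr/bin/edu-eatmydata-install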

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out what your correct value would be; all my test machines need /dev/sda, so I have no advice if it does not fit your setup).

If you installed a profile including a graphical desktop, log in as root after the initial boot from hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.

I believe the ISO build will start working in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie-based installation ready before the distribution freeze in a month.

26 September, 2014 10:20AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

R and Docker

r and docker talk picture by @mediafly

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.

26 September, 2014 02:57AM

September 25, 2014

hackergotchi for Steve Kemp

Steve Kemp

Today I mostly removed python

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed, and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control, I had python installed solely for the lsb_release script!

So I hacked up a horrible replacement for lsb_release in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily, all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
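The idea, in sketch form (not my actual script, but close in spirit; it assumes /etc/os-release exists and ignores the less common flags):

#!/bin/sh
# Crude lsb_release stand-in: all names/numbers come from /etc/os-release.
. /etc/os-release

codename=${VERSION#*\(}
codename=${codename%\)*}

case "$1" in
    -i) echo "Distributor ID: $NAME" ;;
    -r) echo "Release:        $VERSION_ID" ;;
    -c) echo "Codename:       $codename" ;;
    -d) echo "Description:    $PRETTY_NAME" ;;
    *)  echo "usage: $0 -i|-r|-c|-d" >&2; exit 1 ;;
esac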

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and by accident I had no python-scripts installed I see no reason to keep it on the off-chance.

My biggest surprise of the day was that, even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

25 September, 2014 07:11PM

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Distributing third party applications via Docker?

Recently the discussion around how to distribute third party applications for "Linux" has become a topic of the hour again, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution provide a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end-users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit home at DebConf 14, where Linus voiced his frustrations with app distribution problems, and some of that was touched on by Valve as well. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has semantics for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third party applications should work on Linux.

Third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a docker image is built from this compiled folder, it is based on "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in the way that is comfortable for them.
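To make that concrete, the two Dockerfiles and the final export could look roughly like this (every image name, path and command here is hypothetical, taken from the scenario rather than from any existing image):

# Dockerfile used at build time, based on the fat development image:
FROM debian-app-dev:wheezy
COPY . /src
RUN cd /src && make && make install DESTDIR=/src/output

# Dockerfile placed into the compiled output folder, describing runtime:
FROM debian-app:wheezy
COPY . /opt/awesome-game
CMD ["/opt/awesome-game/bin/awesome-game"]

# Building and exporting the distributable file:
docker build -t awesome-game ./output
docker save awesome-game > awesome-game.docker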

The end user would download the game file (either through an app store app, an app store website, or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUIs to launch on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio or whether to mount a folder into the container where the game would be able to save its save files. Or even whether the application would be able to access X (or Wayland) at all.
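Stripped of the GUI sugar, the user-side flow would boil down to something like this (names and mount paths again hypothetical, driven by the app's metadata):

docker load < awesome-game.docker
docker run --rm \
    -v /run/user/1000/pulse:/run/user/1000/pulse \
    -v ~/.local/share/awesome-game:/home/app/saves \
    awesome-game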

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing part, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. Also the window manager can now track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute the same size file as before, or, if the purchase is going via some kind of app-store application, the layers that actually changed can be rsynced over individually thus creating a much faster update experience. Images with the same base can share data, this would encourage creation of higher level base images, such as "debian-app-gamegl:wheezy" that all GL game developers could use thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there was a security bug or some other issue that requires the base image to be updated, though that would not require a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, or to push everything - now package management too - into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in a different way for a different purpose, or replacing some part to create a system with radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I am sometimes getting an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions can compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

25 September, 2014 06:54PM by aigarius

hackergotchi for Jan Wagner

Jan Wagner

Redis HA with Redis Sentinel and VIP

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up the Redis replication was straightforward, as was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).
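In config terms that means keeping three settings in sync, roughly like this (password and addresses are of course placeholders):

# redis.conf (on the master and every slave):
requirepass s3cret
masterauth  s3cret

# sentinel.conf (on every Sentinel node):
sentinel monitor mymaster 192.0.2.10 6379 2
sentinel auth-pass mymaster s3cret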

The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel very often, which might have a performance impact.
The first idea was to use some kind of VRRP, as implemented in keepalived or something similar. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# 
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
# 
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.

This looks pretty good, and as a <role> is provided, I thought it would be a good idea to just call a script which evaluates this value and, based on it, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> didn't reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying, and I was close to giving up.
Fortunately I stumbled upon a (maybe) Chinese post about Redis Sentinel attempting the same thing I did. On second look I recognized that the decision was made on ${6}, which is <to-ip>: nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.
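The resulting script is as small as this sketch (interface, VIP and error handling trimmed down to the essentials; the seven arguments are the ones documented above):

#!/bin/sh
# client-reconfig-script: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
VIP='192.0.2.100/24'
IFACE='eth0'
NEW_MASTER="$6"

if ip -o -4 addr show dev "$IFACE" | grep -qF "inet $NEW_MASTER/"; then
    # This host is the new master: claim the VIP.
    ip addr add "$VIP" dev "$IFACE" 2>/dev/null || true
else
    # The new master is elsewhere: release the VIP if we hold it.
    ip addr del "$VIP" dev "$IFACE" 2>/dev/null || true
fi
exit 0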

Some notes about convergence: it takes roughly 6-7 seconds for the VIP to migrate over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this to happen rarely, we will design the application using our Redis setup to handle this (hopefully) rare scenario.

25 September, 2014 05:56PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

#bananapi → On how compressed files should be used

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a comparable (though much better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, I have not devoted more than an hour or so to the process!), but I do have a gripe on how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are downloaded as archive files:

0 gwolf@mosca『9』~/Download/banana$ ls -hl bananian-latest.zip \
> Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
> Scratch_For_BananaPi_v1.0.tgz
-rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52 bananian-latest.zip
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
-rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Especially when looking at their contents:

0 gwolf@mosca『14』~/Download/banana$ unzip -l bananian-latest.zip
Archive:  bananian-latest.zip
    Length      Date    Time    Name
---------  ---------- -----   ----
2032664576  2014-09-17 15:29   bananian-1409.img
---------                     -------
2032664576                     1 file
0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
> Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
> do tar tzvf $i; done
-rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
-rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
-rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file without archiving it? That is,

0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
-rw-r--r-- 1 gwolf gwolf 606M Aug 6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
-rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB+1×2GB files sitting on my hard drive, I just need to have several files ranging between 145M and I guess ~1GB. Then, it's as easy as doing:

0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: a fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... as it stands now, I suppose current prospective users of a Banana Pi won't fret about facing a standard Unix tool!

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

[update] Several people (thanks!) have contacted me stating that I use a bashism: The <(…) construct is specific to Bash. If you want to do this with any other shell, it can be done with a simple pipe:

$ xzcat bananian-1409.img.xz | dd of=/dev/sdd

That allows for less piping to be done on the kernel, and is portable between different shells. Also, a possibility would be:

$ xzcat bananian-1409.img.xz > /dev/sdd

Although that might not be desirable, as it avoids the block-by-block nature of dd. I'm not sure if it makes a real difference, but it's worth saying :)

And yes, some alternatives for not unarchiving the file — Here in the blog, an anon commenter suggests (respectively, for zip and .tar.gz files):

$ dd if=<(unzip -p bananian-latest.zip) of=/dev/sdd
$ dd if=<(tar -xOf Lubuntu_For_BananaPi_v3.1.1.tgz) of=/dev/sdd

And a commenter by IRC suggests:

$ paxtar -xOaf Raspbian_For_BananaPi_v3.1.tgz Raspbian_For_BananaPi_v3_1.img | sudo dd bs=262144 of=/dev/

Thanks!

25 September, 2014 04:37PM by gwolf

Petter Reinholdtsen

Suddenly I am the new upstream of the lsdvd command line tool

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc, in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and has a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer, both interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around on the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it in too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

25 September, 2014 09:20AM

Mike Hommey

So, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following is still allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise to the reader.

Update: Another thing that bugs me: Why is this feature even enabled in posix mode? (the mode you get from bash --posix, or, more importantly, when running bash as sh) After all, export -f is a bashism.

25 September, 2014 07:43AM by glandium

Russ Allbery

Review: Turn the Ship Around!

Review: Turn the Ship Around!, by L. David Marquet

Publisher: Portfolio
Copyright: 2012
ISBN: 1-101-62369-1
Format: Kindle
Pages: 272

Turn the Ship Around! (yes, complete with the irritating exclamation point in the title) is marketed to the business and management non-fiction market, which is clogged with books claiming to provide simple techniques to be a great manager or fix an organization. If you're like me, this is a huge turn-off. The presentation of the books is usually just shy of the click-bait pablum of self-help books. Many of the books are written by famous managers best known for doing horrible things to their staff (*cough* Jack Welch). It's hard to get away from the feeling that this entire class of books is an ocean of bromides covering a small core of outright evil.

This book is not like that, and Marquet is not one of those managers. It can seem that way at times: it is presented in a format that caters to short attention spans, with summaries of primary points at the end of every short chapter and occasionally annoying questions sprinkled throughout. I'm capable of generalizing information to my own life without being prompted by study questions, thanks. But that's just form. The core of this book is a surprisingly compelling story of Marquet's attempt to introduce a novel management approach into one of the most conservative and top-down of organizations: a US Navy nuclear submarine.

I read this book as an individual employee, and someone who has no desire to ever be a manager. But I recently changed jobs and significantly disrupted my life because of a sequence of really horrible management decisions, so I have strong opinions about, at least, the type of management that's bad for employees. A colleague at my former employer recommended this book to me while talking about the management errors that were going on around us. It did such a good job of reinforcing my personal biases that I feel like I should mention that as a disclaimer. When one agrees with a book this thoroughly, one may not have sufficient distance from it to see the places where its arguments are flawed.

At the start of the book, Marquet is assigned to take over as captain of a nuclear submarine that's struggling. It had a below-par performance rating, poor morale, and the worst re-enlistment rate in the fleet, and was not advancing officers and crew to higher ranks at anywhere near the level of other submarines. Marquet brought to this assignment some long-standing discomfort with the normal top-down decision-making processes in the Navy, and decided to try something entirely different: a program of radical empowerment, bottom-up decision-making, and pushing responsibility as far down the chain of command as possible. The result (as you might expect given that you can read a book about it) was one of the best-performing submarines in the fleet, with retention and promotion rates well above average.

There's a lot in here about delegated decision-making and individual empowerment, but Turn the Ship Around! isn't only about that. Those are old (if often ignored) rules of thumb about how to manage properly. I think the most valuable part of this book is where Marquet talks frankly about his own thought patterns, his own mistakes, and the places where he had to change his behavior and attitude in order to make his strategy successful. It's one thing to say that individuals should be empowered; it's quite another to stop empowering them (which is still a top-down strategy) and start allowing them to be responsible. To extend trust and relinquish control, even though you're the one who will ultimately be held responsible for the performance of the people reporting to you. One of the problems with books like this is that they focus on how easy the techniques presented in the book are. Marquet does a more honest job in showing how difficult they are. His approach was not complex, but it was emotionally quite difficult, even though he was already biased in favor of it.

The control, hierarchy, and authority parts of the book are the most memorable, but Marquet also talks about, and shows through specific examples from his command, some accompanying principles that are equally important. If everyone in an organization can make decisions, everyone has to understand the basis for making those decisions and understand the shared goals, which requires considerable communication and open discussion (particularly compared to a Navy ideal of an expert and taciturn captain). It requires giving people the space to be wrong, and requires empowering people to correct each other without blame. (There's a nice bit in here about the power of deliberate action, and while Marquet's presentation is more directly applicable to the sorts of physical actions taken in a submarine, I was immediately reminded of code review.) Marquet also has some interesting things to say about the power of, for lack of a better term, esprit de corps, how to create it, and the surprising power of acting like you have it until you actually develop it.

As mentioned, this book is very closely in alignment with my own biases, so I'm not exactly an impartial reviewer. But I found it fascinating the degree to which the management situation I left was the exact opposite of the techniques presented in this book in nearly every respect. I found it quite inspiring during my transition period, and there are bits of it that I want to read again to keep some of the techniques and approaches fresh in my mind.

There is a fair bit of self-help-style packaging and layout here, some of which I found irritating. If, like me, you don't like that style of book, you'll have to wade through a bit of it. I would have much preferred a more traditional narrative story from which I could draw my own conclusions. But it's more of a narrative than most books of this sort, and Marquet is humble enough to show his own thought processes, tensions, and mistakes, which adds a great deal to the presentation. I'm not sure how directly helpful this would be for a manager, since I've never been in that role, but it gave me a lot to think about when analyzing successful and unsuccessful work environments.

Rating: 8 out of 10

25 September, 2014 03:16AM

September 24, 2014

Laura Arjona

10 short steps to contribute translations to free software for Android

This small guide assumes that you know how to create a public repository with git (or another version control system). Some projects may use another VCS, Subversion or whatever; the process would be similar, although the commands will of course be different.

If you don't want to use any VCS, you can just download the corresponding file, translate it, and send it by email or to the BTS of the project, but the commands required are very easy, and you'll soon see that using git (or any VCS) is quite comfortable and less scary than it seems.

So, you were going to recommend a nice app that you use or found in F-Droid to your friend, but she does not understand English. Why not translate the app for her? And for everybody? It's a job that can be done in 15 minutes or so (Android apps have very short strings, few menus, and so on). Let's go!

1.- Search the app in the F-Droid website

You can do it going to the URL: https://f-droid.org/repository/browse/?fdfilter=wordofappname

Example: https://f-droid.org/repository/browse/?fdfilter=pomodoro

Then, open the details of the app, and find out where the source code is.

2.- Clone the source code

If you have an account in that forge, fork/clone the project into your account, and then clone your fork locally.

If you haven’t got an account in that forge, just clone the project locally.

git clone URLofTheProjectOrYourClone

3.- Locally, create a new branch and check it out

cd nameofrepo

git checkout -b Spanish

4.- Then, copy the “res/values” folder to “res/values-XX” (where XX is your language code)

cp -R ./res/values ./res/values-es

5.- Translate

Edit the “strings.xml” file in the “res/values-XX” folder, and change the English strings to your language (respecting the XML format).
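
For illustration, a single entry would change like this (the string name here is made up; keep the name attribute untouched and translate only the text between the tags):

<string name="start_button">Start</string>

becomes, in res/values-es/strings.xml:

<string name="start_button">Iniciar</string>

Strings marked with translatable="false" in the original file should simply be left out of your copy.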

6.- Translate other files, or delete them

If there are more files in that folder (e.g. “arrays.xml”), review them to see whether they contain translatable strings. If they do, translate them; if not, delete those copies from your values-XX folder (Android will fall back to the originals in values).

7.- Commit

When you are finished, commit your changes:

git add res/values-es/*

git commit -a

(The commit message can be “Spanish translation” or similar.)

8.- Push your changes to your public repo

If you didn’t create a public clone of the repo in a forge, create a public repo now, add it as a remote (git remote add), and push your local branches there.

git push --all

9.- Request a merge to the original repo

Use the web interface of the forge if it hosts both the original repo and your clone; otherwise, send an email or open an issue providing the URL of your repo. For example, open a new issue in the project’s BTS:

Title: Spanish translation available for merging

Body: Hi everybody.

Thanks for your work in "nameofapp".

I have completed a Spanish translation, it's available for review/merge in the Spanish branch of my repo:

https://urlofyourclone

Best regards

10.- Congratulations!

Translations are new features, and getting a new feature in your app for free is a great thing, so the app developer(s) will probably merge your translation soon.

Share your joy with your friends, so they begin to use the app you translated, and maybe become translators too!

Comments?

You can comment on this post in this pump.io thread.


Filed under: Tools, Writings (translations) Tagged: Android, Contributing to libre software, English, Free Software, libre software, translations

24 September, 2014 11:14PM by larjona

Julian Andres Klode

APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
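
To illustrate the general pattern (a sketch in Python only; the real fetcher methods are C++, and only the “_apt” user name is taken from this post), the drop has to happen in a strict order while the process is still root, and any failure must be fatal:

import os
import pwd

def drop_privileges(username="_apt"):
    entry = pwd.getpwnam(username)
    # Supplementary groups first, while we are still root; this is why
    # only the primary gid of the user is kept.
    os.setgroups([entry.pw_gid])
    # Then the group, then the user; after setuid() succeeds there is
    # no way back to root.
    os.setgid(entry.pw_gid)
    os.setuid(entry.pw_uid)
    if os.getuid() == 0 or os.geteuid() == 0:
        raise RuntimeError("privilege drop failed, aborting")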

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We plan to add chroot() and seccomp sandboxing later on, to further reduce the attack surface when handling untrusted files and parsing protocols.


Filed under: Uncategorized

24 September, 2014 09:06PM by Julian Andres Klode

Vincent Sanders

I wanted to go to Portland because it's a really good book town.

Plane at Heathrow terminal 5 taking me to America for Debconf 14
Patti Smith is right: more than any other US city I have visited, Portland feels different. Although living in Cambridge, which sometimes feels like the place where books were invented, might give me a warped sense of place.

Jo McIntyre getting on the tram at PDX
I have visited Portland a few times previously and I feel comfortable every time I arrive at PDX. Sure, the place still suffers from the American obsession with the car, but as in New York you can rely on public transport to get about.

On this occasion my visit was for the Debian Conference, which I was excited to attend having missed the previous one in Switzerland. This time the conference changed its format, running for 10 days and mixing the developer time in with the more formal sessions.

The opening session gave Steve McIntyre and myself the opportunity to present a small token of our appreciation to Russ. The keynote speakers that afternoon were all very interesting, with both Stefano Zacchiroli and Gabriella Coleman giving food for thought on two very different subjects.

The sponsored accommodation rooms were pleasant
Several conferences in the past have experienced issues with sponsored accommodation and food, I am very pleased to report that both were very good this time. The room I was in had a small kitchen area, en-suite bathroom, desks and most importantly comfortable beds.

Andy and Patty in the Ondine dining area
The food provision was in the form of a buffet in the Ondine facility. The menu was not greatly varied but catered to all requirements including vegetarian and gluten free diets.

Neil, Rob, Jo, Steve, Neil, Daniel and Andy dining under the planes
Some of us went on a visit to the Evergreen air and space museum to look at some rare aircraft and rockets. I can thoroughly recommend a visit if you are in the area.

These are just the highlights of the week though; the time in the hack-labs was productive too, with several practical achievements including:
- Uploading new packages, reducing the bug count
- Getting an updated key into the Debian keyring.

Overall I had a thoroughly enjoyable time and got a lot out of the conference this year. The new format suited me surprisingly well and as usual the social side was as valuable as the practical.

I hope the organisers have recovered enough to appreciate just how good a job they did and not get hung up on the small number of things that went wrong when the majority of things went perfectly to plan.

24 September, 2014 09:37AM by Vincent Sanders (noreply@blogger.com)

Russell Coker

Cheap 3G Data in Australia

The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access, the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VoIP. It’s also not very useful for YouTube and other online video. There is a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house; it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40, which have to be used within a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers, the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month. Given that you can expect Telstra to be faster and have better coverage, it’s a really good deal at that usage level.
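
As a rough sanity check on those numbers, here is a small sketch normalising the offers mentioned above (prices in AUD as quoted; “G/month” assumes the allowance is spread evenly over the validity period):

# Compare cost per GB and implied monthly allowance for each offer.
offers = {
    "Amaysim 10G/365d": (99.90, 10, 12.0),   # price, GB, months of validity
    "Virgin 6G/30d":    (30.00, 6, 1.0),
    "Virgin 12G/30d":   (40.00, 12, 1.0),
    "Telstra 10G/90d":  (100.00, 10, 3.0),
}
for name, (price, gb, months) in sorted(offers.items()):
    print("%-18s $%5.2f/GB, %4.1fG/month if fully used" %
          (name, price / gb, gb / months))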

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Amaysim Android app and a web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G (AKA LTE) connection. But given that 3G is already fast enough to burn through expensive amounts of data, 4G doesn’t seem necessary to me. I’ve done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven’t felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access, and most people have access to old smartphones.

24 September, 2014 07:06AM by etbe

hackergotchi for Matthew Garrett

Matthew Garrett

My free software will respect users or it will be bullshit

I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


24 September, 2014 06:59AM

Robert Collins

what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users, an API for it that they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that?”

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode, and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need at the beginning but might not be broadly available, leaving us dependent on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition able to replace something entirely if they implement the existing API [and win the hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead, have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out, as a single cohesive entity, the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered, etc. This team wouldn’t be the TC: they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then suggest:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that with large N we have N² communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.


24 September, 2014 05:11AM

Russ Allbery

Review: 2014 Hugos: Short Story Nominees

Review: 2014 Hugos: Short Story Nominees, edited by Loncon 3

Publisher: Loncon 3
Copyright: 2014
Format: Kindle

This is a bit of a weird "book review," since this is not a book. Rather, it's the collection of Hugo-nominated short stories for the 2014 Hugos (given for works published in 2013) at Loncon 3, the 2014 Worldcon. As such, the "editor" is the pool of attendees and supporting members who chose to nominate works, all of which had been previously edited by other editors in their original publication.

This is also not something that someone else can acquire; if you were not a supporting or attending member, you didn't get the voting packet. But I believe all of the stories here are available on-line for free in some form, a short search away.

"If You Were a Dinosaur, My Love" by Rachel Swirsky: The most common complaint about this story is that it's not really a story, and I have to agree. It's a word image of an alternate world in which the narrator's love is a human-sized dinosaur, starting with some surreal humor and then slowly shifting tone as it reveals the horrible event that's happened to the narrator's actual love, and that's sparked the wish for her love to have claws and teeth. It's reasonably good at what it's trying to do, but I wanted more of a story. The narrator's imagination didn't do much for me. (5)

"The Ink Readers of Doi Saket" by Thomas Olde Heuvelt: At least for me, this story suffered from being put in the context of a Hugo nominee. It's an okay enough story about a Thai village downstream from a ritual that involves floating wishes down the river, often with offerings in the improvised small boats. The background of the story is somewhat cynical: the villagers make some of the wishes come true, sort of, while happily collecting the offerings and trying to spread the idea that the wishes with better offerings are more likely to come true. The protagonist follows a familiar twist: he actually can make wishes come true, maybe, but is very innocent about his role in the world.

This is not a bad story, although stories written by people with western-sounding names about non-western customs worry me, and there were a few descriptions and approaches here (such as the nickname translations in footnotes and the villager archetypes) that made my teeth itch. But it is not a story that belongs on the Hugo nomination slate, at least in my opinion. It's either cute or mildly irritating, depending on one's mood when one meets it, not horribly original, and very forgettable. (5)

"Selkie Stories Are for Losers" by Sofia Samatar: I really liked this story for much of its length. It features a couple of young, blunt, and bitter women, and focuses on the players in the typical selkie story that don't get much attention. The selkie's story is one of captivity or freedom; her lover's story is the inverse, the captor or the lover. But I don't recall a story about the children before, and I think Samatar got the tone right. It has the bitterness of divorce and abandonment mixed with the disillusionment of fantasy turned into pain.

My problem with this story is the ending, or rather, the conclusion, since the story doesn't so much end as stop. There's a closing paragraph that gives some hint of the shape to come, but it gave me almost no closure, and it didn't answer any of the emotional questions that the rest of the story raised for me. I wanted something more, some sort of epiphany or clearer determination. (7)

"The Water That Falls on You from Nowhere" by John Chu: This was by far my favorite of the nominees, which is convenient since it won. I thought it was the only nominee that felt in the class of stories I would expect to win a Hugo.

I think this story needs one important caveat up front. The key conceit of the story is that, in this world, water falls on you out of nowhere if you tell any sort of lie. It does not explore the practical impact on that concept for the broader world. That didn't bother me; for some reason, I wasn't really expecting it to do so. But it did bother several other people I've seen comment on this story. They were quite frustrated that the idea was used primarily to shape a personal and family emotional dilemma, not to explore the impact on the world. So, go into this with the right expectations: if you want world-building or deep exploration of a change in physical laws, you will want a different story.

This story, instead, is a beautiful gem about honesty in relationships, about communication about very hard things and very emotional things, about coming out, about trusting people, and about understanding people. I thought it was beautiful. If you read Captain Awkward, or other discussion of how to deal with difficult families and the damage they cause to relationships, seek this one out. It surprised me, and delighted me, and made me cry in places, and I loved the ending. It's more fantasy than science fiction, and it uses the conceit as a trigger for a story about people instead of a story about worlds and technology, but I'm still very happy to see it win. (9)

Rating: 7 out of 10

24 September, 2014 03:46AM

September 23, 2014

hackergotchi for Steve Kemp

Steve Kemp

Waiting for features upstream

I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.

One thing that has always bothered me is that when books are imported, their ratings come along too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.

I've always regarded ratings as things personal to me, rather than attributes of a book itself; as my tastes might not match yours, and vice-versa.

On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin to automatically zero the ratings of books as they are imported into the collection.
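
A sketch of such a plugin, modeled on the FileTypePlugin example in Calibre's plugin documentation (this is not the actual plugin described here, just the documented shape of one):

# Hypothetical Calibre file type plugin that zeroes ratings on import.
from calibre.customize import FileTypePlugin

class ZeroRatings(FileTypePlugin):
    name            = 'Zero Ratings on Import'
    description     = 'Reset the rating of imported books to zero'
    supported_types = ['epub', 'mobi']
    version         = (0, 1, 0)
    on_import       = True   # run when a book is added to the library

    def run(self, path_to_ebook):
        from calibre.ebooks.metadata.meta import get_metadata, set_metadata
        ext = path_to_ebook.rpartition('.')[-1].lower()
        book = open(path_to_ebook, 'r+b')
        mi = get_metadata(book, ext)
        mi.rating = 0
        set_metadata(book, mi, ext)
        return path_to_ebook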

Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.

Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

$ find ~/foo -xattr user.comment
/home/skx/foo/bar/t.txt
/home/skx/foo/bar/xc.txt
/home/skx/foo/bar/x.txt

There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.
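
For anyone who does not want to rebuild findutils, a rough equivalent can be had from Python's standard library (os.listxattr is available on Linux since Python 3.3); this is a stopgap sketch, not a substitute for proper find support:

import os

def find_with_xattr(top, attr="user.comment"):
    # Walk the tree and print files carrying the given attribute name.
    for root, dirs, files in os.walk(top):
        for name in files:
            path = os.path.join(root, name)
            try:
                if attr in os.listxattr(path):
                    print(path)
            except OSError:
                pass  # unreadable file or filesystem without xattrs

find_with_xattr(os.path.expanduser("~/foo"))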

Maybe one day extended attributes will be taken seriously. (rsync, cp, etc will preserve them. I'm hazy on the compatibility with tar, but most things seem to be working.)

23 September, 2014 08:42PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Can printing be so hard‽

Dear lazyweb,

I am tired of finding how to get my users to happily print again. Please help.

Details follow.

Several years ago, I configured our Institute's server to provide easy, nifty printing support for all of our users. Using Samba+CUPS, I automatically provided drivers to Windows client machines, integration with our network user scheme (allowing for group authorization — that means you can only print to your designated printer), and flexible printer management (i.e. I can change printers on the server side without the users even noticing — great when we get new hardware or printers get sent off for repairs!)...

Then, this year the people in charge of client machines in the institute decided to finally ditch WinXP licenses and migrate to Windows 7. Sweet! How can it hurt?

Oh, it can hurt. Terribly.

Windows 7 uses a different driver model, and after quite a bit of hair loss, I was not able to convince Samba to deliver drivers to Win7 (FWIW, I think we are mostly using 64 bit versions). Not only that, it also barfs when we try to install drivers manually and print to a share. And of course, it barfs in the least useful way, so it took me quite a bit of debugging and Web reading to find out it was not only my fault.

So, many people have told me that Samba (or rather, Windows-type networking) is no longer regarded as a good idea for printing. The future is here, and it's called IPP. And it is simpler, because Windows can talk directly with CUPS! Not only that, CUPS allows me to set valid users+groups to each printer. So, what's there to lose?

Besides time, that is. It took me some more hair pulling to find out that Windows 7 is shipped by default (at least in the version I'm using) with the Internet Printing Server feature disabled. Duh. OK, enable it, and... Ta-da! It works with CUPS! Joy, happiness!

Only that... It works only when I use it with no authentication.

Windows has an open issue, with its corresponding hotfix even, because Win7 and 2008 fail to provide user credentials to print servers...

So, yes, I can provide site-wide printing capabilities, but I still cannot provide per-user or per-group authorization and accounting, which are needed here.

I cannot believe this issue cannot be solved under Windows 7, several years after it hit the market. Or am I just too blunt and cannot find an obvious solution?

Dear lazyweb, I did my homework. Please help me!

23 September, 2014 06:23PM by gwolf

Enrico Zini

pressure

Pressure

I've just stumbled on this bit that seems relevant to me:

Insist on using objective criteria

The final step is to use mutually agreed and objective criteria for evaluating the candidate solutions. During this stage they encourage openness and surrender to principle not pressure.

http://www.wikisummaries.org/Getting_to_Yes

I find the concept of "pressure" very relevant, and I like the idea of discussions being guided by content rather than pressure.

I'm exploring the idea of filing under this concept of "pressure" most of the things described in code of conducts, and I'm toying with looking at gender or race issues from the point of view of making people surrender to pressure.

In that context, most codes of conduct seem to be giving a partial definition of "pressure". I've been uncomfortable at DebConf this year, because the conference PG12 code of conduct would cause me trouble for talking about what lessons Debian can learn from consent culture in BDSM communities, but it would still allow situations in which people would have to yield to pressure, as long as the pressure was applied while avoiding the behaviours blacklisted by the CoC.

Pressure could be the phrase "you are wrong" without further explanation, spoken by someone with more reputation than I have in a project. It could be someone with the time for writing ten emails a day discussing with someone with barely the time to write one. It could be someone using elaborate English discussing with someone who needs to look up every other word in a dictionary. It could be just ignoring emails from people who have issues different than mine.

I like the idea of having "please do not use pressure to bring your issues forward" written somewhere, rather than spending time blacklisting all possible ways of pressuring people.

I love how the Diversity Statement is elegantly getting all this where it says: «We welcome contributions from everyone as long as they interact constructively with our community.»

However, I also find it hard not to fall back to using pressure, even just for self-preservation: I have often found myself in the situation of having the responsibility to get a job done, and not having the time or emotional resources to even read the emails I get about the subject. All my life I've seen people in such a situation yell "shut up and let me work!", and I feel a burning thirst for other kinds of role models.

A CoC saying "do not use pressure" would not help me much here, but being around people who do that, learning to notice when and how they do it, and knowing that I could learn from them, that certainly would.

If you can link to examples, I'd like to add them here.

23 September, 2014 02:18PM

Dariusz Dwornikowski

debrfstats software for RFS statistics

Last time I said that I would release the software I used to make the RFS stats plots. You can find it in my github repo - github.com/tdi/debrfstats.

The software contains a small class to get the data needed to generate the plots, as well as to do some simple bug analysis. It also contains an R script to make plots from a CSV file. For now debrfstats uses the SOAP interface to Debbugs, but I am working on adding a UDD data source.

The software is written in Python 2 (SOAPpy does not come in a Python 3 flavour); some usage examples are in the main.py file in the repository.

If you have any questions or wishes for debrfstats do not hesitate to contact me.

23 September, 2014 09:35AM by Dariusz Dwornikowski

hackergotchi for Keith Packard

Keith Packard

easymega-118k

Neil Anderson Flies EasyMega to 118k' At BALLS 23

Altus Metrum would like to congratulate Neil Anderson and Steve Cutonilli on the success of their two stage rocket, “A Money Pit”, which flew on Saturday the 20th of September on an N5800 booster followed by an N1560 sustainer.

“A Money Pit” used two Altus Metrum EasyMega flight computers in the sustainer, each one configured to light the sustainer motor and deploy the drogue and main parachutes.

Safely Staged After a 7 Second Coast

After the booster burned out, the rocket coasted for 7 seconds, slowing to 250m/s, at which point EasyMega was programmed to light the sustainer. As a back-up, a timer was set to light the sustainer 8 seconds after booster burn-out. In both cases, sustainer ignition would have been inhibited if the rocket had tilted more than 20° from vertical. During the coast, the rocket flew from 736m to 3151m, with speed going from 422m/s down to 250m/s.

This long coast, made safe by EasyMega's quaternion-based tilt sensor, allowed this flight to reach a spectacular altitude.
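
As an illustration of the idea (standard quaternion math, not EasyMega's actual firmware), the tilt away from vertical falls straight out of an orientation quaternion by rotating the vertical unit vector:

import math

def tilt_degrees(w, x, y, z):
    # z-component of the rotated vertical axis; for a unit quaternion
    # (w, x, y, z) this is 1 - 2*(x^2 + y^2).
    cos_tilt = 1.0 - 2.0 * (x * x + y * y)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_tilt))))

# e.g. inhibit staging while tilt_degrees(...) > 20.0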

Apogee Determined by Accelerometer

Above 100k', the MS5607 barometric sensor is out of range. However, as you can see from the graph, the barometric sensor continued to return useful data. EasyMega doesn't expect that to work, and automatically switched to accelerometer-only apogee determination mode.

Because off-vertical flight will under-estimate the time to apogee when using only an accelerometer, the EasyMega boards were programmed to wait for 10 seconds after apogee before deploying the drogue parachute. That turned out to be just about right; the graph shows the barometric data leveling off right as the apogee charges fired.

Fast Descent in Thin Air

Even with the drogue safely fired at apogee, the descent rate rose to over 200m/s in the rarefied air of the upper atmosphere. With increasing air density, the airframe slowed to 30m/s when the main parachute charge fired at 2000m. The larger main chute slowed the descent further to about 16m/s for landing.

23 September, 2014 04:33AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.4.450.1.0

Continuing with his standard pace of approximately one new version per month, Conrad released a new minor release of Armadillo a few days ago. As before, I had created a GitHub-only pre-release which was tested against all eighty-seven (!!) CRAN dependents of our RcppArmadillo package and then uploaded RcppArmadillo 0.4.450.0 to CRAN.

The CRAN maintainers pointed out that under the R development release, a NOTE was issued concerning the C library's rand() call. This is a pretty new NOTE, but it means using the (sometimes poor quality) rand() generator is now a no-no. Now, Armadillo, being as robustly engineered as it is, offers a new random number generator based on C++11 as well as a fallback generator for those unfortunate enough to live with an older C++98 compiler. (I would like to note here that I find Conrad's continued support for both C++11, offering very useful modern language idioms, and the fallback code, allowing continued deployment and usage by those constrained in their choice of compilers, rather exemplary --- because contrary to what some people may claim, it is not a matter of one or the other. C++ always was, and continues to be, a multi-paradigm language which can easily be supported by several standards. But I digress...)

In any event, one cannot argue with CRAN about their prescription of a C++98 compiler. So Conrad and I discussed this over email and came up with a scheme where a user package (such as RcppArmadillo) can provide an alternate generator which Armadillo then deploys. I implemented a first solution, which Conrad then altered and incorporated into a revised version 4.450.1 of Armadillo. I packaged, and have now uploaded, that version as RcppArmadillo 0.4.450.1.0 to both CRAN and Debian.

Besides the RNG change already discussed, this release brings a few smaller changes from the Armadillo side. These are detailed below in the extract from the NEWS file. On the RcppArmadillo side, we now have support for pkgKitten which is both very exciting and likely the topic of another blog post with an example of creating an RcppArmadillo package that purrs. In the process, I overhauled and polished how new packages are created by RcppArmadillo.package.skeleton(). An upcoming blog post may provide an example.

Changes in RcppArmadillo version 0.4.450.1.0 (2014-09-21)

  • Upgraded to Armadillo release Version 4.450.1 (Spring Hill Fort)

    • faster handling of matrix transposes within compound expressions

    • expanded symmatu()/symmatl() to optionally disable taking the complex conjugate of elements

    • expanded sort_index() to handle complex vectors

    • expanded the gmm_diag class with functions to generate random samples

  • A new random-number implementation for Armadillo uses the RNG from R as a fallback (when C++11 is not selected so the C++11-based RNG is unavailable) which avoids using the older C++98-based std::rand

  • The RcppArmadillo.package.skeleton() function was updated to only set an "Imports:" for Rcpp, but not RcppArmadillo which (as a template library) needs only LinkingTo:

  • The RcppArmadillo.package.skeleton() function will now prefer pkgKitten::kitten() over package.skeleton() in order to create a working package which passes R CMD check.

  • The pkgKitten package is now a Suggests:

  • A manual page was added to provide documentation for the functions provided by the skeleton package.

  • A small update was made to the package manual page.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 September, 2014 03:00AM

September 22, 2014

hackergotchi for Gunnar Wolf

Gunnar Wolf

One month later: How is the set of Debian keyrings faring?

OK, it's almost one month since we (the keyring-maintainers) gave our talk at DebConf14; how are we faring regarding key transitions since then? You can compare the numbers (the graphs, really) to those in our DC14 presentation.

Since the presentation, we have had two keyring pushes:

First of all, the Non-uploading keyring is all fine: As it was quite recently created, and as it is much smaller than our other keyrings, it has no weak (1024 bit) keys. It briefly had one in 2010-2011, but it's long been replaced.

Second, the Maintainers keyring: In late July we had 222 maintainers (170 with >=2048 bit keys, 52 with weak keys). By the end of August we had 221: 172 and 49 respectively, and by September 18 we had 221: 175 and 46.

As for the Uploading developers, in late July we had 1002 uploading developers (481 with >=2048 bit keys, 521 with weak keys). By the end of August we had 1002: 512 and 490 respectively, and by September 18 we had 999: 531 and 468.

Please note that these numbers do not say directly that six DMs or that 50 uploading DDs moved to stronger keys, as you'd have to factor in new people being added, keys migrating between different keyrings (mostly DM⇒DD), and people retiring from the project; you can get the detailed information looking at the public copy of our Git repository, particularly of its changelog.

And where does that put us?

Of course, I'm very happy to see that the lines in our largest keyring have already crossed: we now have more people with >=2048 bit keys. And a lot of work went into getting this processing done! But that still means that, in order not to lock a large proportion of Debian Developers and Maintainers out of the project, we have a real lot of work left to do. We would like to keep the replacement slope high (because, remember, on January 1st we will remove all small keys from the keyring).

And yes, we are willing to do the work. But we need you to push us for it: we need you to get a new key created, to gather enough (two!) DD signatures on it, and to request a key replacement via RT.

So, by all means: Do keep us busy!


22 September, 2014 06:13PM by gwolf

hackergotchi for Konstantinos Margaritis

Konstantinos Margaritis

EfikaMX updated wheezy and jessie images available

A while ago, I promised some people in the powerdeveloper.org forum that I would provide bootable armhf images for wheezy, but most importantly for jessie with an updated kernel. After a delay - I did have the images ready and working, but had to clean them up a bit - I decided to publish them here first.

So, here are the images:

http://freevec.org/files/efikamx-wheezy-armhf-20140921.img.xz (559MB)
http://freevec.org/files/efikamx-jessie-armhf-20140921.img.xz (635MB)

22 September, 2014 05:38PM by markos

September 21, 2014

hackergotchi for Joachim Breitner

Joachim Breitner

Using my Kobo eBook reader as an external eInk monitor

I have an office with a nice large window, but more often than not I have to close the shades to be able to see something on my screen. Even worse: there were so many nice and sunny days when I would have loved to take my laptop outside and work there, but it (a Thinkpad T430s) is simply not usable in bright sun. I have seen those nice eInk based eBook readers, which are clearer the brighter the light is. That’s what I want for my laptop, and I am willing to sacrifice color and a bit of usability due to latency for being able to work in bright daylight!

So while I was in Portland for DebConf14 (where I guess I felt a bit more like tinkering than otherwise) I bought a Kobo Aura HD. I chose this device because it has a resolution similar to my laptop (1440×1080) and I have seen reports from people running their own software on it, including completely separate systems such as Debian or Android.

This week, I was able to play around with it. It was indeed simple to tinker with: You can simply copy a tarball to it which is then extracted over the root file system. There are plenty of instructions online, but I found it easier to take them as inspiration and do it my way – with basic Linux knowledge that’s possible. This way, I extended the system boot script with a hook to a file on the internal SD card, and this file then runs the telnetd daemon that comes with the device’s busybox installation. Then I just have to make the device go online and telnet onto it. From there it is a pretty normal Linux system, albeit without an X server, using the framebuffer directly.

I even found an existing project providing a VNC client implementation for this and other devices, and pretty soon I could see my laptop screen on the Kobo. Black and white worked fine, but colors and greyscales, including all anti-aliased fonts, were quite broken. After some analysis I concluded that it was confusing the bit pattern of the pixels. Luckily kvncclient shares that code with koreader, which worked fine on my device, so I could copy some files and settings from there et voilà: I now have an eInk monitor for my laptop. As a matter of fact, I am writing this text with my Kobo sitting on top of the folded-back laptop screen!

I did some minor adjustments to my laptop:

  • I changed the screen size to match the Kobo’s resolution. Using xrandr’s --panning option this is possible even though my real screen is only 900 pixels high.
  • I disabled the cursor-blink where possible. In general, screen updates should be avoided, so I hide my taffybar (which has a CPU usage monitor) and text is best written at the very end of the line (and not before a, say, </p>).
  • My terminal windows are now black-on-white.
  • I had to increase my font-size a bit (the kobo has quite a high DPI), and color is not helpful (so :set syntax=off in vim).

All this is still very manual (going online with the kobo, finding its IP address, logging in via telnet, killing the Kobo's normal main program, starting x11vnc, finding my IP address, starting the vnc client, doing the adjustments mentioned above), so I need to automate it a bit. Unfortunately, there is no canonical way to extend the Kobo by your own application: the Kobo developers made their device quite open, but stopped short of actually encouraging extensions, so people have created many weird ways to start programs on the Kobo – dedicated start menus, background programs observing when the regular Kobo app opens a specific file, complete replacements for the system. I am considering simply running an SSH server on the device and driving the whole process from the laptop. I’ll keep you up-to-date.

A dream for the future would be to turn the kobo into a USB monitor and simply connect it to any computer, where it then shows up as a new external monitor. I wonder if there is a standard for USB monitors, and if it is simple enough (but I doubt it).

A word about the kobo development scene: It seems to be quite active and healthy, and a number of interesting applications are provided for it. But unfortunately it all happens on a web forum, and they use it not only for discussion, but also as a wiki, a release page, a bug tracker, a feature request list and as a support line – often on one single thread with dozens of posts. This makes it quite hard to find relevant information and decide whether it is still up-to-date. Unfortunately, you cannot really do without it. The PDF viewer that comes with the kobo is barely okish (e.g. no crop functionality), so installing, say, koreader is a must if you read more PDFs than actual ebooks. And then you have to deal with the how-to-start-it problem.

That reminds me: I need to find a decent RSS reader for the kobo, or possibly a good RSS-to-epub converter that I can run automatically. Any suggestions?

PS and related to this project: Thanks to Kathey!

21 September, 2014 08:11PM by Joachim Breitner (mail@joachim-breitner.de)

Dariusz Dwornikowski

statistics of RFS bugs and sponsoring process

For some days I have been working on statistics of the sponsoring process in Debian. I find this to be one of the most important processes Debian has for attracting and enabling new contributions. It is important to know how this process works, whether we need more sponsors, how effective the sponsoring is, and what the timings connected to it are.

How I did this ?

I have used the Debbugs SOAP interface to get all bugs that are filed against the sponsorship-requests pseudo package. SOAP adds a little overhead, because it needs to download the complete list of bugs for the sponsorship-requests package and then process them according to the given date ranges. The same information can be easily extracted from the UDD database in the future; that will be faster, because SQL is obviously better at working with date ranges than Python.

The most problematic part was getting the "real done date" of a particular bug; frankly, I spent most of my time writing a rather dirty and complicated script. The script takes the log of a particular bug number and returns its "real done date". I have published a proof of concept in a previous post.
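
As a rough sketch of the kind of query involved (the script described here talks to the SOAP interface through SOAPpy directly; this sketch uses the python-debianbts wrapper around the same interface, and the attribute names on the returned reports are assumptions):

import debianbts as bts

# All bugs ever filed against the sponsorship-requests pseudo package.
bugs = bts.get_bugs('package', 'sponsorship-requests')

# Full status records; large queries may need to be batched.
for report in bts.get_status(*bugs):
    print("%s %s %s" % (report.bug_num, report.date, report.done))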

What I measured ?

RFS is a queue, and for every queue one is interested in the mean time to get processed. In this case I called the metric global MTTGS (mean time to get sponsored). This is a metric that gives overall insight into the performance of the RFS queue. Time to get sponsored (TTGS) for a bug is the number of days that passed between filing an RFS bug and closing it (i.e. the bug was sponsored). Mean time to get sponsored is calculated as the sum of the TTGSs of all bugs divided by the number of bugs (in a given period of time). Global MTTGS is the MTTGS calculated for the period from 2012-01-01 until today.
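
In other words, as a minimal self-contained sketch of the metric (independent of the data source):

from datetime import datetime

def mttgs(bugs):
    # bugs: (filed, done) datetime pairs of sponsored (closed) RFS bugs
    days = [(done - filed).days for (filed, done) in bugs]
    return sum(days) / float(len(days))

sample = [(datetime(2013, 1, 2), datetime(2013, 2, 1)),
          (datetime(2013, 1, 10), datetime(2013, 1, 15))]
print(mttgs(sample))  # (30 + 5) / 2 = 17.5 days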

Besides MTTGS I have also measured typical bug related metrics:

  • number of bugs closed in a given day,
  • number of bugs opened in a given day,
  • number of bugs with status open in a given day,
  • number of bugs with status closed in a given day.

Plots and graphs

Below is a plot of global MTTGS vs. time (click for a larger image).

mttgs plot

As you can see, the curve rises steeply and then flattens, with MTTGS settling around 60 days toward the end of 2013. This does not mean that your package will wait 60 days on average to get sponsored nowadays. Remember that this is a global MTTGS, so even if the MTTGS of the last month was very low, the global MTTGS would decrease only slightly. It does, however, give a good glance at the performance of the process. Even though more packages are filed for sponsoring now (see the next graphs) than at the beginning of the period, the sponsoring rate is high enough to flatten the global MTTGS, and maybe decrease it with time.

The image below (click for a larger one) shows how many bugs reside in the queue with status open or closed (calculated for each day). For closed we have an almost linear function, so each day more or less the same number of bugs is closed, adding to the pool of bugs with status closed. For bugs with status open, the interesting part begins around May 2012, after the system saturates or gets popular. It can be interpreted as a plot of how many bugs reside in the queue; the important part is that it is stable and does not show a clear increasing trend.

open done plot

The last plot shows the arrival and departure rates of bugs in the RFS queue, i.e. how many bugs are opened and closed each day. The interesting parts here are the maxima. Let's look at them.

opened closed plot

The maximal number of opened bugs (21) was on 2012-05-06. As it turns out, it was a batch of RFSs for tryton-modules-*.

  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706957  RFS: tryton-modules-stock-lot/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706904  RFS: chinese-checkers/0.4-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706981  RFS: distcc/3.1-6 
  706945  RFS: tryton-modules-sale-supply/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1

The maximum number of closed bugs (18) was on 2013-09-24, and as you probably guessed, tryton modules played a part in that too.

  706953  RFS: tryton-modules-account-stock-anglo-saxon/2.8.0-1 
  706954  RFS: tryton-modules-purchase-shipment-cost/2.8.0-1 
  706948  RFS: tryton-modules-production/2.8.0-1 
  706969  RFS: tryton-modules-account-fr/2.8.0-1 
  706946  RFS: tryton-modules-project-invoice/2.8.0-1 
  706950  RFS: tryton-modules-stock-supply-production/2.8.0-1 
  706942  RFS: tryton-modules-product-attribute/2.8.0-1 
  706958  RFS: tryton-modules-carrier-weight/2.8.0-1 
  706941  RFS: tryton-modules-stock-supply-forecast/2.8.0-1 
  706955  RFS: tryton-modules-product-measurements/2.8.0-1 
  706952  RFS: tryton-modules-carrier-percentage/2.8.0-1 
  706949  RFS: tryton-modules-account-asset/2.8.0-1 
  706944  RFS: tryton-modules-stock-split/2.8.0-1 
  706959  RFS: tryton-modules-carrier/2.8.0-1 
  723991  RFS: mapserver/6.4.0-2 
  706951  RFS: tryton-modules-sale-shipment-cost/2.8.0-1 
  706943  RFS: tryton-modules-account-stock-continental/2.8.0-1 
  706956  RFS: tryton-modules-sale-supply-drop-shipment/2.8.0-1

The software

Most of the software was written in Python; the graphs were generated in R. After a code cleanup I will publish the complete solution on my github account, free for everybody to use. If you would like to see other statistics, please let me know; I can create them if the data provides sufficient information.

21 September, 2014 02:21PM by Dariusz Dwornikowski

hackergotchi for Konstantinos Margaritis

Konstantinos Margaritis

VSX port added to Eigen!

Being the SIMD fanatic that I am, a few years ago I did the PowerPC Altivec and ARM NEON port for the Eigen linear algebra library, one of the best and most popular libraries -and most ported.

Recently I thought it would be a good idea to extend both ports to 64-bit, which would also help me with the SIMD book, using VSX in the one case and ARMv8 NEON (or Advanced SIMD, as ARM likes to call it) in the other. ARMv8 hardware is a bit scarce at the moment, so I thought I'd start with VSX. Being in Debian, I have access to a number of porterboxes in several architectures, and luckily one of those was a Power7 (with VSX) running ppc64. So I started the porting - or rather extending the code - to use VSX in the 64-bit doubles case. Unluckily, I could not test anything, because Debian kernels do not have VSX enabled in wheezy - which is what the porterbox is running - and enabling it is a non-option (#758620). So running VSX code would turn out to be quite hard.

21 September, 2014 01:03PM by markos

September 20, 2014

Laura Arjona

Happy Software Freedom Day!

Today we celebrate Software Freedom Day (each year, a Saturday around mid-September). More info at softwarefreedomday.org.

There are no public events in Madrid, but I’m going to try to hack and write a bit more this weekend, as my personal celebration.

In this blog post you can find some of my very very recent activities on free software, and my plans for this weekend of celebration!

Debian

Children distros aka Derivatives

I had the translation/update of the page www.debian.org/misc/children-distros pending for a long time. It’s a long page, and I was not sure which was better: picking up the very outdated last translation and reviewing it carefully in order to update it, or starting from scratch. I decided to reuse the last translation (thanks Luis Uribe!), and after some days dedicating my commuting time to it, I finally finished it at home yesterday evening. Now it’s in the review queue, and I hope it will be uploaded in 10 days or so.

In the meantime, I have learned a bit about the Debian Derivatives subproject and census, I have watched the Derivatives Panel from DebConf13, and I have had a look at bug #723069 about keeping the children-distros page up to date.

So now that I’m done with this translation, I’m going to put some time into keeping the original English page up to date (I’m part of the www and publicity teams, so I think it makes sense). My goal is to review at least one Debian derivative every two days, and when I finish the list, start again. I can update the wiki myself, and for the www pages I’ll send patches against #723069, unless I’m told to do it another way.

BTW, wouldn’t it be nice to mark web/wiki pages as “RFH”, the same as packages, so other people can easily decide to put some time into them and make http://www.debian.org even more awesome? Or make them appear in the how-can-i-help reminders :) Hmm, maybe it’s just a matter of filing a bug and tagging it as “gift”? I think not, because nobody has the package “www.debian.org” installed on their system… I’ll talk with the maintainer about this.

New Member process

I promised myself to try to work a bit more on Debian during the summer and September, and if everything goes well, to apply for the New Member process in October.

I wanted to read all the documentation first, and one challenge is to review/update the translations of the www.debian.org/devel/join folder. This way, both I and the Spanish-speaking community benefit from the effort. Yesterday I translated one of the pending pages, and I hope that during the weekend I can translate/update the rest. When I finish that, I’ll keep reading the other documentation.

DebConf15

This summer I was invited to join the DebConf15 organization team and pick up tasks in the publicity area. I was very happy to join. I’m not at all sure that I can go to DebConf15 in Heidelberg (Germany); in fact I’m quite sure I will not go, since mid-August is my only opportunity to visit family who live far away. But anyway, there are things that can be done before DebConf15, and I can contribute.

For now, I attended the IRC meeting last Monday, and I’m finishing a short blog post about the DebConf14 talk presenting DebConf15, which will be published on the DebConf15 blog.

Android, F-Droid

I keep trying to spread the word about F-Droid and the free software available for Android. Last week some of my friends updated Kontalk to the 3.0.b1 version (I had updated at the beginning of September), and they liked that images are now sent encrypted, just like the text messages :)

Some friends also liked the 2048 game, since it can be played offline, without ads, and so on.

I decided to spend some time this weekend contributing translations to the Android apps that I use.

A long-pending issue is to try to put some workforce into the F-Droid project itself so that app descriptions are internationalized (the program is fully translatable, but the categories of apps and the descriptions themselves are not). This is a complicated issue: it requires some design decisions and, later, of course, the implementation. I cannot do it alone, and I cannot do it in the short term. But today I filed a bug report (#35), so maybe I will find other people able to help.

Jabber/XMPP and the “RedesLibres” chatroom

For several months I’ve been using my Jabber/XMPP account more often, to join the chatroom redeslibres@salas.mijabber.es.

There I meet some people that I follow on Pump.io (for example, the people who write the Comunícate Libremente and Lignux blogs), and we talk about pump.io, free software, free services, and other things. I feel very comfortable there; it’s nice to have a Spanish-speaking group inside the free software community, and I’m also learning a bit about XMPP (I’ve tried a lot of desktop and Android clients, just for fun!), free networks, and so on.

So today I want to publicly thank everybody in that chatroom, who welcomed me so well :)

Thank you, free software friends

And, by extension, I want to thank all the people who work and have fun in the free software communities, in the projects where I contribute and in others. They (we) hack to make the world better, and to allow others to join this beautiful challenge of making machines do what their (final) users want.

Comments?

You can comment on this post in this Pump.io thread.


Filed under: My experiences and opinion Tagged: Android, Communities, Contributing to libre software, Debian, English, F-Droid, federation, Free Software, Freedom, internationalization, libre software, localization, translations

20 September, 2014 09:58AM by larjona

Francesca Ciceri

Four Ways to Forgiveness

"I have seen a picture," Havzhiva went on.
The Chosen was impassive; he might or might not know the word. "Lines and colors made with earth on earth may hold knowledge in them. All knowledge is local, all truth is partial," Havzhiva said with an easy, colloquial dignity that he knew was an imitation of his mother, the Heir of the Sun, talking to foreign merchants. "No truth can make another truth untrue. All knowledge is a part of the whole knowledge. A true line, a true color. Once you have seen the larger pattern, you cannot go back to seeing the part as the whole."

I've just finished reading "Four Ways to Forgiveness" by U.K. Le Guin.
It deeply resonated with me; it's still there doing its magic in my brain, lingering in the corners of my mind, tickling my view of reality, humming with the beauty of ideas you didn't know were inside you till you've seen them written on paper.
And then you know they were there all along; you just didn't know how to put them into words.
Le Guin knows how to do it, wonderfully.

I loved the whole book, but the last two stories were eye-openers.
Thanks Enrico for suggesting this one to me, and thanks dkg for introducing me to Le Guin's books (with another fantastic one: The Left Hand of Darkness).

20 September, 2014 08:20AM

September 19, 2014

Dariusz Dwornikowski

getting real "done date" of a bug from Debian BTS

As I wrote in my last post, currently neither the SOAP interface nor the Ultimate Debian Database provides the date when a given bug was closed (its done date). It is quite hard to calculate statistics on a bug tracker when you do not know when a bug was closed (!!).

The done date of a bug can be found in its log. The log itself can be downloaded with the SOAP method get_bug_log, but processing it is quite complicated. The same goes for web scraping of the BTS's web interface. Fortunately, the web interface offers the possibility to download a log in mbox format.

Below is a script that extracts the done date of a bug from its log in mbox format. It uses requests to download the mbox and caches the result in ~/.cache/rfs_bugs (created automatically if missing). It performs the following checks, in order:

  1. Check for the existence of a header like Received: (at 657783-done) by bugs.debian.org; 29 Jan 2012 13:27:42 +0000
  2. Check for a CC: NUMBER-close|done header
  3. Check for a To: NUMBER-close|done header
  4. Check for close|done in the message body.

The code is below:

import requests
from datetime import datetime
import mailbox
import re
import os
import tempfile


def get_done_date(bug_num):

    CACHE_DIR = os.path.expanduser("~") + "/.cache/rfs_bugs/"

    # Date found in a "Received: (at ...) by bugs.debian.org; ..." header.
    received_reg = r"\(at\s.+\)\s+by\sbugs\.debian\.org;\s(\d{1,2}\s\w\w\w\s\d\d\d\d)"

    def get_from_cache():
        # Return the done date saved by a previous run, if any.
        if os.path.exists("{}{}".format(CACHE_DIR, bug_num)):
            with open("{}{}".format(CACHE_DIR, bug_num)) as f:
                return datetime.strptime(f.readlines()[0].rstrip(),
                                         "%Y-%m-%d").date()
        return None

    def mbox_from_text(text):
        # mailbox.mbox only reads files, so dump the log to a temp file.
        handle, name = tempfile.mkstemp()
        with open(name, "w") as f:
            f.write(text.encode('latin-1'))
        return mailbox.mbox(name)

    def try_header(text):
        # Check 1: a "Received: (at NNNNNN-close/done)" header in the raw log.
        reg = r"Received:\s\(at\s\d+-(close|done)\)\s+by.+"
        try:
            line = re.search(reg, text).group(0)
            result = re.search(r"\d{1,2}\s\w\w\w\s\d\d\d\d", line)
            return datetime.strptime(result.group(0), "%d %b %Y")
        except (AttributeError, ValueError):
            return None

    def try_cc(text):
        # Checks 2 and 3: a CC: or To: header addressed to NNNNNN-done.
        for key, msg in mbox_from_text(text).items():
            if ('CC' in msg and "done" in msg['CC']) or \
                    ('To' in msg and "done" in msg['To']):
                try:
                    result = re.search(received_reg, msg['Received'])
                    return datetime.strptime(result.group(1), "%d %b %Y")
                except (AttributeError, ValueError):
                    return None
        return None

    def try_body(text):
        # Check 4: "close" or "done" somewhere in the message body.
        for key, msg in mbox_from_text(text).items():
            parts = msg.get_payload() if msg.is_multipart() \
                else [msg.get_payload()]
            for part in parts:
                if "close" in str(part) or "done" in str(part):
                    try:
                        result = re.search(received_reg, msg['Received'])
                        return datetime.strptime(result.group(1), "%d %b %Y")
                    except (AttributeError, ValueError):
                        return None
        return None

    done_date = get_from_cache()
    if done_date is not None:
        return done_date

    r = requests.get("https://bugs.debian.org/cgi-bin/bugreport.cgi"
                     "?mbox=yes;bug={};mboxstatus=yes".format(bug_num))
    d = try_header(r.text) or try_cc(r.text) or try_body(r.text)
    if d is None:
        return None
    if not os.path.exists(CACHE_DIR):
        os.makedirs(CACHE_DIR)
    with open("{}{}".format(CACHE_DIR, bug_num), "w") as f:
        f.write("{}".format(d.date()))
    return d.date()


if __name__ == "__main__":
    print get_done_date(752210)

PS: I hope that the script will not be needed in the near future, as Don Armstrong is planning a new BTS database; a DebConf14 video about it is here.

19 September, 2014 07:17AM by Dariusz Dwornikowski

hackergotchi for Daniel Pocock

Daniel Pocock

reSIProcate migration from SVN to Git completed

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.
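
How might such a matching script work? Presumably by querying GitHub's user search API with its in:email qualifier. A minimal sketch of the idea (my illustration only, not Daniel's actual script):

import requests

def github_login_for_email(email):
    # Ask GitHub's user search which account, if any, claims this email.
    r = requests.get("https://api.github.com/search/users",
                     params={"q": "{0} in:email".format(email)})
    items = r.json().get("items", [])
    return items[0]["login"] if items else None

print(github_login_for_email("committer@example.com"))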

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project on Github. This allowed people to test it, and it also allowed us to start using some Github features, like travis-ci.org, before officially moving to Git.

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

19 September, 2014 06:47AM by Daniel.Pocock

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Docker PostgreSQL Foreign Data Wrapper

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it looks like, there's some example SQL down below.

The first question is: what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external data source as if it were a table.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface into querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and use the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

This lets us ask all sorts of really interesting questions out of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic. Abstractions can be fun!
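
To give a flavour of what such a wrapper looks like on the inside: a Multicorn FDW is just a Python class that Multicorn instantiates with the foreign table's options and columns, and whose execute() method yields one dict per row. What follows is only a rough sketch of that shape, not the actual Docker FDW source; the docker-py calls and the handful of columns are my assumptions:

from multicorn import ForeignDataWrapper
from docker import Client


class ContainerFdw(ForeignDataWrapper):

    def __init__(self, options, columns):
        super(ContainerFdw, self).__init__(options, columns)
        # "host" arrives from the OPTIONS clause of CREATE FOREIGN TABLE.
        self.client = Client(base_url=options.get('host',
                                                  'unix:///run/docker.sock'))
        self.columns = columns

    def execute(self, quals, columns):
        # Yield one dict per row; PostgreSQL applies any quals
        # we don't handle ourselves.
        for container in self.client.containers(all=True):
            yield {
                'id': container['Id'],
                'image': container['Image'],
                'names': container['Names'],
            }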

Setting it up

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id"          TEXT,
    "image"       TEXT,
    "name"        TEXT,
    "names"       TEXT[],
    "privileged"  BOOLEAN,
    "ip"          TEXT,
    "bridge"      TEXT,
    "running"     BOOLEAN,
    "pid"         INT,
    "exit_code"   INT,
    "command"     TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);


CREATE foreign table docker_images (
    "id"              TEXT,
    "architecture"    TEXT,
    "author"          TEXT,
    "comment"         TEXT,
    "parent"          TEXT,
    "tags"            TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images
  ON docker_containers.image=docker_images.id;
     ip      |            names            |                  tags                   
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

OK, let's see if we can bring this to the next level now. I finally got around to implementing INSERT and DELETE operations, which turned out to be pretty simple to do. Check this out:

DELETE FROM docker_containers;
DELETE 1

Behind the scenes, this does a stop, then a kill after a 10-second grace period. It's actually a lot of fun to spawn up a container and terminate it from PostgreSQL.

INSERT INTO docker_containers (name, image) VALUES ('hello', 'debian:unstable') RETURNING id;
                                id                                
------------------------------------------------------------------
 0a903dcf5ae10ee1923064e25ab0f46e0debd513f54860beb44b2a187643ff05
(1 row)

INSERT 0 1

Spawning containers works too - this is still very immature and not super practical, but I figure while I'm showing off, I might as well go all the way.

SELECT ip FROM docker_containers WHERE id='0a903dcf5ae10ee1923064e25ab0f46e0debd513f54860beb44b2a187643ff05';
     ip      
-------------
 172.17.0.12
(1 row)
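
Under the hood, Multicorn maps these statements onto methods of the FDW class: a rowid_column property tells PostgreSQL which column identifies a row, and insert()/delete() receive the values. Here is a hedged sketch of how that plumbing might look (again assuming Multicorn's writable-FDW API and docker-py; not the actual Docker FDW source):

class ContainerFdw(ForeignDataWrapper):

    @property
    def rowid_column(self):
        # The column PostgreSQL uses to identify rows for DELETE/UPDATE.
        return 'id'

    def insert(self, values):
        # INSERT -> create and start a container, then report the row back.
        container = self.client.create_container(image=values['image'],
                                                 name=values.get('name'))
        self.client.start(container['Id'])
        values['id'] = container['Id']
        return values

    def delete(self, rowid):
        # DELETE -> stop the container (10-second grace), then remove it.
        self.client.stop(rowid, timeout=10)
        self.client.remove_container(rowid)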

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet at me @paultag, and file bugs / feature requests. It's currently a bit of a hack, but it's something that I think has long-term potential, once some work goes into making sure this is a rock-solid interface to the Docker API.

19 September, 2014 01:49AM by Paul Tagliamonte