July 24, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Oli Warner: Building a kiosk computer with Ubuntu 14.04 and Chrome

Single-purpose kiosk computing might seem scary and industrial but thanks to cheap hardware and Ubuntu, it's an increasingly popular idea. I'm going to show you how and it's only going to take a few minutes to get to something usable.

Hopefully we'll do better than the image on the right.

We're going to be running a very light stack of X, Openbox and the Google Chrome web browser to load a specified website. The website could be local files on the kiosk or remote. It could be interactive or just an advertising roll. The options are endless.

The whole thing takes less than 2GB of disk space and can run on 512MB of RAM.

Step 1: Installing Ubuntu Server

I'm picking the Server flavour of Ubuntu for this. It's all the nuts-and-bolts of regular Ubuntu without installing a load of flabby graphical applications that we're never ever going to use.

It's free to download. I would suggest 64-bit if your hardware supports it and I'm going with the latest LTS (14.04 at the time of writing). Sidebar: if you've never tested your kiosk's hardware in Ubuntu before, it might be worth downloading the Desktop Live USB, burning it and checking everything works.

Just follow the installation instructions. Burn it to a USB stick, boot the kiosk to it and go through. I just accepted the defaults and when asked:

  • Set my username to user and set a hard-to-guess, strong password.
  • Enabled automatic updates
  • At the end when tasksel ran, opted to install the SSH server task so I could SSH in from a client that supported copy and paste!

After you reboot, you should be looking at an Ubuntu 14.04 LTS ubuntu tty1 login prompt. You can either SSH in (assuming you're networked and you installed the SSH server task) or just log in.

The installer auto-configures an ethernet connection (if one exists) so I'm going to assume you already have a network connection. If you don't or want to change to wireless, this is the point where you'd want to use nmcli to add and enable your connection. It'll go something like this:

sudo apt install network-manager
sudo nmcli dev wifi con <SSID> password <password>

Later releases should have nmtui which will make this easier but until then you always have man nmcli :)

Step 2: Install all the things

We obviously need a bit of extra software to get up and running but we can keep this fairly compact. We need to install:

  • X (the display server) and some scripts to launch it
  • A lightweight window manager to enable Chrome to go fullscreen
  • Google Chrome

We'll start by adding the Google-maintained repository for Chrome:

sudo add-apt-repository 'deb http://dl.google.com/linux/chrome/deb/ stable main'
wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Then update our packages list and install:

sudo apt update
sudo apt install --no-install-recommends xorg openbox google-chrome-stable

If you omit --no-install-recommends you will pull in hundreds of megabytes of extra packages that would normally make life easier but in a kiosk scenario, only serve as bloat.
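To sanity-check the footprint claims from earlier once everything is installed, a few quick (illustrative) commands:

```shell
# Rough, illustrative sanity checks after the install:
df -h /          # root filesystem usage -- should be well under 2GB used
free -m          # total/used memory -- fits a 512MB box
dpkg -l | wc -l  # rough count of installed packages
```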

Step 3: Loading the browser on boot

I know we've only been going for about five minutes but we're almost done. We just need two little scripts.

Run sudoedit /opt/kiosk.sh first. This is going to be what loads Chrome once X has started. It also needs to wipe the Chrome profile so that nothing persists between loads. This is incredibly important for kiosk computing because you never want one user to be able to affect the next user. We want them to start with a clean environment every time. Here's where I've got to:


#!/bin/bash
xset -dpms      # disable DPMS power management
xset s off      # disable the screensaver
openbox-session &

while true; do
  # wipe the Chrome profile so each session starts clean
  rm -rf ~/.{config,cache}/google-chrome/
  google-chrome --kiosk --no-first-run 'http://thepcspy.com'
done

When you're done there, Control+X to exit and run sudo chmod +x /opt/kiosk.sh to make the script executable. Then we can move onto starting X (and loading kiosk.sh).

Run sudoedit /etc/init/kiosk.conf and this time fill it with:

start on (filesystem and stopped udevtrigger)
stop on runlevel [06]

console output
emits starting-x


exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --

Replace user with your username. Save and exit with Control+X and we're done. To give it a quick test, just run sudo start kiosk (or reboot) and it should all come up.

One last pair of problems to fix: the amount of garbage printed to the screen on boot, and DPMS (the power-saving standard) kicking in after a few minutes of no input and turning the screen off. We can fix both by running sudoedit /etc/default/grub and changing the first lines to:

GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash consoleblank=0"

Save and exit that and run sudo update-grub before rebooting.
The monitor should remain on indefinitely.

Final step: The boring things...

Technically speaking we're done; we have a kiosk and we're probably sipping on a Martini. I know, I know, it's not even midday, we're just that good... But there are extra things to consider before we let a grubby member of the public play with this machine:

  • Can users break it? Open keyboard access is generally a no-no. If they need a keyboard, physically disable keys so they only have what they need. I would disable all the F* keys along with Control, Alt, Super... If they have a standard mouse, right click will let them open links in new windows and tabs and OMG this is a nightmare. You need to limit user-input.

  • Can it break itself? Does the website you're loading have anything that's going to try and open new windows/tabs/etc? Does it ask for any sort of input that you aren't allowing users? Perhaps a better question to ask is Can it fix itself? Consider a mechanism for rebooting that doesn't involve a phone call to you.

  • Is it physically secure? Hide and secure the computer. Lock the BIOS. Ensure no access to USB ports (fill them if you have to). Disable recovery mode. Password protect Grub and make sure it stays hidden (especially with open keyboard access).

  • Is it network secure? SSH is the major ingress vector here, so follow some basic tips: at the very least move it to another port, only allow key-based authentication, install fail2ban and make sure fail2ban is telling you about failed logins.

  • What if Chrome is hacked directly? What if somebody exploited Chrome and had command-level access as user? Well first of all, you can try to stop that happening with AppArmor (should still apply) but you might also want to change things around so that the user running X and the browser doesn't have sudo access. I'd do that by adding a new user and changing the two scripts accordingly.

  • How are you maintaining it? Automatic updates are great but what if that breaks everything? How will you access it in the field to maintain it if (for example) the network dies or there's a hardware failure? This is aimed more at the digital signage people than simple kiosks but it's something to consider.
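The dedicated-user and SSH-hardening points above can be sketched in shell. This is an illustrative sketch, not the post's own commands: the username kiosk and port 2222 are placeholder choices, so review each line before running it.

```shell
# Create a dedicated, sudo-less user to run X and the browser
sudo adduser --disabled-password --gecos "" kiosk

# Then point /etc/init/kiosk.conf at it instead of your admin user:
#   exec sudo -u kiosk startx /etc/X11/Xsession /opt/kiosk.sh --
# (and use "sudo -u kiosk" paths in /opt/kiosk.sh if needed)

# Basic SSH hardening: move the port and disable password logins
sudo sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo service ssh restart

# fail2ban bans IPs after repeated failed logins and can report them to you
sudo apt install fail2ban
```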

You can mitigate a lot of the security issues by having no live network (just displaying local files) but this obviously comes at the cost of maintenance. There's no one good answer for that.

Photo credit: allegr0/Candace

24 July, 2014 10:36AM

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use markdown for the structure/headings/enumerations/etc., and LaTeX for formulæ, XYTex and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

24 July, 2014 09:38AM

Dustin Kirkland: Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!

Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1
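A self-contained sketch of that save/restore cycle (hedged: the real init script uses /var/lib/urandom/random-seed and runs at shutdown/boot; this demo uses /tmp so it can run unprivileged):

```shell
#!/bin/sh
# Sketch of the seed carry-over done by /etc/init.d/urandom; the real
# script uses /var/lib/urandom/random-seed, this demo uses /tmp.
SAVEDFILE=/tmp/random-seed.demo
# poolsize is reported in bits; save that many bytes' worth of seed
POOLBITS=$(cat /proc/sys/kernel/random/poolsize 2>/dev/null || echo 4096)
POOLBYTES=$(( POOLBITS / 8 ))

# At shutdown: save a seed from the pool
dd if=/dev/urandom of="$SAVEDFILE" bs="$POOLBYTES" count=1 2>/dev/null

# At next boot: mix the saved seed back in (writes need no privileges
# and never increase the kernel's entropy estimate)
cat "$SAVEDFILE" > /dev/urandom

echo "carried over $POOLBYTES bytes"
```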


Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating:
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!

Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen
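As a rough sketch of the round trip (hedged: the request and response layout here is an assumption based on the bullets above, not the documented wire format; check the pollinate source for the real protocol):

```shell
# Generate a challenge and precompute the digest we expect back; this
# mirrors what the pollinate client does around its curl call.
CHALLENGE=$(head -c 64 /dev/urandom | sha512sum | awk '{print $1}')
EXPECTED=$(printf '%s' "$CHALLENGE" | sha512sum | awk '{print $1}')

# The actual POST (commented out -- needs network access and a server):
# RESPONSE=$(curl -s --data "challenge=$CHALLENGE" https://entropy.ubuntu.com/)
# A valid response contains sha512($CHALLENGE) -- evidence the server
# "did some work" -- plus 64 bytes of fresh seed for /dev/urandom.

echo "challenge digest we expect in the response: $EXPECTED"
```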

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!


Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
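For example, /etc/default/pollinate might end up looking like this (the URLs are placeholders; only the POOL variable name comes from the slides above):

```shell
# /etc/default/pollinate
# Space-separated list of entropy servers; pollinate tries each one.
# At least one should be outside the control of any single party.
POOL="https://entropy.internal.example.com/ https://entropy.ubuntu.com/"

# To skip the challenge/response against a plain URL instead:
#   pollinate --no-challenge
```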

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
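You can observe this yourself from any shell; an unprivileged, illustrative check:

```shell
# Read the kernel's conservative estimate (in bits), poke the pool,
# and read it again -- the write is mixed in but the estimate does
# not go up because of it.
BEFORE=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
echo "some seed material" > /dev/urandom
AFTER=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
echo "estimate before=$BEFORE after=$AFTER"
```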

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.

  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!

Stay safe out there!

24 July, 2014 02:15AM by Dustin Kirkland (noreply@blogger.com)

July 23, 2014

hackergotchi for Grml developers

Grml developers

Michael Prokop: Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

23 July, 2014 08:16PM

hackergotchi for Ubuntu developers

Ubuntu developers

Matthew Helmke: Open Source Resources Sale

I don’t usually post sales links, but this sale by InformIT involves my two Ubuntu books along with several others that I know my friends in the open source world would be interested in.

Save 40% on recommended titles in the InformIT OpenSource Resource Center. The sale ends August 8th.

23 July, 2014 05:40PM

Michael Hall: Why do you contribute to open source?

It seems a fairly common, straightforward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you would find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around:  Why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?  I don’t mean your intent, I’ll assume that you want to recognize contributions, I mean do you have the processes and people in place to give it?

We’ve gotten very good at building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important; letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency; not through any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

23 July, 2014 12:00PM

Andrew Pollock: [tech] Going solar

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

23 July, 2014 05:36AM

Serge Hallyn: rsync.net feature: subuids

The problem: Some time ago, I had a server “in the wild” from which I
wanted some data backed up to my rsync.net account. I didn’t want to
put sensitive credentials on this server in case it got compromised.

The awesome admins at rsync.net pointed out their subuid feature. For
no extra charge, they’ll give you another uid, which can have its own
ssh keys, whose home directory is symbolically linked under your main
uid’s home directory. So the server can rsync backups to the subuid,
and if it is compromised, attackers cannot get at any info which didn’t
originate from that server anyway.

Very nice.

23 July, 2014 04:02AM

July 22, 2014

hackergotchi for Tails


Tails 1.1 is out

Tails, The Amnesic Incognito Live System, version 1.1, is out.

All users must upgrade as soon as possible: this release fixes numerous security issues.


Notable user-visible changes include:

  • Rebase on Debian Wheezy

    • Upgrade literally thousands of packages.
    • Migrate to GNOME3 fallback mode.
    • Install LibreOffice instead of OpenOffice.
  • Major new features

    • UEFI boot support, which should make Tails boot on modern hardware and Mac computers.
    • Replace the Windows XP camouflage with a Windows 8 camouflage.
    • Bring back VirtualBox guest modules, installed from Wheezy backports. Full functionality is only available when using the 32-bit kernel.
  • Security fixes

    • Fix write access to boot medium via udisks (ticket #6172).
    • Upgrade the web browser to 24.7.0esr-0+tails1~bpo70+1 (Firefox 24.7.0esr + Iceweasel patches + Torbrowser patches).
    • Upgrade to Linux 3.14.12-1 (fixes CVE-2014-4699).
    • Make persistent file permissions safer (ticket #7443).
  • Bugfixes

    • Fix quick search in Tails Greeter's Other languages window (Closes: ticket #5387)
  • Minor improvements

    • Don't install Gobby 0.4 anymore. Gobby 0.5 has been available in Debian since Squeeze; now is a good time to drop the obsolete 0.4 implementation.
    • Require a bit less free memory before checking for upgrades with Tails Upgrader. The general goal is to avoid displaying "Not enough memory available to check for upgrades" too often due to over-cautious memory requirements checked in the wrapper.
    • Whisperback now sanitizes attached logs better with respect to DMI data, IPv6 addresses, and serial numbers (ticket #6797, ticket #6798, ticket #6804).
    • Install the BookletImposer PDF imposition toolkit.

See the online Changelog for technical details.

Known issues

  • Users of persistence must log in at least once with persistence enabled read-write after upgrading to 1.1 to see their settings updated.

  • Upgrading from ISO, from Tails 1.1~rc1, Tails 1.0.1 or earlier, is a bit more complicated than usual. Either follow the instructions to upgrade from ISO. Or, burn a DVD, start Tails from it, and use "Clone and Upgrade".

  • The automatic upgrade from Tails 1.1~rc1 is a bit more complicated than usual. Either follow the instructions to apply the automatic upgrade. Or, do a full upgrade.

  • A persistent volume created with Tails 1.1~beta1 cannot be used with Tails 1.1 or later. Worse, trying this may freeze Tails Greeter.

  • Tails 1.1 does not start in some virtualization environments, such as QEMU 0.11.1 and VirtualBox 4.2. This can be corrected by upgrading to QEMU 1.0 or VirtualBox 4.3, or newer (ticket #7232).

  • The web browser's JavaScript performance may be severely degraded (ticket #7127). Please let us know if you are experiencing this to a level where it is problematic.

  • Longstanding known issues.

I want to try it or to upgrade!

Go to the download page.

As no software is ever perfect, we maintain a list of problems that affects the last release of Tails.

What's coming up?

The next Tails release is scheduled for September 2.

Have a look at our roadmap to see where we are heading.

Would you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

How to upgrade from ISO?

These steps allow you to upgrade a device installed with Tails Installer from Tails 1.0.1, Tails 1.1~beta1 or earlier, to Tails 1.1.

  1. Start Tails from a DVD, USB stick, or SD card other than the device that you want to upgrade.

  2. Set an administration password.

  3. Run this command in a Root Terminal to install the latest version of Tails Installer:

    echo "deb http://deb.tails.boum.org/ 1.1 main" \
        > /etc/apt/sources.list.d/tails-upgrade.list && \
        apt-get update && \
        apt-get install liveusb-creator
  4. Follow the usual instructions to upgrade from ISO, except the first step.

How to automatically upgrade from Tails 1.1~rc1?

These steps allow you to automatically upgrade a device installed with Tails Installer from Tails 1.1~rc1 to Tails 1.1.

  1. Start Tails 1.1~rc1 from the device you want to upgrade.

  2. Set an administration password.

  3. Run this command in a Terminal to apply the automatic upgrade:

    echo 'TAILS_CHANNEL="stable-fixed"' | sudo tee --append /etc/os-release && \
      cd / && tails-upgrade-frontend-wrapper

22 July, 2014 07:45PM

On 0days, exploits and disclosure

A recent tweet from Exodus Intel (a company based in Austin, Texas) generated quite some noise on the Internet:

"We're happy to see that TAILS 1.1 is being released tomorrow. Our multiple RCE/de-anonymization zero-days are still effective. #tails #tor"

Tails ships a lot of software, from the Linux kernel to a fully functional desktop, including a web browser and a lot of other programs. Tails also adds a bit of custom software on top of this.

Security issues are discovered every month in a few of these programs. Some people report such vulnerabilities, and then they get fixed: This is the power of free and open source software. Others don't disclose them, but run lucrative businesses by weaponizing and selling them instead. This is not new and comes as no surprise.

We were not contacted by Exodus Intel prior to their tweet. In fact, a more irritated version of this text was ready when we finally received an email from them. They informed us that they would provide us with a report within a week. We're told they won't disclose these vulnerabilities publicly before we have corrected them and Tails users have had a chance to upgrade. We think that this is the right process for responsibly disclosing vulnerabilities, and we're really looking forward to reading this report.

Being fully aware of this kind of threat, we're continuously working on improving Tails' security in depth. Among other tasks, we're working on a tight integration of AppArmor in Tails, kernel and web browser hardening, and sandboxing, just to name a few examples.

We are happy about every contribution which protects our users further from de-anonymization and helps them to protect their private data, investigations, and their lives. If you are a security researcher, please audit Tails, Debian, Tor or any other piece of software we ship. To report or discuss vulnerabilities you discover, please get in touch with us by sending email to tails@boum.org.

Anybody wanting to contribute to Tails to help defend privacy, please join us!

22 July, 2014 07:40PM

hackergotchi for TurnKey Linux

TurnKey Linux

The closest you can get to perfectly secure Bitcoin transactions (without doing them in your head)

@pa2013 helpfully posted Alon's BitKey announcement from last week to the Bitcoin Reddit, which sparked an interesting discussion regarding whether or not you can safely trust BitKey to perform air-gapped transactions. I started responding there but my comment got so long I decided to turn it into a blog post.

Air-gaps are not enough

As the astute commenters on Reddit correctly pointed out, just because you are using an offline air-gapped computer doesn't make you safe:

For example an offline computer can have a tainted random number generator, modified to only feed you addresses that the attacker can sweep at a later point in time.

I agree 100%. There are many ways a tainted air-gapped system can betray you, including smuggling out your secret keys via covert channels (e.g., USB keys, high-frequency sound, covert activation of the Bluetooth/wifi chipset, etc.).

The good news is that:

  1. Even if you assume BitKey is evil you can still use it to perform highly secure Bitcoin transactions. Details in the "If I tell you I'll have to kill you" section below.
  2. Most of the attacks against air-gapped systems are hard to hide if you build your own image from source.

The bad news is that:

  1. Most people won't build from source.

  2. Without deterministic builds you can't tell if the system image you are using is a faithful representation of source code.

    A deterministic build means that everyone who builds from source always gets exactly the same binary output, bit for bit.

  3. You can't trust RNGs without deterministic builds. A properly designed "evil" RNG looks just like a "good" RNG. Just by observing the output it is possible to prove that an RNG is insecure but absolutely impossible to prove that it is secure.
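The bit-for-bit point can be made concrete with a checksum comparison. In a real deterministic-build workflow the two hashes would come from independent builders; the paths and file contents below simply simulate that:

```shell
#!/bin/sh
# Simulate two independent builds producing output images, then verify
# they match bit for bit by comparing SHA-256 checksums.
mkdir -p /tmp/builder-a /tmp/builder-b
printf 'pretend-iso-contents' > /tmp/builder-a/image.iso
printf 'pretend-iso-contents' > /tmp/builder-b/image.iso

sum_a=$(sha256sum /tmp/builder-a/image.iso | cut -d' ' -f1)
sum_b=$(sha256sum /tmp/builder-b/image.iso | cut -d' ' -f1)

if [ "$sum_a" = "$sum_b" ]; then
    echo "MATCH: builds are bit-for-bit identical"
else
    echo "MISMATCH: the build is not reproducible (or a builder is compromised)"
fi
```

A single mismatching bit changes the hash completely, so any tampering by one builder is immediately visible to everyone else who builds from the same source.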

Random Number Generators are the perfect hiding place for a backdoor

This makes RNGs the perfect place to hide backdoors. I'd bet money that RNG-level backdoors are where intelligence agencies like the NSA are focusing their efforts to compromise Internet security.

For this reason I personally don't trust RNGs at all when the stakes are high. Any RNG, not just the one in BitKey.


Even if you audit the source code that the RNG is being compiled from, you still have to trust that the compiler is translating the source code faithfully, and worse, this turns out to be a recursive problem that was recognized waaaay back:


A solution you don't have to trust is better than one you do

In its current form BitKey is a Swiss Army knife of handy Bitcoin tools which you could use to implement any workflow. What's interesting is that this includes at least one workflow which doesn't require you to trust BitKey. I call it the "If I tell you I'll have to kill you" workflow.

But first, we need to recognize that there is an inescapable trade-off between convenience and security, and since the risk is proportional to the value of your wallet it doesn't make sense to enforce a specific trade-off. We want to help make the most paranoid usage model practical for day-to-day use but at the same time, we want to create tools that let the user decide. For low-value wallets maybe you're willing to trade off some security for better usability.

On the flip side, as someone who uses BitKey to perform very high security transactions routinely, once you get the hang of it, it's not too much trouble to go a bit overboard and sleep well at night. Better safe than sorry.

If I tell you I'll have to kill you

It turns out you can create secure Bitcoin transactions offline without having to trust the system performing the transaction. Do that and you can mostly dispense with having to trust third parties.

This is a good thing because trusted third parties are a security hole:


Instead of trusting the solution you just have to trust the security protocol and its underlying assumptions, which you can verify yourself.

The trick is:

  1. Don't use the RNG. Provide your own entropy. Use dice!
  2. Assume BitKey is evil. Work around that by enforcing a strict flow of information to prevent it from betraying you.

For example, let's say there are two computers: BLUE and RED.

I'm calling this the "If I tell you I'll have to kill you" model because once you give BitKey access to the secret keys in your wallet you assume it will try anything to smuggle them out back to the attacker and to prevent that you will have to quarantine BitKey, get the signed transaction out, then kill it.

Now let me translate how that works in practice.

BLUE is a regular Internet-connected PC running a watch-only wallet (e.g., BitKey in cold-online mode, or Ubuntu running Electrum in watch-only mode). Connected to BLUE is a BLUE USB drive.

RED PC is an air-gapped PC that has no hard drive, NIC, Wifi, Bluetooth, sound card, etc. It only has a keyboard, monitor and USB port.

Next to RED is a RED usb drive. It is NOT plugged into RED. (yet)

On BLUE you create an unsigned transaction and save it to a BLUE usb drive.

On RED you boot BitKey into RAM (e.g., in cold-offline mode). You then plug in the BLUE usb drive and copy over the unsigned transaction into RAM. Then you unplug the BLUE usb drive.

At this point RED has the unsigned transaction in RAM but it can't sign it yet because it doesn't have access to the wallet.

So you plug into RED the RED usb drive that contains the wallet. You sign the transaction. You encode the JSON of the signed transaction as a QRcode. You read the QRcode with your phone. Verify that the inputs and outputs are correct. You broadcast the signed transaction to the Blockchain from your phone.
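The QR hand-off from RED could be done with a tool like qrencode. This is an assumption on my part for the sake of illustration, not necessarily BitKey's actual tooling, and the filename is illustrative:

```shell
#!/bin/sh
# On RED: render the signed transaction as a QR code in the terminal
# so the phone can scan it. Nothing else leaves the machine.
TX="signed-tx.json"
if command -v qrencode >/dev/null 2>&1 && [ -f "$TX" ]; then
    qrencode -t ANSI < "$TX"
else
    echo "would run: qrencode -t ANSI < $TX"
fi
```

Rendering in the terminal (rather than writing an image to a USB drive) keeps the output channel to exactly one thing: pixels on the screen.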

Then you power off the air-gapped computer and leave it turned off for a few minutes to make sure the wallet has been wiped from RAM.

The only thing coming out of RED is the QRcode for the signed transaction and you can verify that with a very simple app on a separate device like your phone.

It's not perfect security, because an evil BitKey might conspire with an evil phone by creating an evil QRcode that sends your Bitcoins to another address or leaks your private key.

But it's as close as you can get without doing the transaction in your head, and BitKey has all the tools to let you do that.

Areas for improvement

  • Improve usability by adding a self-documenting wizard:

    Improve usability and reduce the potential for human error by adding a wizard mode in which BitKey guides you step by step in performing secure air-gapped Bitcoin transactions.
  • Port BitKey to work on the Raspberry Pi:

    I recently bought a few Raspberry Pis for this purpose. A $35 air-gap running BitKey on super cheap open hardware would not only be cheap and practical, it would also prevent us from having to trust our PCs / laptops not to be compromised at the hardware level. On a typical laptop / PC there are way too many places for bad stuff to hide, though I expect the truly paranoid will wrap their Raspberry Pis in tinfoil just in case.

    Also, I think this would be a great opportunity to get TurnKey in general working on the Raspberry Pi.

How deterministic builds fit into the puzzle

Deterministic builds are another way around the problem of having to trust third parties. As seen above, we can get very good security without them, but only by assuming the system we are using is already compromised and limiting how the poison can spread.

But for many applications that just isn't practical. Often you need a two way information flow (e.g., privacy applications) and there are too many ways for a malicious system to betray you.

Full-system deterministic builds are going to be essential for those usage scenarios. It's not a silver bullet, but unless everyone's build system is compromised, you can at least rely on the translation of source code to binary being faithful.

This improves security dramatically because:

  1. With deterministic builds you don't have to trust us not to insert backdoors into the binary version of BitKey.

    I trust myself not to do that but coming from a military security background I can easily empathize with anyone who doesn't.

  2. You also don't have to trust us to be capable of resisting attacks by Advanced Persistent Threats (AKA super hackers) that might figure out how to worm their way into our development systems.

    Personally, I believe it is unwise to expect any single individual or organization to resist truly determined attacks. If the NSA wants into your system they are going to get in.

The problems with deterministic builds are:

  • You still need to audit millions of lines of source code.
  • We don't have full-system deterministic builds yet. Nobody does. That's something a lot of people in the free software world are working on though.

22 July, 2014 05:27PM by Liraz Siri

hackergotchi for Ubuntu developers

Ubuntu developers

Mattia Migliorini: Going multilingual: welcome Italian!

Those of you who have followed this blog for some time know that its preferred language is English (a small number of posts in the early stages are an exception). Things are changing though.

It’s not that difficult to understand: if you go to it.deshack.net you can see this website in Italian. I’ve been thinking about making a big change to this little place on the web for a while, as I want it to become more than a simple blog. I am working on a new theme for business websites, but I’ll let you know when it’s time. In the meantime, don’t be amazed if you see some small changes here.


The main language will remain English. You will find all the Italian content on it.deshack.net, as said before. Old posts will be translated only if someone asks.

Now it’s time for me to ask you something: do you think this is an interesting change? Let me know with a comment!

22 July, 2014 05:21PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 22, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140722 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc6 and officially uploaded
to the archive. We (as in apw) have also completed a herculean config
review for Utopic and administered the appropriate changes. Please test
and let us know your results.
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~2 days away)
Thurs Aug 07 – 12.04.5 (~2 weeks away)
Thurs Aug 21 – Utopic Feature Freeze (~4 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html


    14.04.1 cycle: 29-Jun through 07-Aug
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.

Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

22 July, 2014 05:12PM

Lubuntu Blog: Box for Qt

Box's evolution continues ahead. Due to the Qt development, the main theme for Lubuntu must grow a bit more to cover more apps, devices and, of course, environments. Now it's Qt, the sub-system for the next Lubuntu desktop, but this will allow its use for KDE5 and Plasma Next. For now it's just a project, but the Dolphin file manager looks fine! Note: this is under heavy development, no

22 July, 2014 04:11PM by Rafael Laguna (noreply@blogger.com)

hackergotchi for Kali Linux

Kali Linux

Kali Linux 1.0.8 Released with EFI Boot Support


The long awaited Kali Linux USB EFI boot support feature has been added to our binary ISO builds, which has prompted this early Kali Linux 1.0.8 release. This new feature simplifies getting Kali installed and running on more recent hardware which requires EFI, as well as on various Apple MacBook Air and Retina models. Besides the addition of EFI support, there is a whole array of tool updates and fixes that have accumulated over the past couple of months.

As this new release focuses almost entirely on the EFI capable ISO image, Offensive Security won’t be releasing additional ARM or VMWare images with 1.0.8. As usual, you don’t need to re-download Kali if you’ve got it installed, and apt-get update && apt-get dist-upgrade should do the job.

Shameless Plug for Our Free Kali Dojo


Finally, this release comes a couple of weeks before the 2014 Black Hat and Defcon security conferences in Las Vegas. If you’re attending these conferences, don’t forget to join our one day, free Kali Linux Dojo workshop, where we will be teaching and demonstrating the awesome stuff you can do with the Kali Linux Distribution. It’s going to be intensive and hands on, so you’ll need to bring some stuff with you if you attend. We expect this to be one of our most engaging and interesting events ever!

Kali Linux, a Penetration Testing Platform

While keeping an up-to-date toolset is most definitely an important part of any security distribution, much of our resources are also spent on building, testing and fixing useful features for individuals in the Security and Forensics fields. Building on our ever-growing list of such features, we can now happily say that the Kali image is an EFI-bootable ISO hybrid image that supports Live USB Encrypted Persistence with LUKS Nuke support, out of the box. Yippie!


22 July, 2014 02:48PM by muts

hackergotchi for Ubuntu developers

Ubuntu developers

Rick Spencer: The Community Team

So, given Jono’s departure a few weeks back, I bet a lot of folks have been wondering about the Canonical Community Team. For a little background, the community team reports into the Ubuntu Engineering division of Canonical, which means that they all report into me. We have not been idle, and this post is to discuss a bit about the Community Team going forward.

What has Stayed the Same?

First, we have made some changes to the structure of the community team itself. However, one thing did not change. I kept the community team reporting directly into me, VP of Engineering, Ubuntu. I decided to do this so that there is a direct line to me for any community concerns that have been raised to anyone on the community team.

I had a call with the Community Council a couple of weeks ago to discuss the community team and get feedback about how it is functioning and how things could be improved going forward. I laid out the following for the team.

First, there were three key things that I think that I wanted the Community Team to continue to focus on:
  • Continue to create and run innovative programs to facilitate ever more community contributions and growing the community.
  • Continue to provide good advice to me and the rest of Canonical regarding how to be the best community members we can be, given our privileged positions of being paid to work within that community.
  • Continue to assist with outward communication from Canonical to the community regarding plans, project status, and changes to those.
The Community Council was very engaged in discussing how this all works and should work in the future, as well as other goals and responsibilities for the community team.

What Has Changed?

In setting up the team, I had some realizations. First, there is no longer just one “Community Manager”. When the project was young and Canonical was small, we had only one, and the team slowly grew. However, the team is now four people dedicated to just the Community Team, and there are others who spend almost all of their time working on Community Team projects.

Secondly, while individuals on the team had been hired to have specific roles in the community, every one of them had branched out to tackle new challenges as needed.

Thirdly, there is no longer just one “Community Spokesperson”. Everyone in Ubuntu Engineering can and should speak to/for Canonical and to/for the Ubuntu Community in the right contexts.
So, we made some small, but I think important changes to the Community Team.

First, we created the role of Community Team Manager. Notice the important inclusion of the word “Team”. This person’s job is not to “manage the community”, but rather to organize and lead the rest of the community team members. This includes things like project planning, HR responsibilities, strategic planning and everything else entailed in being a good line manager. After a rather competitive interview process, with some strong candidates, one person clearly rose to the top as the best candidate. So, I would like to formally introduce David Planella (lp, g+) as the Community Team Manager!

Second, I changed the other job titles from their rather specific titles to just “Community Manager” in order to reflect the reality that everyone on the community team is responsible for the whole community. So that means Michael Hall (lp, g+), Daniel Holbach (lp, g+), and Nicholas Skaggs (lp, g+) are all now “Community Managers”.

What's Next?

This is a very strong team, and a really good group of people. I know them each personally, and have a lot of confidence in each of them personally. Combined as a team, they are amazing. I am excited to see what comes next.

In light of these changes, the most common question I get is, “Who do I talk to if I have a question or concern?” The answer to that is “anyone.” It’s understandable if you feel the most comfortable talking to someone on the community team, so please feel free to find David, Michael, Daniel, or Nicholas online and ask their question. There are, of course, other stalwarts like Alan Pope (lp, g+) and Oliver Grawert (lp, g+) who seem to be always online :) By which, I mean to say that while the Community Managers are here to serve the Ubuntu Community, I hope that anyone in Ubuntu Engineering considers their role in the Ubuntu Community to include working with anyone else in the Ubuntu Community :)

Want to talk directly to the community team today? Easy, join their Ubuntu on Air Q&A Session at 15:00 UTC :)

Finally, please note that I love to be "interrupted" by questions from community members :) The best way to get in touch with me is on freenode, where I go by rickspencer3. Otherwise, I am also on g+, and of course there is this blog :)

22 July, 2014 12:56PM by Rick Spencer (noreply@blogger.com)

Lubuntu Blog: Box support for MATE

The Box theme support continues growing, covering more and more environments. Now we're celebrating that the MATE desktop environment, a GTK3 fork of the traditional Gnome2, will have its own Ubuntu flavour, named Ubuntu MATE Remix. Once tested, I noticed I missed something familiar, our beloved Lubuntu spirit on it. So here begins the (experimental) theme support. It'll be available to download

22 July, 2014 10:24AM by Rafael Laguna (noreply@blogger.com)

Martin Pitt: autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid

$ adt-run libpng @adt_sid
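As an illustration only (the file shown above was elided, and the option names and runner separator below are assumptions based on the flags described in this post and adt-run's usual invocation), such a response file might look like:

```
--output-dir=/tmp/adt-output
-B
---
schroot
sid
```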

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. have been documented in the manpages and the various READMEs. Enjoy!

22 July, 2014 06:16AM

hackergotchi for TurnKey Linux

TurnKey Linux

Creating a screencast on Linux with xvidcap: a free open source screencasting tool

Yesterday I wrote about my screencast production adventures. For a screencast demo I was working on I explored all the FLOSS screencasting tools I could find, including RecordMyDesktop and Istanbul. They all suck to varying degrees but xvidcap, though it doesn't look like much, definitely sucked the least.

If the binary package for your distribution crashes and burns try building xvidcap from the sources on Sourceforge (not the same as the *.orig tarball from the Ubuntu package for some reason). That usually produces something usable.

How to use xvidcap

For quick and dirty screencasts xvidcap can capture video and audio at the same time, but if you want to go the extra mile I recommend capturing the video and "narrating" it separately. You'll probably do a better job that way because you'll be able to focus on each step separately. Also, this way you can edit the video and cut out cruft, speed up boring parts, etc.

The quick and dirty method is to just shoot your screencast in one take, audio and all, encode to the proper format and you're done.

The higher quality alternative is to shoot the screencast in separate scenes, edit the best takes, stitch them together and then play back the video while recording in audacity.

If you're narrating separately you'll need to sync the audio and video. This is achieved by editing in audacity to line up the sound track (according to the timeline/frames) with the video for key scenes and then stitching the video and audio together (that can be done in Avidemux). Usually what this means is you delete just enough of the silence between the beginning of the recording and when you started playing the video and speaking into your microphone.

Configuring xvidcap

You can configure xvidcap from the command line or via the GUI (right click on the filename and select preferences). I've found a combination works best. I configure the most common parameters such as resolution in the command line (e.g., with a wrapper script) and the rest in the preferences dialog.
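For example, a wrapper script might pin the capture area and frame rate on the command line. The flag names below follow xvidcap's CLI as I remember it; treat them as assumptions and check `xvidcap --help` on your build:

```shell
#!/bin/sh
# Hypothetical xvidcap wrapper: fix the parameters changed most often
# here; everything else lives in the GUI preferences dialog.
GEOMETRY="1024x768+0+0"   # capture area: WxH+X+Y
FPS=15
OUTFILE="take-%04d.flv"

CMD="xvidcap --cap_geometry $GEOMETRY --fps $FPS --file $OUTFILE"
if command -v xvidcap >/dev/null 2>&1; then
    $CMD
else
    echo "would run: $CMD"
fi
```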

It took me a while to figure out which codecs to use for screen capture. xvidcap supports multiple container formats (e.g., AVI, MP4, FLV) and multiple codecs but I didn't really understand at the time what difference it made.

The obvious solution was to capture in the highest possible quality and worry about optimizing bitrates at a later stage. Unfortunately, I couldn't capture the screen at a high FPS with most of the lossless options (PNG, XWD, FFV1). I suspect this may be due to a bottleneck somewhere in my system.

After much experimentation I decided to capture with Flash Video Screen (flashsv) and use ffmpeg to re-encode it into a different lossless codec Avidemux could work with. Flash Video Screen is lossless and I could capture at a high FPS for most things (lower FPS with lots of on screen movement).
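The re-encode step looked roughly like this. Filenames are illustrative, and FFV1 is one lossless codec Avidemux handles; the exact codec chosen may have differed:

```shell
#!/bin/sh
# Re-encode a Flash Video Screen capture into FFV1 (lossless) so it
# can be edited in Avidemux. Filenames are illustrative.
IN="capture.flv"
OUT="capture-ffv1.avi"

CMD="ffmpeg -i $IN -c:v ffv1 -an $OUT"
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$IN" ]; then
    $CMD
else
    echo "would run: $CMD"
fi
```

The `-an` drops the (separately recorded) audio; the narration gets stitched back in after editing.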

A good alternative is to capture with MP4 video. It isn't lossless even at 100% quality but it's very good, and there are no issues with the capture speed.

Related blog posts:

22 July, 2014 05:20AM by Liraz Siri

My TTS sleep hack: a hi-tech cure for insomnia

For as long as I can remember I've had trouble falling asleep. I think there might be a genetic component to it because there seems to be a history of insomnia on my mother's side of the family. If you've never had this problem, consider yourself lucky. Even mild insomnia can royally screw with your quality of life. Actually I think that's an understatement considering the incompatibility insomnia can induce with the normal rhythms of society. Insomnia can royally screw your life.

Especially if like me you don't tolerate sleep deficits very well. Some of my friends seem fine getting 4-5 hours of sleep on a week night, racking up a mild sleep deficit they make up for over the weekend. Not me. It doesn't take a lot of sleep deprivation to turn me into a zombie. A shell of my usual well-rested self. Physically slow, tired, poorly motivated, somewhat stupid (and haunted by an insatiable hunger for BRaaiins...)

Cruelly enough, my insomnia seems to be self aware and often downright malicious. The more I want to sleep, the more I need it (e.g., early meeting next day) the more difficult sleeping becomes. It's a vicious self-reinforcing feedback loop.

I'm not taking this lying down!

Over the years I've tried various interventions, with varying levels of success. Supplements such as Melatonin and Valerian root. Meditation. Polyphasic sleep patterns. An alarm application on my smartphone that uses the acceleration sensor to track my sleep cycles and figure out when would be the best time to wake me up within a specified time window.

Back in my mandatory military service days I was even sent to one of those sleep laboratories where they hook you up to a surprisingly uncomfortable array of scientific-looking instruments and monitor what happens when you try to fall asleep cocooned by a tangle of restricting wires in a strange hospital bed. That was fun.

Interestingly enough one of the most effective countermeasures was to stop fighting evolution and go all natural. Go to sleep early. Avoid bright artificial lighting before bedtime. Rise with the dawn, letting natural sunlight keep my circadian clock in sync. Unfortunately, I'm a bit of a night owl and find myself most productive working when normal people in my timezone are fast asleep. It's an old habit I'm not ready to give up yet.

Lucky for me a few months ago I stumbled upon the first foolproof solution I've found to my sleeping woes.

Text to speech: the best thing since sliced bread

These days I do most of my reading on my phone using Moonreader+ in combination with the Ivona text-to-speech engine. The British Amy voice is my favorite. But my love affair with text to speech began a few years ago with the Kindle 3 (later renamed the Kindle Keyboard). Sadly, this was the last version of the eInk Kindles with Text to Speech as Amazon have inexplicably discontinued my favorite feature.

When I first got it, the first thing I was excited about was the novelty of the eInk display. Also, I had already heard that it was running Linux inside, and was enthusiastic about hacking into the little device.

It took me a while to figure out that what was really revolutionary about the Kindle was how I could use its text-to-speech engine to read books to me while I was doing various mindless chores (e.g., laundry, cooking, cleaning). Suddenly I had a lot more time for "reading". At first, "reading" this way was a bit uncomfortable and I found myself drifting off. Gradually though I got used to it and was comfortably following what was being read at the fastest Kindle speed.

What I didn't realize at the time was how much of an impact this cheap, unassuming little device would have on my life. Besides living up to its name and fully re-kindling my interest in book reading, that is. I probably read more books in the 6 months after getting my Kindle than in the preceding decade. Fact is, I was a bit of a book worm as a kid but found myself reading fewer and fewer full length books as a busy adult.

Thanks to the Kindle I cut all the fast food out of my information diet. I stopped watching television, and reduced the time spent reading online feeds to maybe a couple of hours a week. My attention span has been miraculously rehabilitated, after years of erosion by Internet quick-fixes. This just sort of happened, without any conscious planning or effort.

If the value I was getting out of my Kindle stopped there I would still be an incredibly passionate fan, but then by accident I discovered something extremely interesting and totally unexpected. Turns out I can use text to speech to induce sleep faster and more reliably than pretty much anything else I've ever tried so far. Think of it as the adult, hi-tech version of bedtime story telling.

Text to speech as a sleeping aid

I'm lying in bed, in the dark, with my eyes closed, listening to a text-to-speech engine whispering through the earphones at full speed. 5 minutes ago I had started reading at full volume, but I have been gradually reducing the volume until now I can hear just the faintest discernible whisper.

I concentrate to follow the voice but then after what seems like just a few minutes I realize I must have dozed off because I no longer understand. I've lost track of the plot, and the meaning of the prose seems to be mixing with dream logic.

In the dark I fumble for the media pause button on my bluetooth headset. The flow of whispers stops. I put the headset aside and slide back immediately to sleep. I'm not fully awake at this point so this part is easy.

When I wake up I usually have to go back at least 20 minutes worth of reading to reach a part of the text I can remember. If I'm in a philosophical mood, I wonder about the time gap. I never remember falling asleep. Do I not remember the text in the gap because I never experienced it in the first place? Or did I experience and comprehend the text and it just never reached long term memory for some reason? Or maybe a mix of both?

It doesn't really seem to matter what I'm reading, though I think I may have fallen asleep fastest reading Charles Darwin's Origin of Species. Darwin's verbose scientific prose was a bit tiring to follow even when fully awake. Hmmm...

22 July, 2014 05:05AM by Liraz Siri

Ubuntu developers

Andrew Pollock: [debian] Day 174: Kindergarten, startup stuff, tennis

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

22 July, 2014 01:23AM

July 21, 2014

The Fridge: Ubuntu Weekly Newsletter Issue 375

21 July, 2014 11:42PM

Jonathan Riddell: Barcelona, such a beautiful horizon

KDE Project:

When life gives you a sunny beach to live on, make a mojito and go for a swim. Since KDE has an office in Barcelona that all KDE developers are welcome to use, I decided to move to Barcelona until I get bored. So far there's an interesting language or two, hot weather to help my fragile head and water polo in the sky. Do drop by next time you're in town.

Plasma 5 Release Party Drinks

Also new poll for Plasma 5. What's your favourite feature?

21 July, 2014 07:22PM

Elizabeth K. Joseph: The Official Ubuntu Book, 8th Edition now available!

This past spring I had the great opportunity to work with Matthew Helmke, José Antonio Rey and Debra Williams of Pearson on the 8th edition of The Official Ubuntu Book.

Official Ubuntu Book, 8th Edition

In addition to the obvious task of updating content, one of our most important tasks was working to “future proof” the book more by doing rewrites in a way that would make sure the content of the book was going to be useful until the next Long Term Support release, in 2016. This meant a fair amount of content refactoring, less specifics when it came to members of teams and lots of goodies for folks looking to become power users of Unity.

Quoting the product page from Pearson:

The Official Ubuntu Book, Eighth Edition, has been extensively updated with a single goal: to make running today’s Ubuntu even more pleasant and productive for you. It’s the ideal one-stop knowledge source for Ubuntu novices, those upgrading from older versions or other Linux distributions, and anyone moving toward power-user status.

Its expert authors focus on what you need to know most about installation, applications, media, administration, software applications, and much more. You’ll discover powerful Unity desktop improvements that make Ubuntu even friendlier and more convenient. You’ll also connect with the amazing Ubuntu community and the incredible resources it offers you.

Huge thanks to all my collaborators on this project. It was a lot of fun to work with them and I already have plans to work with all three of them on other projects in the future.

So go pick up a copy! As my first published book, I’d be thrilled to sign it for you if you bring it to an event I’m at, upcoming events include:

And of course, monthly Ubuntu Hours and Debian Dinners in San Francisco.

21 July, 2014 04:21PM

Michael Hall: When is a fork not a fork?

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however, they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork, and the way it is done, will have an effect on whether and how the community around it will split.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code, which share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.

21 July, 2014 08:00AM

TurnKey Linux

Screencast production: Lessons learned from the making of my first screencast


The following post summarizes the lessons I learned from my first serious Linux screencasting attempt, which was also my first foray into the world of open source audio video editing.

The first thing you need to know about screencast production is that, like pretty much anything worth doing, doing it at a high level of quality is harder than it looks.

This is especially true if you are using only free tools. One of the main problems I faced is that audio video editing has relatively weak free software coverage. That isn't for lack of free software in this area. It's just that most of it is incomplete, extremely buggy crap.

My theory is that the reason for this sorry state of affairs is that video editing is a rather specialized difficult field with high barriers to entry that doesn't really appeal to a lot of hackers for some reason.

Fortunately, I did manage to find enough software that was good enough to get the job done.

My first screencast

Starting from the end, the result of my work was a 10MB (275 kbps) 1024x768 streaming video running about 5 minutes, encoded with H.264 and MP3. This public copy was generated in turn from a 1GB master encoded with lossless codecs FFV1 and WAV, which was in turn generated from a 4GB workprint.

Getting it short and maximizing the information density was a major challenge and increased the amount of work dramatically.

This kind of production can be a massively labor intensive process.

Behind every second published is about 200 seconds of work, and this is after you climb the learning curve. In other words, 1 minute of video can equal about 3 hours of work. Of course, if you're willing to compromise on quality you can skip most of the effort but unless you have some sort of innate genius for this stuff the result will probably suck.

Major challenges

  • Keeping it short: most screencasts should avoid going over 5 minutes. Short attention spans are the norm these days.

  • Acting: When you are "shooting" a screencast you are usually acting out a script. I'm guessing this still isn't remotely as hard as "real acting" where you have to remember your lines and express emotion but getting even simple acting right can still be surprisingly difficult. At least it was for me.

    Again, this is more difficult if your goal is to execute a complex series of actions fluidly and with no obvious mistakes. It's a sort of dance in that respect.

    Narrating is also difficult, especially with non-professional equipment. You have to write a clear expressive script, perform it in a friendly conversational tone, and pronounce all of the words right.

    Frankly it would have been impossible for me to do both at the level of quality I wanted at the same time, in realtime, but it's still a challenge even when you do the narration separately.

  • Unreliable tools: I experienced serious failures (frequent crashes, corrupt output files) in EVERY tool I used.

    • xvidcap can crash unpredictably during capture. Even when it doesn't crash capture may be corrupt.
    • audacity crashes during editing, especially if you open too many editing windows. It has a crash recovery feature but sometimes incorrectly deletes parts of your audio when it "recovers".
    • avidemux can go haywire and execute your edits incorrectly.

Walkthrough of the screencasting process

1) Setup

Install software: You first have to setup your toolset (audacity, xvidcap, ffmpeg, avidemux).

Create screencasting environment: I didn't want the screencast to show all the peculiarities of my work environment (e.g., filesystem contents, virtualbox machines, browser bookmarks and extensions, etc.) so I started fresh by creating a new demo user just for that.

2) Rehearsal

This is where you walk through the various things you would like to show while thinking about what you want to communicate.


  • Try to logically segment the action into easily defined scenes
  • Write an outline of the script

3) Shoot video

You shoot the video at the highest possible quality. Flash Video Screen is lossless but the captured frame rate is poor. MP4 at the highest quality settings has a slightly lower picture quality but the frame rate is much better.

You're not trying to get things perfect and with the correct timing because that's very difficult.

Instead you want to shoot in a way that allows you to redo bad parts and cut out mistakes later in editing without having to start over.

The trick is to remember the approximate mouse position at the beginning of a scene (e.g., middle of the screen) and if you want to start over just bring the mouse back to that location and redo it.

Pausing: Pause as much as you want if it helps you think things through. That can be easily edited out and it seems to work better than trying to rush through everything.

xvidcap complicates things with its tendency to crash unpredictably.

To get around this every so often I make note of the mouse position, stop capture, and click on the next button to start recapture in a new file (e.g., take-0002.avi instead of take-0001). I then return the mouse to the original position and when I stitch the files together I will edit out the connecting tissue so it looks like one smooth seamless video.
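On newer ffmpeg builds, the stitching itself can be sketched with the concat demuxer, assuming all takes were captured with identical codec settings (which same-session xvidcap captures are). The take filenames below are examples, and the join command is echoed so nothing runs until you're ready:

```shell
# Build a concat list from the numbered takes, then join them losslessly.
LIST=takes.txt
: > "$LIST"
for f in take-0001.avi take-0002.avi; do
    echo "file '$f'" >> "$LIST"
done
# Stream copy keeps the join lossless; drop the echo to actually run it:
echo ffmpeg -f concat -i "$LIST" -c copy stitched.avi
```

You'd still edit out the "connecting tissue" around each join point afterwards, as described above.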

4) Convert captured video to workprints

To edit the video in avidemux you'll want to use ffmpeg to convert the raw footage into a fast, lossless format with no key-frames such as FF HuffYuv (ffvhuff):

		ffmpeg -i path/to/file.avi -vcodec ffvhuff hugefile.avi

This allows you to edit with frame precision and save as many times as you want without losing any video quality in the process.

The trade-off is disk space, and at a 114Mbps bitrate (14MB/s) you'll need lots of it: 1 minute equals 860MB.

5) Edit the workprint

In a nutshell you use avidemux to:

  1. stitch the pieces together to create the illusion of one seamless video.
  2. cut out the bad parts / retakes but don't cut out pauses yet, they'll be useful during audio/video synchronization.

A big gotcha is that Avidemux doesn't provide an Undo and it's easy to make serious mistakes, so you'll want to save frequently. Avidemux allows you to save your edits in a "project" file (basically a script that records your actions), but sometimes this doesn't seem to work very well, so you'll want to save your progress by actually writing your edits out to a new video. That takes a while, but it's still much faster with FFVHUFF than with other formats (especially in "Copy" mode).

6) Script "narration"

This is the part where you play back a section of the video, pause it and then write down notes on useful things you would want to say. Rinse, repeat. Then you translate your notes into a script that doesn't sound too contrived.

At the beginning you may want to introduce yourself and provide a short overview of what you are going to show, to prime the expectations of your viewers, motivate them to see the entire video and make it easier for them to follow you.

Of course, there are many ways to narrate a screencast. The natural tendency is to grunt your way through it while thinking out loud, but I think it's best to make the most of the audio channel to add:

  • audio titles: a spoken summary of what you are doing or going to do. Of course, the user has eyes so you don't have to replicate the information, but it does help to set the context and helps them interpret what they are seeing.
  • provide useful commentary and insights that you think would help users "get it".

7) Record narration

Naturally, you'll want to start with a sound test. Especially if you're using amateurish equipment.

For best results, don't try to do this while watching the video, and forget about timing issues altogether. Just focus on reading the narration script in a natural, friendly tone.


  • Don't start over: If you make a mistake, or don't like how something sounds, don't worry about it; just pause, redo it, and continue. It's easy to edit out the missteps later.

  • Speak in a consistent tone and volume unless you want the listener to start noticing the characteristics of your voice; usually you just want them processing what you're saying without any distractions.

  • Pause and breathe:

    This may sound trivial but if you forget to breathe your voice won't sound as good and you will eventually run out of breath.

    Also, audio is easier to edit when there are pauses between sections because you can identify that in the graph of the waveform.

  • Try to do it in one take.

    Especially if you're using unprofessional equipment.

    I use headphones and every time I took them off and put them back on the positioning of the microphone was a bit different and my voice sounded different.

    Also at different times you'll likely be in a different mood and this too can affect how your voice sounds.

    BTW, I discovered this a bit late and so I think it's possible to make out the different sections in the video.

8) Video/audio syncing

How long it takes: doing this right is a significant amount of work. Once you get the hang of it, expect this to take 20-30 minutes per minute of synchronized video.

Order: you start at the beginning of the audio/video, and work your way gradually through the soundtrack and video track until you reach the end. Doing it in a different order can screw up your synchronization and you'll have to redo it or at least check that it still works.

Editing primitives: I managed to do all of my synchronization using only two basic primitives - removing and inserting pauses in the audio and video, in a way that synchronizes key audio landmarks (e.g., the start and end of an audio section or an important part in the middle) to what is happening at the same time in the video.

In both Avidemux and Audacity this is easily accomplished, though it is somewhat easier in Audacity because the timeline of audio is a bit easier to visualize (i.e., you can see where a section begins and ends without having to play through it).

Most of the time I just kept track of the common timeline counters (e.g., number of seconds since the video/audio began) to do this, but if you want to actually see and hear the result in progress you can export the soundtrack to WAV (takes a couple of seconds) and then tell avidemux to load the track from an external file.

I didn't do that very often though and in some cases it shows but usually I didn't need to.

9) Audio cleanup

I played with a lot of options (including the Audacity CleanSpeech chain) and noticed that heavy processing tends to diminish the quality of the speech, to the point that some parts can become difficult to understand.

In the end, I cleaned up the sound using only two basic Audacity filters:

  1. Noise removal
  2. Amplification
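A rough command-line approximation of the same two-step cleanup is possible with SoX instead of Audacity. This is a hedged sketch: the noise-sample file, the 0.2 reduction amount, and the -3dB normalization target are all illustrative, and the commands are echoed so you can review them before running anything:

```shell
# Build a noise profile from a silent stretch of the recording, subtract
# it from the narration, then normalize to -3dB. Values are examples.
PROFILE_CMD="sox noise-sample.wav -n noiseprof noise.prof"
CLEAN_CMD="sox narration.wav cleaned.wav noisered noise.prof 0.2 norm -3"
printf '%s\n' "$PROFILE_CMD" "$CLEAN_CMD"
```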

10) Encoding

Use Avidemux to encode, as configuring all the codec options is much easier. With ffmpeg, by contrast, I always got very high bitrates at a lower quality.

Encoded the master video in lossless FFV1 and WAV codecs. That comes out to 1GB (4:1 compression ratio).

Encoded the published video in H264 and MP3. Note that you will pay for lower bitrates with a longer encoding time.

H264 configuration options:

  • motion estimation

    • partition decision - 6B (very high)
    • method - uneven multi-hexagon
  • partition & frames

    • check all partition macroblocks and b-frame options
    • b-frame direct mode - auto
  • bitrate

    • depends highly on the nature of the content (e.g., cartoons / screencasts achieve much smaller bitrates for the same quality settings)

    • alternative methods:

      1. single pass - QQ (average):

        20 is very good; 30 is aggressive (no noticeable artifacts)

      2. two-pass - average bitrate (note that this is more of a recommendation than a hard setting; I think the first pass analyzes the variable bit rate for the various parts and the second pass sets the QQ to line up with your expectations)

Note that the final encoding is very CPU intensive. On my system, utilizing 2 cores, encoding with these settings runs at 20 fps.
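For comparison, here is a hedged sketch of a roughly equivalent single-pass encode with ffmpeg's libx264, where CRF plays a role similar to the QQ quantizer above. The CRF-for-QQ mapping and the filenames are my assumptions, not the exact settings used for the published video:

```shell
# Single-pass quality-targeted H.264 + MP3 encode of the lossless master.
# CRF 20 ~ "very good" per the quantizer guidance; raise toward 30 for
# smaller, more aggressive output. Echoed for review before running.
CRF=20
ENCODE_CMD="ffmpeg -i master.avi -c:v libx264 -crf $CRF -c:a libmp3lame -b:a 128k published.mp4"
echo "$ENCODE_CMD"
```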

11) Uploading

Blip.tv lets you upload pre-encoded video files directly, bypassing their transcoder. You'll need to repackage the AVI into an FLV file. At the time of writing, Avidemux didn't support H264 in FLV so you'll need to save as an AVI and then use ffmpeg to repackage the stream into an FLV:

		ffmpeg -i path/to/file.avi -vcodec copy file.flv

21 July, 2014 05:50AM by Liraz Siri

Ubuntu developers

Andrew Pollock: [debian] Day 173: Investigation for bug #749410 and fixing my VMs

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away, and I use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machine's disks and work with a new snapshot next time I want to do something.

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but a manual activation of the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.
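The manual activation from an initramfs shell looks something like the following sketch ("vg0" stands in for the actual volume group name, which isn't given above; the commands are echoed here since real activation needs root and the real VG):

```shell
# From the initramfs emergency shell: rescan for volume groups, activate
# the root VG, then exit the shell to let the boot continue.
VG=vg0
ACTIVATE_CMD="lvm vgchange -ay $VG"
echo "lvm vgscan"
echo "$ACTIVATE_CMD"
```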

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

21 July, 2014 01:23AM

July 20, 2014

Parsix developers

Currently generating experimental ISO images and slashing UEFI boot and installation bugs. Stay tuned!

20 July, 2014 09:14PM by Parsix GNU/Linux

Ubuntu developers

Paul Tagliamonte: Plymouth Bootsplashes

Why oh why are they so hard to write?

Even using the built in modules it is insanely hard to debug. Playing a bootsplash in X sucks and my machine boots too fast to test it on reboot.

Basically, euch. All I wanted was a hackers zebra on boot :(

20 July, 2014 09:02PM

July 19, 2014

Parsix developers

New security updates have been released for Parsix GNU/Linux 6.0 (Trev) and 7.0 (Nestor). Please see http://www.parsix.org/wiki/Security for details.

19 July, 2014 09:04PM by Parsix GNU/Linux

Parsix Nestor will be shipped with updated AMD (14.4.2) and Nvidia (331.79 and 304.121 legacy) graphics drivers.

19 July, 2014 08:39PM by Parsix GNU/Linux

There are new bug-fix updates available merged from Wheezy repositories. Update your Trev and Nestor systems to install them.

19 July, 2014 08:35PM by Parsix GNU/Linux

A brand new kernel based on Linux 3.14.12 is now available for Nestor. Upgrade your systems to install it.

19 July, 2014 08:34PM by Parsix GNU/Linux

Can't wait for Nestor TEST-1? Simply point your apt repos to Nestor and do a dist-upgrade. Everything is pretty stable now.

19 July, 2014 08:32PM by Parsix GNU/Linux

Parsix GNU/Linux 7.0 (Nestor) will support booting and installation on UEFI based environments. Live boot and installer systems are getting ready to support UEFI.

19 July, 2014 08:30PM by Parsix GNU/Linux

Ubuntu developers

Jo Shields: Transition tracker

Friday was my last day at Collabora, the awesome Open Source consultancy in Cambridge. I’d been there more than three years, and it was time for a change.

As luck would have it, that change came in the form of a job offer 3 months ago from my long-time friend in Open Source, Miguel de Icaza. Monday morning, I fly out to Xamarin’s main office in Boston, for just over a week of induction and face time with my new co-workers, as I take on the title of Release Engineer.

My job is to make sure Mono on Linux is a first-class citizen, rather than the best-effort it’s been since Xamarin was formed from the ashes of the Attachmate/Novell deal. I’m thrilled to work full-time on what I do already as community work – including making Mono great on Debian/Ubuntu – and hope to form new links with the packaging communities in other major distributions. And I’m delighted that Xamarin has chosen to put its money where its mouth is and fund continued Open Source development surrounding Mono.

If you’re in the Boston area next week or the week after, ping me via the usual methods!


19 July, 2014 07:35PM

Maemo developers

2014-07-15 Meeting Minutes

Meeting held on FreeNode, channel #maemo-meeting (logs)

Attending: Philippe Coval (rZr), Peter Leinchen (peterleinchen), Gido Griese (Win7Mac), Paul Healy (sixwheeledbeast), Ruediger Schiller (chem|st), Niel Nielsen (nieldk), Jussi Ohenoja (juiceme).

Absent: Joerg Reisenweber (DocScrutinizer05)

Summary of topics (ordered by discussion):
- Inaugural meeting of the new Maemo Council 2Q/2014
- Discussion on Council work media
- Comments by DocScrutinizer05

Topic (Inaugural meeting of the new Maemo Council 2Q/2014):

  • The meeting started at 20:01 UTC. The new council assembled, and juiceme welcomed the new members.
  • The election results can be seen on the voting page. There was discussion on the vote counting algorithm (STV) and the number of ballots cast in the election.
  • The council decided to continue the present practice of having weekly meetings on Tuesdays at 20:00 UTC. All council members live in the UTC, UTC+1, or UTC+2 zones.
  • There was some discussion of the roles in the Council. After discussion nieldk proposed peterleinchen as Chair & Secretary and juiceme seconded the motion. The vote was unanimous (4/4) and peterleinchen accepted the position.

Topic (Discussion on Council work media):

  • There was discussion on how to handle the tasks and documents during council work; the possibilities are to use piratepad or a wiki page on Maemo for collaboration. The advantage of piratepad would be its easy multi-editing capabilities; on the other hand, the wiki is more secure.
  • The council voted to use a wiki page as the work medium, but as the wiki cannot be made private, email and piratepad are kept as alternatives for in-progress/confidential work.
  • Sixwheeledbeast set up a wiki page for Council agenda.

Topic (Comments by DocScrutinizer05):

  • Joerg, who had been taken ill, came onto the IRC channel after the meeting was over, but he had the following comments:
    • The Council has only one set of rules which define what the council is and what the council does.
    • The Council is not subject to directives from any other body.
    • Council work is generally done in public, unless confidentiality is mandated by specific necessity. For confidential work there is the council members' mailing list.
    • Since Hildon Foundation has accepted the Maemo Council as its Council, the Maemo Council is also the Council of the new unified Maemo e.V. that is going to take over the responsibilities of Hildon Foundation.
Action Items:
  • N/A

19 July, 2014 01:58PM by Jussi Ohenoja (juice@swagman.org)

July 18, 2014


Ubuntu developers

Oli Warner: Two years without nicotine

So it's that time of year again: it's my ex-smoker-versary. Okay, I'll come up with a better name for next year, but for now you'll have to make do with my reflections on smoking and why it's really not that hard to quit, as well as a few silly numbers.

Thirty-six-score-and-ten days ago I stopped smoking.

  • I stopped picking them up.
  • I stopped buying them.
  • I stopped doing things that made me want to smoke.
  • I stopped cold turkey. No NRT, no e-cigs.

I just stopped and braced for the worst.
And I was expecting the worst.

It took me quitting to realise that it was probably that fear of unworldly cravings that kept me smoking for 10 years. When I got to 4 weeks without any nicotine I realised it hadn't been that bad at all... Anything that says otherwise is probably either trying to keep you smoking or trying to sell you something to do instead.

Quitting is easy, just stop smoking and you'll realise that

And this isn't a silly confidence trick. I'm not going to get all happy-clappy and woosah about this. Just stop smoking and you'll see that after a week you won't physically crave (the worst bit), after two or three you stop thinking about them, and after four weeks you're awesome...

Just don't start smoking again. A sober, smoke-free mind is jubilant you're not smoking, and under its sole influence you'll do anything to avoid clouds of smoke... But have a couple of drinks and you can very quickly find yourself drifting intimately close to smokers.

I've also heard from more than a couple of people who "tried to quit" but were still surrounded by cigarettes. Quitting does take willpower and few have enough to resist that "emergency packet" especially in the first couple of weeks. Chuck them all, avoid your triggers and make it easy on yourself.

You have to be vigilant. And consistent.
One cigarette is the end of the world. No, you cannot have a cigar.

Two years smoke-free in numbers

Now onto the fun stuff. There's a silly little tray application called QuitCount in the Ubuntu repos that I set up when I quit. It just keeps track of the number of days, an accumulated number of cigarettes (based on my rate of ~13 a day) and works out how much that would cost, as well as using some formula to work out how much less dead I'm going to be.

  • 9490 cigarettes have gone un-smoked
  • 94.9g of tar not in my lungs
  • An extra £3368.95 cluttering up my bank account (I wish), which is good because I also get:
  • An extra 66 days cluttering up the planet.

And I wasn't a heavy smoker. If you're on 20 or 40 a day, those numbers could be a whole lot higher if you quit today.
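Those figures hang together arithmetically. Here's a quick sanity check, assuming ~10 mg of tar per cigarette and the ~£0.355 per-cigarette price the totals imply (both are my assumptions, not QuitCount's documented constants):

```python
# Sanity-checking the QuitCount figures, assuming ~13 cigarettes a day,
# ~10 mg of tar each, and the per-cigarette price implied by the total.
days = 2 * 365            # two years smoke-free
cigarettes = days * 13    # un-smoked cigarettes
tar_grams = cigarettes * 10 / 1000   # 10 mg tar each, in grams
cost_gbp = cigarettes * 0.355        # implied price per cigarette

print(cigarettes, tar_grams, round(cost_gbp, 2))  # → 9490 94.9 3368.95
```

That implied price works out to about £7.10 for a pack of 20, which is roughly what they cost in the UK at the time.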

Photo credit: mendhak

18 July, 2014 07:47AM

Duncan McGreggor: Uncovering Inherent Structures in Organizations

Vladimir Levenshtein
This post should have a subtitle: "Using Team Analysis and Levenshtein Distance to Reveal said Structure." It's the first part of that subtitle that is the secret, though: being able to correctly analyze and classify individual teams. Without that, using something clever like Levenshtein distance isn't going to be very much help.

But that's coming in towards the end of the story. Let's start at the beginning.

What You're Going to See

This post is a bit long. Here are the sections I've divided it into:

  • What You're Going to See
  • Premise
  • Introducing ACME
  • Categorizing Teams
  • Category Example
  • Calculating the Levenshtein Distance of Teams
  • Sorting and Interpretation
  • Conclusion

However, you don't need to read the whole thing to obtain the main benefits. You can get the CliffsNotes version by reading the Premise, Categorizing Teams, Sorting and Interpretation, and the Conclusion.


Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.

Of the many issues faced by growing companies (or rapidly growing orgs within large companies), the structuring one can be most problematic: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"

The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.

The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of them and understand the ways in which one can classify such teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which these orgs are comprised.

Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, with bringing this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. Subsequently, the growth of your teams has been fast and, dare we say, exponential.

More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.

Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?

Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
  • How do these teams interact with other parts of the company? 
  • Who are the stakeholders in feature development? 
  • Which sorts of customers does each team primarily serve?
There are many more questions you could ask (some are implicit in the analysis data linked below), but this should give a taste.

ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
  • Digital Anvil Product Team
  • Giant Rubber Band App Team
  • Digital Iron Carrot Team
  • Jet Propelled Unicycle Service Team
  • Jet Propelled Pogo Stick Service Team
  • Ultimatum Dispatcher API Team
  • Virtual Rocket Powered Roller Skates Team
  • Operations (release management, deployments, production maintenance)
  • QA (testing infrastructure, CI/CD)
  • Community Team (documentation, examples, community engagement, meetups, etc.)

Early SW Dev team hacking the ENIAC

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)

Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counterexample might be "Team Size": you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.

Here are the categories which we will use for the ACME Software Development Group:
  • Language
  • Syntax
  • Platform
  • Implementation Focus
  • Supported OS
  • Deployment Type
  • Product?
  • Service?
  • License Type
  • Industry Segment
  • Stakeholders
  • Customer Type
  • Corporate Priority
Each category may be either single-valued or multi-valued. For instance, the categories ending in question marks will be booleans. In contrast, multiple languages might be used by the same team, so the "Language" category will sometimes have several entries.

Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)

In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.

(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)
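As a quick illustration (the post itself works in LFE; this is my equivalent sketch), the textbook dynamic-programming version of this distance looks like the following in Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions required to turn string a into string b."""
    # One-row dynamic programming: prev[j] holds the distance between
    # a[:i-1] and b[:j] from the previous outer iteration.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion from a
                curr[j - 1] + 1,           # insertion into a
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```

For the fixed-width binary category strings used below, this distance behaves as a measure of how many encoded characteristics separate two teams.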

Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
  • LFE - #b0000000001
  • Erlang - #b0000000010
  • Elixir - #b0000000100
  • Ruby - #b0000001000
  • Python - #b0000010000
  • Hy - #b0000100000
  • Clojure - #b0001000000
  • Java - #b0010000000
  • JavaScript - #b0100000000
  • CoffeeScript - #b1000000000
A team that used LFE, Hy, and Clojure would obtain its "Language" category value by XOR'ing the three supported languages, and would thus be #b0001100001. In LFE, that could be done by entering the following code into the REPL:

We could then compare this to a team that used just Hy and Clojure (#b0001100000), which has a Levenshtein distance of 1 with the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 with the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams with no languages shared in common). 
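The embedded REPL gists are not reproduced in this copy of the post; as a stand-in, here is my rough Python equivalent of the bit-flag encoding, using the flag values from the list above:

```python
from functools import reduce
from operator import xor

# Language flags mirroring the post's assignments, one bit per language.
LANGUAGES = {
    "LFE":          0b0000000001,
    "Erlang":       0b0000000010,
    "Elixir":       0b0000000100,
    "Ruby":         0b0000001000,
    "Python":       0b0000010000,
    "Hy":           0b0000100000,
    "Clojure":      0b0001000000,
    "Java":         0b0010000000,
    "JavaScript":   0b0100000000,
    "CoffeeScript": 0b1000000000,
}

def language_bits(langs):
    """XOR a team's language flags into one category value, rendered as a
    fixed-width binary string.  Since each flag is a distinct bit, XOR and
    OR give the same result here."""
    value = reduce(xor, (LANGUAGES[lang] for lang in langs), 0)
    return format(value, "010b")

print(language_bits(["LFE", "Hy", "Clojure"]))  # → 0001100001
print(language_bits(["Hy", "Clojure"]))         # → 0001100000
print(language_bits(["Ruby", "Elixir"]))        # → 0000001100
```

Feeding those last two strings to a Levenshtein function reproduces the distances of 1 and 5 quoted above.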

Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down of not only the languages used by each team, but lots of other categories, too. For easy reference, you've put a "legend" for the individual category binary values at the bottom of the linked spreadsheet.

In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of this) is what will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)

Partial view of the spreadsheet's first page.
(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL: $ lfetool new library ld; cd ld && make-shell
That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)

Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:

We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:

That result doesn't seem unreasonable, given that at a quick glance we can see both of these strings have many differences in their respective character positions.

It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:

Now we're ready to roll; let's try sorting the data based on a comparison with a one of the teams:

It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)

The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:

(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )
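The LFE gists for the data structures and the sorter aren't embedded in this copy of the post either; here is my sketch of the same distance-based sort idea in Python, using made-up stand-in bit strings (the real 50-bit values live in the linked spreadsheet):

```python
def levenshtein(a, b):
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

# Hypothetical stand-in data: team name -> combined category bit string.
teams = {
    "Digital Iron Carrot Team": "0001100001",
    "Community Team":           "0111011010",
    "Operations":               "0001101101",
}

def levenshtein_sort(control, data):
    """Pair every team with its distance to the control team's string,
    sorted nearest-first; the control itself comes back at distance 0."""
    return sorted((levenshtein(data[control], s), name)
                  for name, s in data.items())

for dist, name in levenshtein_sort("Digital Iron Carrot Team", teams):
    print(dist, name)
```

The human-readable sorter described next is just this, mapping team names to their bit strings before comparing and back again for display.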

Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:
  • it has stakeholders in Sales, Engineering, and Senior Leadership;
  • it has a "Corporate Priority" of "Business critical"; and
  • it has both internal and external customers.
Of all the products and services, it seems to be the biggest star. Let's try a sort now, using our new custom function -- inputting something that's human-readable: 

Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:

For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:
  • Lisp- and Python-heavy
  • Middleware running on BSD boxen
  • Mostly proprietary
  • Externally facing
  • Focus on apps and APIs
It would make sense to group these three together.

A sort (and thus grouping) by comparison to a critical team.
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.

Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.

The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.

Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest of all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This really should be in a group all its own so that structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.

The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 per team). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.

Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.

Let's find out!

As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.

A sort (and thus grouping) by comparison to a highly-connected team.
To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then re-sort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).

As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and a relatively low corporate impact with the rest of the teams. What do we see then?

This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.

A sort (and thus grouping) by comparison to a non-critical team.

If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.


Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.

If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (c.f. Complex Systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.

18 July, 2014 06:09AM by Duncan McGreggor (noreply@blogger.com)

Duncan McGreggor: Interview with Erlang Co-Creators

A few weeks back -- the week of the PyCon sprints, in fact -- was the San Francisco Erlang conference. This was a small conference (I haven't been to one so small since PyCon was at GW in the early 2000s), and absolutely charming as a result. There were some really nifty talks and a lot of fantastic hallway and ballroom conversations... not to mention Robert Virding's very sweet Raspberry Pi Erlang-powered wall-sensing Lego robot.

My first Erlang Factory, the event lasted for two fun-filled days and culminated with a stroll in the evening sun of San Francisco down to the Rackspace office, where we held a Meetup mini-conference (beer, food, and three more talks). Conversations lasted until well after 10pm, with the remaining die-hards making a trek through the nighttime streets of SOMA and the Financial District back to their respective abodes.

Before the close of the conference, however, we managed to sneak a ride (4 of us in a Mustang) to Scoble's studio and conduct an interview with Joe Armstrong and Robert Virding. We covered some of the basics in order to provide a gentle overview for folks who may not have been exposed to Erlang yet and are curious about what it has to offer our growing multi-core world. This ended up on the Rackspace blog as well as the Building 43 site (also on YouTube). We've got a couple of teams using Erlang at Rackspace; if you're interested, be sure to email Steve Pestorich and ask him what's available!

18 July, 2014 05:49AM by Duncan McGreggor (noreply@blogger.com)

July 17, 2014

The Fridge: Ubuntu 13.10 (Saucy Salamander) End of Life reached on July 17 2014

This is a follow-up to the End of Life warning sent last month to confirm that as of today (July 17, 2014), Ubuntu 13.10 is no longer supported. No more package updates will be accepted to 13.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 13.10 (Saucy Salamander) release almost 9 months ago, on October 17, 2013. This was the second release with our new 9 month support cycle and, as such, the support period is now nearing its end and Ubuntu 13.10 will reach end of life on Thursday, July 17th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 13.10.

The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS. Instructions and caveats for the upgrade may be found at:


Ubuntu 14.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:


Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 17 16:19:36 UTC 2014 by Adam Conrad

17 July, 2014 09:10PM

Ubuntu Podcast from the UK LoCo: S07E16 – The One with the Race Car Bed

We’re back with Season Seven, Episode Sixteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Battenberg cake in Studio L.

In this week’s show:

  • We interview David Hermann about his MiracleCast project…

  • We also discuss:

    • Getting a dashcam
    • Going to Bruges
    • Reading a Stephen King book (Pet Sematary) for the first time…
    • And something that we’ll never know now…
  • We share some Command Line Lurve: YouTube-Upload to upload videos to YouTube from the command line:

     $ youtube-upload --email=myemail@gmail.com --password=mypassword \
                 --title="A.S. Mutter" --description="$(< description.txt)" \
                 --category=Music --keywords="mutter, beethoven" anne_sophie_mutter.flv
  • And we read your feedback, including:

    • Simon’s link to Seafile

    Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

17 July, 2014 08:33PM

Jonathan Riddell: Barcelona Plasma and KDE Frameworks 5.0 Release party

KDE Project:

Barcelona Free Software Users & Hackers are having a Plasma and KDE Frameworks 5.0 release party mañana, see you there!

17 July, 2014 06:29PM

Sergio Meneses: Ubucon LatinAmerica Speakers! #1

Ubucon LatinAmerica speakers list is here! This is the first post about our Speakers.

1- Bhavani Shankar

He is coming from India and has an amazing background as an Ubuntu developer and a member of the Ubuntu LoCo Council.

He will talk about “Ubuntu Developing for Dummies” (Spanish: “Desarrollo de Ubuntu para dummies”). We will learn about all the components of Ubuntu, coding and building our own software, and much more!

You can find more information about him in his wikipage: https://wiki.ubuntu.com/BhavaniShankar
Information in Spanish: http://ubuntu-co.com/node/3233

2- Marcos Alvarez Costales

He is a Linux developer who works with the Ubuntu Spain community; he is the founder and developer of Gufw ( http://gufw.org/ ), Pylang, Folder Color, and the Weapp for Telegram.

His talks will be about Linux security and how to create your own web apps in Ubuntu.

You can find more information about him in his wikipage: https://wiki.ubuntu.com/costales
Information in Spanish: http://ubuntu-co.com/node/3230

3- Fernando Lanero

A teacher and the administrator of http://ubuntuleon.com , he has worked on migrating education centers to Ubuntu. His talk is called “Linux is Education, Linux is Science” (Spanish: “Linux es Educación, Linux es Ciencia”).

Information in Spanish: http://ubuntu-co.com/node/3231

4- Fernando García Amen

Information in Spanish: http://ubuntu-co.com/node/3231

5- Darwin Proaño Orellana

An IT engineer from Universidad del Azuay and President of CloudIT Ecuador, his talks will be about “free clouds” and “How to migrate from Windows safely”.

As you know, Ubucon LatinAmerica will run from August 14th to 16th.

You can find all the information about the UbuconLA in our website or the wikipage.

17 July, 2014 05:58PM

Lubuntu Blog: Lubuntu 13.10 support ends today

Official support for the Saucy Salamander ends today, 17 July. Canonical recommends upgrading your older Ubuntu versions, and so do we for the official Lubuntu flavour. Instructions about the upgrade process can be found here. Get yourself onto the Trusty wagon and re-start with an improved and more stable environment.

17 July, 2014 03:55PM by Rafael Laguna (noreply@blogger.com)

Vincent Untz: Stepping down as openSUSE Board Chairman

Two years ago, I got appointed as chairman of the openSUSE Board. I was very excited about this opportunity, especially as it allowed me to keep contributing to openSUSE, after having moved to work on the cloud a few months before. I remember how I wanted to find new ways to participate in the project, and this was just a fantastic match for this. I had been on the GNOME Foundation board for a long time, so I knew it would not be easy and always fun, but I also knew I would pretty much enjoy it. And I did.

Fast-forward to today: I'm still deeply caring about the project and I'm still excited about what we do in the openSUSE board. However, some happy event to come in a couple of months means that I'll have much less time to dedicate to openSUSE (and other projects). Therefore I decided a couple of months ago that I would step down before the end of the summer, after we'd have prepared the plan for the transition. Not an easy decision, but the right one, I feel.

And here we are now, with the official news out: I'm no longer the chairman :-) (See also this thread.) Of course I'll still stay around and contribute to openSUSE, no worries there! But as mentioned above, I'll have less time for that as offline life will be more "busy".

openSUSE Board Chairman at oSC14

Since I mentioned that we were working on a transition... First, knowing the current board, I have no doubt everything will be kept pushed in the right direction. But on top of that, my good friend Richard Brown has been appointed as the new chairman. Richard knows the project pretty well and he has been on the board for some time now, so is aware of everything that's going on. I've been able to watch his passion for the project, and that's why I'm 100% confident that he will rock!

17 July, 2014 02:40PM

Jonathan Riddell: Plasma 5.1 Kickoff

KDE Project:

We had a fun two hour meeting in #plasma yesterday to decide on the tasks for the next release. It's due out in October and there's plenty of missing features that need to be added before Plasma 5 is ready for the non-geek.

Previously Plasma has used wiki lists to manage the Todo list, but this is very old-school and clunky to manage. Luckily Ben the KDE Sysadmin had just finished saving some children from a burning building before he swooped in and set up todo.kde.org just in time for us to add some 30 Todo items.

There will also be a meeting next week to discuss Wayland action items, come along if you want to help Martin shape the future.

Plasma 5.1 Todo list

Meanwhile, former tabloid rag turned bastion of quality journalism OMGUbuntu said "KDE Plasma 5 Arrives with Fresh New Look, True Convergence". More nice comments: "I tested the neon iso yesterday and it looks absolutely stunning!".

I'd like to thank Scarlett for putting in loads of hours perfecting the KF5 and Plasma 5 packages and now Rohan is tidying KF5 up and uploading to the Ubuntu archive.

17 July, 2014 02:08PM


TurnKey Linux

The TurnKey blog: where do we go from here?

When we started out a few years ago the scope of posts was very limited. Only news announcements, once every few months. The idea was to keep the signal-to-noise ratio high. But then a couple of years later we added tags to the blog, and that changed everything, because it meant we could offer a just-the-news feed to those who wanted it while opening up the blog to a broader range of subjects. Gradually the scope of the blog expanded until it included pretty much anything interesting we came across in our TurnKey adventures.

Initially posts on the blog that weren't directly related to TurnKey bothered me somewhat, but then I realized so long as there is a decent overlap between the type of people who might be interested in a post and the type of people interested in TurnKey, we shouldn't be too worried about going wide.

After all, if someone is interested in just news announcements they can sign up for the newsletter or the TurnKey news feed and they won't miss out on any important project update.

Also, the way I've come to see it, the borders of TurnKey are fuzzy and ever expanding. TurnKey isn't really about the stuff we make. That's just a means to an end. A way of lowering the bar enough to empower as many people as possible to explore what GNU/Linux and free software have to offer. Not just in the technological sense but also the community around it. The developers, the users, the people that are the heart and soul of this strange and unintuitive phenomenon of gift culture.

Also, TurnKey doesn't and couldn't exist on its own. It's just one organism in a tightly-woven ecosystem where all the parts are dependent on all others.

The relationships with other communities are multi-dimensional, but here's one example branching:

TurnKey -> Debian -> Linux -> Free software -> Computers / networking -> Technology.

All of that stuff is related to TurnKey, and writing about any of it is likely to interest people who are also interested in TurnKey, and hence serves our purpose.

It all comes down to who you are writing for and why.

On one level, it would be preferable for stuff that goes on the TurnKey blog to be closely related to TurnKey, but mainly because that makes it more likely that our audience (e.g., regular blog subscribers) will be interested.

On another level, only a small amount of traffic to the blog actually comes from regular subscribers. Mostly, people reach blog posts when they're searching for something specific. The majority of these strangers immediately "bounce" elsewhere, but there's also a small minority that discover TurnKey serendipitously that way, which is good for the project.

17 July, 2014 05:45AM by Liraz Siri

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: Community Leadership Summit and OSCON Plans

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event kicks off this week on Thursday evening (17th July) with a pre-CLS gathering at the Doubletree Hotel at 7.30pm, and then we get started with the main event on Friday (18th July) and Saturday (19th July). For more details, see http://www.communityleadershipsummit.com/.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, Oracle, Mozilla, Ubuntu, and LinuxFund.

Also, be sure to join the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Speaking Events and Training at OSCON

I also have a busy OSCON schedule. Here is the summary:

Community Management Training

On Monday 21st July from 9am – 6pm in D135 I will be providing a full day of community management training at OSCON. This full day of training will include topics such as

  • The Core Mechanics Of Community
  • Planning Your Community
  • Building a Strategic Plan
  • Building Collaborative Workflow
  • Defining Community Governance
  • Marketing, Advocacy, Promotion, and Social Media
  • Measuring Your Community
  • Tracking and Measuring Community Management
  • Conflict Resolution

Office Hours

On Tues 22nd July at 10.40am in Expo Hall A I will be providing an Office Hours Meeting in which you can come and ask me about:

  • Building collaborative workflow and tooling
  • Conflict resolution and managing complex personalities
  • Building buzz and excitement around your community
  • Incentivized prizes and innovation
  • Hiring community managers
  • Anything else!

Dealing With Disrespect

Finally, on Wed 23rd July at 2.30pm in E144 I will be giving a presentation called Dealing With Disrespect that is based upon my free book of the same name for managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

I will also be available for discussions and meetings. Just drop me an email at jono@jonobacon.org if you want to meet.

I hope to see many of you in Portland this week!

17 July, 2014 12:44AM

July 16, 2014

Nicholas Skaggs: A new test runner approaches

The problem
How acceptance tests are packaged and run has morphed over time. When autopilot was originally conceived the largest user was the unity project and debian packaging was the norm. Now that autopilot has moved well beyond that simple view to support many types of applications running across different form factors, it was time to address the issue of how to run and package these high-level tests.

While helping develop testsuites for the core apps targeting ubuntu touch, it became increasingly difficult for developers to run their application's testsuites. This gave rise to further integration points inside qtcreator, enhancements to click and its manifest files, and tools like the phablet-tools suite and click-buddy. All of these tools operate well within the confines they are intended, but none truly meets the needs for test provisioning and execution.

A solution?
With these thoughts in mind I opened the floor for discussion a couple months ago detailing the need for a proper tool that could meet all of my needs, as well as those of the application developer, test author and CI folks. In a nutshell, a workflow to setup a device as well as properly manage dependencies and resolve them was needed.

Autopkg tests all the things
I'm happy to report that as of a couple weeks ago such a tool now exists in autopkgtest. If the name sounds familiar, that's because it is. Autopkgtest already runs all of our automated testing at the archive level. New package uploads are tested utilizing its toolset.

So what does this mean? Utilizing the format laid out by autopkgtest, you can now run your autopilot testsuite on a phablet device in a sane manner. If you have test dependencies, they can be defined and added to the click manifest as specified. If you don't have any test dependencies, then you can run your testsuite today without any modifications to the click manifest.

Yes, but what does this really mean?
This means you can now run a testsuite with adt-run in a similar manner to how debian packages are tested. The runner will setup the device, copy the tests, resolve any dependencies, run them, and report the results back to you.

Some disclaimers
Support for running tests this way is still new. If you do find a bug, please file it!

To use the tool first install autopkgtest. If you are running trusty, the version in the archive is old. For now download the utopic deb file and install it manually. A proper backport still needs to be done.

Also as of this writing, I must caution you that you may run into this bug. If the application fails to download dependencies (you see 404 errors during setup), update your device to the latest image and try again. Note, the latest devel image might be too old if a new image hasn't been promoted in a long time.

I want to see it!
Go ahead, give it a whirl with the calendar application (or your favorite core app). Plug in a device, then run the following on your pc.

bzr branch lp:ubuntu-calendar-app
adt-run ubuntu-calendar-app --click=com.ubuntu.calendar --- ssh -s /usr/share/autopkgtest/ssh-setup/adb

Autopkgtest will give you some output along the way about what is happening. The tests will be copied, and since --click= was specified, the runner will use the click from the device, install the click in our temporary environment, and read the click manifest file for dependencies and install those too. Finally, the tests will be executed with the results returned to you.

Feedback please!
Please try running your autopilot testsuites this way and give feedback! Feel free to contact myself, the upstream authors (thanks Martin Pitt for adding support for this!), or simply file a bug. If you run into trouble, utilize the -d and the --shell switches to get more insight into what is happening while running.

16 July, 2014 10:10PM by Nicholas Skaggs (noreply@blogger.com)

Kubuntu Wire: Plasma 5 in the News

Plasma 5 was released yesterday and the internet is ablaze with praise and delight at the results.

Slashdot just posted their story KDE Releases Plasma 5 and the first comment is “Thank our KDE developers for their hard work. I’m really impressed by KDE and have used it a lot over the years.“  You’re welcome Slashdot reader.

They point to themukt’s Details Review of Plasma 5 which rightly says “Don’t be fooled, a lot of work goes down there“.

The science fictional name reflects the futuristic technology which is already ahead of its time (companies like Apple, Microsoft, Google and Canonical are using ideas conceptualized or developed by the KDE community).

With the release of Plasma 5, the community has once again shown that free software code developed without any dictator or president or prime-minister or chancellor can be one of the best software code.

LWN has a short story on KDE Plasma 5.0 which is as usual the place to go for detailed comments including several from KDE developers.

ZDNet’s article KDE Plasma 5 Linux desktop arrives says “I found this new KDE Plasma 5 to be a good, solid desktop” and concludes “I expect most, if not all, KDE users to really enjoy this new release. Have fun!” And indeed we do have lots of fun.

And Techage gets nomenclature wrong in KDE 5 is Here: Introducing a Cleaner Frontend & An Overhauled Backend and says “On behalf of KDE fans everywhere, thank you, KDE dev team” aah, it’s nice to be thanked :)

Web UPD8 has How To Install KDE Plasma 5 In Kubuntu 14.10 Or 14.04 a useful guide to setting it up which even covers removing it when you want to go back to the old stuff.  Give it a try!


16 July, 2014 03:14PM

hackergotchi for Tanglu developers

Tanglu developers

AppStream 0.7 specification and library released

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software-centers to apply smart filtering on applications to highlight the ones which are available in the user's native language.
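As a rough sketch of how this could look in a distribution's collection XML (the component ID and percentage values here are invented for illustration):

```xml
<component type="desktop">
  <id>gnome-software.desktop</id>
  <!-- translation status per locale; percentage is the completion level -->
  <languages>
    <lang percentage="96">de</lang>
    <lang percentage="83">fr</lang>
  </languages>
</component>
```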

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.
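A minimal sketch of an addon component, assuming the <extends/> tag is what references the main application it plugs into (the IDs and strings below are illustrative, not taken from a real metadata file):

```xml
<component type="addon">
  <id>gedit-code-assistance</id>
  <!-- the application this addon is designed for (illustrative IDs) -->
  <extends>gedit.desktop</extends>
  <name>Code Assistance</name>
  <summary>Code assistance for C, C++ and Objective-C</summary>
</component>
```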

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific dbus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)
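The metadata answering such a query might look roughly like this (a sketch; the interface name is only an example mirroring the command above):

```xml
<provides>
  <!-- a D-Bus system service exposed by this component -->
  <dbus type="system">org.freedesktop.PackageKit</dbus>
</provides>
```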

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.
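Both tags carry plain values inside the component element; a minimal sketch (the email address is a placeholder, the developer name is one of the examples from the text):

```xml
<developer_name>The KDE Community</developer_name>
<update_contact>packaging@example.org</update_contact>
```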

Timestamps in <release/> tags must now be UNIX epochs, YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).
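For example, a release entry using a UNIX epoch timestamp rather than YYYYMMDD (version number and date are invented; 1405468800 corresponds to 2014-07-16 00:00 UTC):

```xml
<releases>
  <!-- timestamp is a UNIX epoch, not a YYYYMMDD date -->
  <release version="0.7.0" timestamp="1405468800"/>
</releases>
```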

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).
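As a sketch, a component whose contents are split across two packages could simply repeat the tag (hypothetical package names):

```xml
<pkgname>libfoo</pkgname>
<pkgname>libfoo-data</pkgname>
```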

As a small side note: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop-applications and addons; more will follow in future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: we switched from LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 license family of GPL licenses – I like it for its tivoization protection, its explicit compatibility with some important other licenses and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, an LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be usable by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.


A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it with the changes for now! Thanks to everyone who helped making 0.7 ready, being it feedback, contributions to the documentation, translation or coding. You can get the release tarballs at Freedesktop. Have fun!

16 July, 2014 03:03PM by Matthias

hackergotchi for Ubuntu developers

Ubuntu developers

Jorge Castro: Brightbox now offering official Ubuntu images

Today we announced that Brightbox has joined the Ubuntu Certified Cloud programme, and is our first European Cloud Partner.

If you’re wondering where you’ve heard the name Brightbox before, it might be because they contribute updated Ruby packages for all Ubuntu users.

Official images that stay up to date and updated Ruby stacks for developers: it’s pretty much win-win for Ubuntu users!

16 July, 2014 02:57PM

Raphaël Hertzog: Spotify migrates 5000 servers from Debian to Ubuntu

Or yet another reason why it’s really important that we succeed with Debian LTS. Last year we heard of Dreamhost switching to Ubuntu because they can maintain a stable Ubuntu release for longer than a Debian stable release (and this despite the fact that Ubuntu only supports software in its main section, which misses a lot of popular software).

Spotify Logo

A few days ago, we just learned that Spotify took a similar decision:

A while back we decided to move onto Ubuntu for our backend server deployment. The main reasons for this was a predictable release cycle and long term support by upstream (this decision was made before the announcement that the Debian project commits to long term support as well.) With the release of the Ubuntu 14.04 LTS we are now in the process of migrating our ~5000 servers to that distribution.

This is just a supplementary proof that we have to provide long term support for Debian releases if we want to stay relevant in big deployments.

But the task is daunting and it’s difficult to find volunteers to do the job. That’s why I believe that our best answer is to get companies to contribute financially to Debian LTS.

We managed to convince a handful of companies already, and July is the first month where paid contributors have joined the effort for a modest participation of 21 work hours (watch out for Thorsten Alteholz and Holger Levsen on debian-lts and debian-lts-announce). But we need to multiply this figure by 5 or 6 at least to do a proper job of maintaining Debian 6.

So grab the subscription form and have a chat with your management. It’s time to convince your company to join the initiative. Don’t hesitate to get in touch if you have questions or if you prefer that I contact a representative of your company. Thank you!


16 July, 2014 08:07AM

hackergotchi for Blankon developers

Blankon developers

za: Donations for the Indonesian Python Community

Finally, after all this time, the Indonesian Python community now accepts donations!

The idea had been floated for a long time but could only be put into action recently. Wow, it turns out the road from idea to execution is a long one.

The donations still go through a bank account in a personal name, not the organization's name. As far as I know, to open an account in an organization's name the organization must be a legal entity. And … the road to becoming a legal entity is also still long.

One community organization in Indonesia that I know is already a legal entity is Wikimedia Indonesia. I am a member of Wikimedia Indonesia myself, but unfortunately I no longer have the energy to be active in it. I would love to attend, at least the RUA (annual general meeting), but there is simply no time left :|

Actually, "donations" have been happening all along: companies providing venues (and food!) for meetups, speakers willing to share their knowledge, and of course everyone who shows up and gives their time. At the upcoming August 2014 meetup, friends are even planning a code sprint for the members project.

Hopefully this donation step can be one of a thousand steps for F/OSS community contribution in Indonesia.

16 July, 2014 07:49AM

hackergotchi for TurnKey Linux

TurnKey Linux

Introducing BitKey - A secure Bitcoin live USB/CD solution built with TKLDev

I'd like to announce a side project we've been working on called BitKey. The idea was to see if we could use the TurnKey development tools to create a self-contained read-only CD/USB stick with everything you need to perform highly secure air-gapped Bitcoin transactions.

bitkey screenshot



Liraz and I usually have our hands full developing TurnKey, but we've been super enthusiastic fans of Bitcoin from early on. After going to our first local Bitcoin meetup we discovered the elephant in the room was that there was no easy way to perform Bitcoin transactions with adequate security and by that I mean that your wallet's private keys live on an air-gapped system physically disconnected from the Internet.

People who didn't know enough to be paranoid were making themselves easy targets for Bitcoin-stealing malware, browser man-in-the-middle attacks and a whole zoo of attacks that were old school a decade ago. Meanwhile, the more cautious, security-minded folks seemed to be reinventing the wheel, coming up with various cruel and unusual ad-hoc solutions such as booting from a live Ubuntu CD offline and pointing their browser at a copy of bitaddress to create a simple paper wallet.

We realized we could come up with something better, that we would want to use ourselves, and that others might be interested in as well.

How does BitKey relate to TurnKey?

Well, it does and it doesn't. Necessity is the mother of invention and BitKey started out life as just another itch which we happened to have the means (TKLDev) of scratching. Since it doesn't fit the mold we're not sure yet whether it makes sense for this to be an official part of TurnKey or its own thing.

For now, BitKey is a side project that leverages TurnKey's open source build infrastructure - but we thought that its existence and its usage of TKLDev might make for an interesting post. 

The project has its own website: bitkey.io. You can find the source on GitHub. Check it out and tell us what you think.

Update Jul 22 2014: A discussion on Reddit prompted me to write a blog post explaining how to use BitKey to perform secure Bitcoin transactions without needing to trust BitKey not to be compromised:

16 July, 2014 05:00AM by Alon Swartz

July 15, 2014

hackergotchi for Maemo developers

Maemo developers

Updates on the Hildon Foundation

Discussion started last year about moving the US-based Hildon Foundation non-profit operations to Europe. After a positive reception to the initial proposal, some further planning by the community, and a lot of foot work, work is progressing on setting up a German e.V. (which was selected due to limited options available in Europe).
The move serves several purposes:

  • It gets us out from under the cost and regulatory burdens of operating in the US. The US is expensive and difficult to operate in for small non-profits: both the US Federal Government and the State of Pennsylvania (where the Hildon Foundation is registered) require annual paperwork on the entity's fiscal operations, which is expensive and time-consuming for an all-volunteer operation funded by donations.
  • It brings the major operations of maemo.org all within a single regulatory environment. The operations of maemo.org are almost exclusively hosted at IPHH in Germany, so bringing the organizational stewardship into the Euro-zone reduces the number of regulatory issues which must be understood and dealt with by the foundation.
  • Finally, it eliminates the large overhead involved with international money transfers. The hardware for maemo.org is hosted at IPHH (many thanks to them) in Germany, and with the billing currency in Euros, each transfer of the annual hosting-fee costs about $40 instead of the few cents it would cost wiring it from within the SEPA-Zone. Payments for hardware purchases, and other invoices are almost all going to entities in Europe, so eliminating the currency exchange overhead will significantly reduce costs for the continued operation of maemo.org.

The Hildon Foundation assets will need to be transferred to the new e.V. once its legal registration is completed. The Hildon Foundation will then be dissolved. This is slightly complicated by the aforementioned regulatory burdens in the US, but will hopefully be finalized by the end of the third quarter of this year. In terms of obligations, responsibilities and relationship to the community, this move has no effect: it is purely a technical measure to allow the Foundation to better manage the community’s assets.


15 July, 2014 10:37PM by Hildon Foundation (board@hildonfoundation.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: Plasma 5 is Here! All Ready to Eat Your Babies

KDE Project:

A year and a half ago Qt 5 was released giving KDE the opportunity and excuse to do the sort of tidying up that software always needs every few years. We decided that, like Qt, we weren't going for major rewrites of the world as we did for KDE 4. Rather we'd modularise, update and simplify. Last week I clicked the publish button on the story for KDE Frameworks 5, the refresh of kdelibs. Interesting for developers. Today I clicked the publish button on the story of the first major piece of software to use KDE Frameworks, Plasma 5.

Plasma is KDE's desktop. It's also the tablet interface and media centre, but those ports are still a work in progress. The basic layout of the desktop hasn't changed; we know you don't want to switch to a new workflow for no good reason. But it is cleaner and more slick. It's also got plenty of bugs in it, so this release won't be the default in Kubuntu, but we will make a separate image for you to try it out. We're not putting it in the Ubuntu archive yet for the same reason, but you can try it out if you are brave.

Three options to try it out:

1) On Kubuntu, Project Neon is available as PPAs which offer frequently updated development snapshots of KDE Frameworks. Packages will be installed to /opt/project-neon5, co-install with your normal environment, and install on 14.04.

sudo apt-add-repository ppa:neon/kf5
sudo apt update
sudo apt install project-neon5-session project-neon5-utils project-neon5-konsole

Log out and in again
2) Releases of KDE Frameworks 5 and Plasma 5 are being packaged in the next PPA. These will replace your Plasma 4 install, and they install on the Utopic development version.

sudo apt-add-repository ppa:kubuntu-ppa/next
sudo apt-add-repository ppa:ci-train-ppa-service/landing-005
sudo apt update
sudo apt install kubuntu-plasma5-desktop
sudo apt full-upgrade

Log out and in again
3) Finally the Neon 5 Live image, updated every Friday with latest source from Git to run a full system from a USB disk.

Good luck! Let us know how you get on using #PlasmaByKDE on Twitter or posting to Kubuntu's G+ or Facebook pages.

15 July, 2014 08:32PM

hackergotchi for Tails


Call for testing: 1.1~rc1

You can help Tails! The first release candidate for the upcoming version 1.1 is out. Please test it and see if it works for you.

How to test Tails 1.1~rc1?

  1. Keep in mind that this is a test image. We have made sure that it is not broken in an obvious way, but it might still contain undiscovered issues.

  2. Download the ISO image and its signature:

    Tails 1.1~rc1 ISO image

    Tails 1.1~rc1 signature

    Note that there is no automatic upgrade targeting this release!

  3. Verify the ISO image.

  4. Have a look at the list of known issues of this release and the list of longstanding known issues.

  5. Test wildly!

If you find anything that is not working as it should, please report to us! Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

What's new since 1.1~beta1?

Notable changes since Tails 1.1~beta1 include:

  • Security fixes

    • Don't allow the desktop user to pass arguments to tails-upgrade-frontend (ticket #7410).
    • Make persistent file permissions safer (ticket #7443).
    • Set strict permissions on /home/amnesia (ticket #7463).
    • Disable FoxyProxy's proxy:// protocol handler (ticket #7479).
  • Bug fixes

    • Use pinentry as the GnuPG agent, as we do on Squeeze (ticket #7330). This is needed to support OpenPGP smartcards.
    • Cleanup some packages that were installed by mistake.
    • Fix emergency shutdown when removing the boot device before login (ticket #7333).
    • Resume support of persistent volumes created with Tails 1.0.1 and earlier (ticket #7343).
    • Revert back to browsing the offline documentation using Iceweasel instead of Yelp (ticket #7390, ticket #7285).
    • Automatically transition NetworkManager persistence setting when upgrading from Squeeze to Wheezy (ticket #7338). Note: the data is not migrated.
    • Fix the Unsafe Web Browser startup in Windows camouflage mode (ticket #7329).
    • Make it possible to close error messages displayed by the persistent volume assistant (ticket #7119).
    • Fix some file associations, with a backport of shared-mime-info 1.3 (ticket #7079).
  • Minor improvements

    • Various improvements to the Windows 8 camouflage.
    • Fix "Upgrade from ISO" functionality when run from a Tails system that ships a different version of syslinux than the one in the ISO (ticket #7345).
    • Ensure that the MBR matches the syslinux version used by the Tails release it is supposed to boot.
    • Help Universal USB Installer support Tails again, by including syslinux.exe for Windows in the ISO filesystem (ticket #7425).
    • Improve the Tails Installer user interface a bit.
    • Enable double-clicking to pick entries in the language or keyboard layout lists in Tails Greeter.

See the online Changelog for technical details.

Known issues in 1.1~rc1

  • Upgrading from ISO, from Tails 1.1~beta1, Tails 1.0.1 or earlier, is a bit more complicated than usual. Either follow the instructions to upgrade from ISO, or burn a DVD, start Tails from it, and use "Clone and Upgrade".

  • A persistent volume created with Tails 1.1~beta1 cannot be used with Tails 1.1~rc1 or later. Worse, trying this may freeze Tails Greeter.

  • Does not start in some virtualization environments, such as QEMU 0.11.1 and VirtualBox 4.2. This can be corrected by upgrading to QEMU 1.0 or VirtualBox 4.3, or newer (ticket #7232).

  • The web browser's JavaScript performance may be severely degraded (ticket #7127). Please let us know if you are experiencing this to a level where it is problematic.

How to upgrade from ISO?

These steps allow you to upgrade a device installed with Tails Installer from Tails 1.0.1, Tails 1.1~beta1 or earlier, to Tails 1.1~rc1.

  1. Start Tails from another DVD, USB stick, or SD card than the device that you want to upgrade.

  2. Set an administration password.

  3. Run this command in a Root Terminal to install the latest version of Tails Installer:

    echo "deb http://deb.tails.boum.org/ 1.1-rc1 main" \
        > /etc/apt/sources.list.d/tails-upgrade.list && \
        apt-get update && \
        apt-get install liveusb-creator
  4. Follow the usual instructions to upgrade from ISO, skipping the first step.

15 July, 2014 04:00PM