July 24, 2014

Craig Small

PHP uniqid() not always a unique ID

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that running a ping program requires root access to a raw socket, and doing that is far too difficult and scary in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups it was also fine. For larger setups, it was not fine at all: random failed interfaces and, most bizarrely of all, a file that seemed to disappear. The program checked that the file existed and then ran stat in a loop to see if data was there. The existence check passed, but the stat said file not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the filename IDs of the temporary files. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the current time, to microsecond resolution. Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, so does the chance that two child processes start the reachability poll in the same microsecond and get the same uniqid. That’s why the problem happened, but not all the time.
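
A quick way to see the collision is to fire off a bunch of PHP processes at once and check for duplicated IDs (a sketch, assuming the PHP CLI is installed; any output at all is a collision, and you may need more than 20 processes before two of them hit the same microsecond):

$ for i in $(seq 1 20); do php -r 'echo uniqid(), "\n";' & done | sort | uniq -d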

The stat error was another symptom of this bug, what would happen was:

  • Child 1 starts the poll, temp filename abc123
  • Child 2 starts the poll in the same microsecond, temp filename is also abc123
  • Child 1 and child 2’s wait pollers start, see that the temp file exists and go into a loop of stat-and-wait until there is a result
  • Child 1 finishes, grabs the details, deletes the temporary file
  • Child 2 loops, tries to run stat but finds no file

Who finishes first depends entirely on how quickly the fping returns, and that depends on how quickly the remote host responds to pings, so it’s effectively random.

A minor patch fixes it: use tempnam() instead of uniqid(), with the interface ID added into the mix for good measure (no two children will poll the same interface; the parent’s scheduler makes sure of that). The initial response is that it is looking good.
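
The safer pattern looks something like this (a sketch rather than the actual JFFNMS patch; 42 stands in for the interface ID, and tempnam() both creates the file atomically and returns a name guaranteed not to clash on that filesystem):

$ php -r 'echo tempnam(sys_get_temp_dir(), "jffnms-fping-42-"), "\n";'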

24 July, 2014 12:17PM by Craig Small

Martin Pitt

vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
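    " <Leader>n inserts a beamer \note, <Leader>b builds a beamer PDF,
    " <Leader>l a plain LaTeX PDF, and <Leader>v shows the result in evince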
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf >/dev/null 2>&1 &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

24 July, 2014 09:38AM by pitti

Matthew Palmer

First Step with Clojure: Terror

$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP1, with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net2 pseudo-CDN3, which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
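
To illustrate, verifying the artifact lein fetched above would look something like this (assuming you have somehow obtained and imported the signer’s public key; without it gpg can only tell you that a signature exists, not whether to trust it):

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar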

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.


  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

24 July, 2014 12:30AM by Matt Palmer (mpalmer@hezmatt.org)

July 23, 2014

Russ Allbery

WebAuth 4.6.1

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.
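
In Apache configuration terms that is one new directive for mod_webkdc, along these lines (the cache location here is a made-up example; see the WebAuth documentation for the exact form):

WebKdcFastArmorCache /var/lib/webkdc/armor_cache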

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

23 July, 2014 10:59PM

Lior Kaplan

Testing PHPNG on Debian/Ubuntu

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we’ve started to provide binaries for it, although it’s still a branch on top of PHP’s master branch. See more details about PHPNG in Zeev Suraski’s blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with stuff more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this can also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release that comes after PHP 5.6.


Filed under: Debian GNU/Linux, PHP

23 July, 2014 09:01PM by Kaplan

Petter Reinholdtsen

98.6 percent done with the Norwegian draft translation of Free Culture

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally finished translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proof reading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects over the last year, and the progress graph of the translation shows this very well.

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

23 July, 2014 08:40PM

Michael Prokop

Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, jey.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend to have network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

23 July, 2014 08:16PM by mika

Tanguy Ortolo

GNU/Linux graphic sessions: suspending your computer

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
            --dest='org.freedesktop.UPower' \
            /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with
signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
                        /usr/sbin/pm-suspend-hybrid, \
                        /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
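
For example, with the sudo rule above in place, a minimal ~/.xbindkeysrc entry bound to a laptop's sleep key could look like this (XF86Sleep is just an example; xbindkeys -k will tell you what your key is called):

"sudo pm-suspend"
    XF86Sleep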

23 July, 2014 12:45PM by Tanguy

Francesca Ciceri

Adventures in Mozillaland #3

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put in comments basic information on how to better debug their component/product, but trust me: this will make you happy in the long run.
A wiki page with basic information on how to debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs and what information to ask of the reporter to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

23 July, 2014 11:04AM

Steinar H. Gunderson

The sad state of Linux Wi-Fi

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer “this is just crap”.

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

  • Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz—this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
  • Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
  • Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
  • Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With now a billion mobile devices running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.

Augh.

23 July, 2014 10:45AM

Andrew Pollock

[tech] Going solar

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

23 July, 2014 05:36AM

Matthew Palmer

Per-repo update hooks with gitolite

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured using configuration stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.
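
Configuring a repo to use some of those chained hooks looks roughly like this (a sketch from the gitolite 3.6 docs as I remember them; style-check and ci-push are hypothetical hook scripts installed where your rc file's LOCAL_CODE setting points):

repo widgets
    RW+                         =   alice
    option hook.post-receive    =   style-check ci-push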

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!

23 July, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

Jonathan McCrohan

Git remote helpers

If you follow upstream Git development closely, you may have noticed that the Mercurial and Bazaar remote helpers (use git to interact with hg and bzr repos) no longer live in the main Git tree. They have been split out into their own repositories, here and here.

git-remote-bzr had been packaged (as git-bzr) for Debian since March 2013, but was removed in May 2014 when the remote helpers were removed upstream. There had been a wishlist bug report open since Mar 2013 to get git-remote-hg packaged, and I had submitted a patch, but it was never applied.

Splitting out of these remote helpers upstream has allowed Vagrant Cascadian and myself to pick up these packages and both are now available in Debian.

apt-get install git-remote-hg git-remote-bzr
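
Once installed, the helpers let git talk to such repositories directly; you just prefix the URL with the helper's scheme (the URLs here are placeholders):

git clone hg::http://hg.example.org/someproject
git clone bzr::lp:someproject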

23 July, 2014 01:19AM by jmccrohan

July 22, 2014

Tim Retout

Cowbuilder and Tor

You've installed apt-transport-tor to help prevent targeted attacks on your system. Great! Now you want to build Debian packages using cowbuilder, and you notice these are still using plain HTTP.

If you're willing to fetch the first few packages without using apt-transport-tor, this is as easy as:

  • Add 'EXTRAPACKAGES="apt-transport-tor"' to your pbuilderrc.
  • Run 'cowbuilder --update'
  • Set 'MIRRORSITE=tor+http://http.debian.net/debian' in pbuilderrc.
  • Run 'cowbuilder --update' again.

Now any future builds should fetch build-dependencies over Tor.
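
Since pbuilderrc is sourced as shell, the end state of the two settings from the list above is simply:

MIRRORSITE=tor+http://http.debian.net/debian
EXTRAPACKAGES="apt-transport-tor"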

Unfortunately, creating a base.cow from scratch is more problematic. Neither 'debootstrap' nor 'cdebootstrap' actually relies on apt acquire methods to download files - they look at the URL scheme themselves to work out where to fetch from. I think it's a design point that they shouldn't need apt, anyway, so that you can debootstrap on non-Debian systems. I don't have a good solution beyond using some other means to route these requests over Tor.

22 July, 2014 09:31PM

Neil Williams

Validating ARMMP device tree blobs

I’ve done various bits with ARMMP and LAVA on this blog already, usually waiting until I’ve got all the issues ironed out before writing it up. However, this time I’m just going to do a dump of where it’s at, how it works and what can be done.

I’m aware that LAVA can seem mysterious at first, the package description has improved enormously recently, thanks to exposure in Debian: LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

The LAVA documentation has a glossary of terms like result bundle and all the documentation is also available in the lava-server-doc package.

The goal is to validate the dtbs built for the Debian ARMMP kernel. One of the most accessible ways to get the ARMMP kernel onto a board for testing is tftp using the Debian daily DI builds. Actually using the DI initrd can come later, once I’ve got a complete preseed config so that the entire install can be automated. (There are some things to sort out in LAVA too before a full install can be deployed and booted but those are at an early stage.) It’s enough at first to download the vmlinuz which is common to all ARMMP deployments, supply the relevant dtb, partner those with a minimal initrd and see if the board boots.

The first change comes when this process is compared to how boards are commonly tested in LAVA – with a zImage or uImage and all/most of the modules already built in. Packaged kernels won’t necessarily raise a network interface or see the filesystem without modules, so the first step is to extend a minimal initramfs to include the armmp modules.

apt install pax u-boot-tools

The minimal initramfs I selected is one often used within LAVA:

wget http://images.armcloud.us/lava/common/linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

It has a u-boot header added, as most devices using this would be using u-boot and this makes it easier to debug boot failures as the initramfs doesn’t need to have the header added, it can simply be downloaded to a local directory and passed to the board as a tftp location. To modify it, the u-boot header needs to be removed. Rather than assuming the size, the u-boot tools can (indirectly) show the size:

$ ls -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
-rw-r--r-- 1 neil neil  5179571 Nov 26  2013 linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

$ mkimage -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot 
Image Name:   linaro-image-minimal-initramfs-g
Created:      Tue Nov 26 22:30:49 2013
Image Type:   ARM Linux RAMDisk Image (gzip compressed)
Data Size:    5179507 Bytes = 5058.11 kB = 4.94 MB
Load Address: 00000000
Entry Point:  00000000

Referencing http://www.omappedia.com/wiki/Development_With_Ubuntu, the header size is the file size minus the data size listed by mkimage.

5179571 - 5179507 == 64

So, create a second file without the header:

dd if=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot of=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz skip=64 bs=1

decompress it

gunzip linaro-image-minimal-initramfs-genericarmv7a.cpio.gz

Now for the additions

dget http://ftp.uk.debian.org/debian/pool/main/l/linux/linux-image-3.14-1-armmp_3.14.12-1_armhf.deb

(Yes, this process will need to be repeated when this package is rebuilt, so I'll want to script this at some point; there's a first sketch of such a script at the end of this walkthrough.)

dpkg -x linux-image-3.14-1-armmp_3.14.12-1_armhf.deb kernel-dir
cd kernel-dir

Pulling in the modules needed for most situations comes thanks to a script written by the Xen folks. The set is basically disk, net, filesystems and LVM.

find lib -type d -o -type f -name modules.\*  -o -type f -name \*.ko  \( -path \*/kernel/lib/\* -o  -path \*/kernel/crypto/\* -o  -path \*/kernel/fs/mbcache.ko -o  -path \*/kernel/fs/ext\* -o  -path \*/kernel/fs/jbd\* -o  -path \*/kernel/drivers/net/\* -o  -path \*/kernel/drivers/ata/\* -o  -path \*/kernel/drivers/scsi/\* -o -path \*/kernel/drivers/md/\* \) | pax -x sv4cpio -s '%lib%/lib%' -d -w >../cpio
gzip -9f cpio

original Xen script (GPL-3+)

I found it a bit confusing that i is used for extract by cpio, but that’s how it is. Extract the minimal initramfs to a new directory:

sudo cpio -id < ../linaro-image-minimal-initramfs-genericarmv7a.cpio

Extract the new cpio into the same location. (Yes, I could do this the other way around and pipe the output of find into the already extracted location but that's for when I get a script to do this):

sudo cpio --no-absolute-filenames -id < ../ramfs/cpio

CPIO Manual

Use newc format, the new (SVR4) portable format, which supports file systems having more than 65536 i-nodes. (4294967295 bytes)
(41M)

find . | cpio -H newc -o > ../armmp-image.cpio
gzip -9 ../armmp-image.cpio

... and add the u-boot header back:

mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot
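
Since all of the above will need re-running whenever the linux-image package is rebuilt, here is the promised first sketch of a script for it (untested; the filenames and the 64-byte header size are taken from the steps above):

#!/bin/sh
set -e
deb="$1"    # e.g. linux-image-3.14-1-armmp_3.14.12-1_armhf.deb
base=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

# strip the 64 byte u-boot header and decompress the minimal initramfs
dd if="$base" of=base.cpio.gz skip=64 bs=1
gunzip -f base.cpio.gz

# unpack the packaged kernel and archive its modules
# (selection shortened here: this takes all of lib/, whereas the full
# find expression above takes only disk, net, filesystems and LVM)
rm -rf kernel-dir && mkdir kernel-dir
dpkg -x "$deb" kernel-dir
(cd kernel-dir && find lib | pax -x sv4cpio -s '%lib%/lib%' -d -w) > modules.cpio

# merge both archives and repack in newc format
rm -rf rootfs && mkdir rootfs
cd rootfs
sudo cpio -id < ../base.cpio
sudo cpio --no-absolute-filenames -id < ../modules.cpio
sudo sh -c 'find . | cpio -H newc -o' | gzip -9 > ../armmp-image.cpio.gz
cd ..

# put the u-boot header back
mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot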

Now what?

Now send the combination to LAVA and test it.

Results bundle for a local LAVA test job using this technique. (18k)

submission JSON - uses file:// references, so would need modification before being submitted to LAVA elsewhere.

complete log of the test job (72k)

Those familiar with LAVA will spot that I haven't optimised this job, it boots the ARMMP kernel into a minimal initramfs and then expects to find apt and other tools. Actual tests providing useful results would use available tools, add more tools or specify a richer rootfs.

The tests themselves are very quick (the job described above took 3 minutes to run) and don't need to be run particularly often, just once per board type per upload of the ARMMP kernel. LAVA can easily run those jobs in parallel and submission can be automated using authentication tokens and the lava-tool CLI. lava-tool can be installed without lava-server, so can be used in hooks for automated submissions.

Extensions

That's just one DTB and one board. I have a range of boards available locally:

* iMX6Q Wandboard (used for this test)
* iMX.53 Quick Start Board (needs updated u-boot)
* Beaglebone Black
* Cubie2
* CubieTruck
* arndale (no dtb?)
* pandaboard

Other devices available could involve ARMv7 devices hosted at www.armv7.com and validation.linaro.org - as part of a thank you to the Debian community for providing the OS which is (now) running all of the LAVA infrastructure.

That doesn't cover all of the current DTBs (and includes many devices which have no DTBs) so there is plenty of scope for others to get involved.

Hopefully, the above will help get people started with a suitable kernel+dtb+initrd and I'd encourage anyone interested to install lava-server and have a go at running test jobs based on those so that we start to build data about as many of the variants as possible.

(If anyone in DI fancies producing a suitable initrd with modules alongside the DI initrd for armhf builds, or if anyone comes up with a patch for DI to do that, it would help enormously.)

This will at least help Debian answer the question of what the Debian ARMMP package can actually support.

For help on LAVA, do read through the documentation and then come to us at #linaro-lava or the linaro-validation mailing list or file bugs in Debian: reportbug lava-server.

I'm giving one talk on the LAVA software and there will be a BoF on validation and CI in Debian, so you can also ask me in person there.

22 July, 2014 09:18PM by Neil Williams

Russell Coker

Public Lectures About FOSS

Eventbrite

I’ve recently started using the Eventbrite Web site [1] and the associated Eventbrite Android app [2] to discover public events in my area. Both the web site and the Android app lack features for searching (I’d like to save alerts for my accounts and have my phone notify me when new events are added to their database) but it is basically functional. The main issue is content: Eventbrite has a lot of good events in their database (I’ve got tickets for 6 free events in the next month). I assume that Eventbrite also has many people attending their events, otherwise the events wouldn’t be promoted there.

At this time I haven’t compared Eventbrite to any similar services, Eventbrite events have taken up much of my available time for the next 6 weeks (I appreciate the button on the app to add an entry to my calendar) so I don’t have much incentive to find other web sites that list events. I would appreciate comments from users of competing event registration systems and may write a post in future comparing different systems. Also I have only checked for events in Melbourne, Australia as I don’t have any personal interest in events in other places. For the topic of this post Eventbrite is good enough, it meets all requirements for Melbourne and I’m sure that if it isn’t useful in other cities then there are competing services.

I think that we need to have free FOSS events announced through Eventbrite. We regularly have experts in various fields related to FOSS visiting Melbourne who give a talk for the Linux Users of Victoria (and sometimes other technical groups). This is a good thing but I think we could do better. Most people in Melbourne probably won’t attend a LUG meeting and if they did they probably wouldn’t find it a welcoming experience.

Also I recommend that anyone who is looking for educational things to do in Melbourne visit the Eventbrite web site and/or install the Android app.

Accessible Events

I recently attended an Eventbrite event where a professor described the work of his research team; it was a really good talk that made the topic of his research accessible to random members of the public like me. Then when it came to question time, the questions were mostly opinion pieces disguised as questions, which used a lot of industry specific jargon and probably lost the interest of most people in the audience who weren’t from the university department that hosted the lecture. I spent the last 15 minutes in that lecture hall reading Wikipedia and resisted the temptation to load an Android game.

Based on this lecture (and many other lectures I’ve seen) I get the impression that when the speaker or the MC addresses a member of the audience by name (EG “John Smith has a question”) then it’s strongly correlated with a low quality question. See my previous post about the Length of Conference Questions for more on this topic [3].

It seems to me that when running a lecture everyone involved has to agree about whether it’s a public lecture (IE one that is for any random people) as opposed to a society meeting (which, while free for anyone to attend in the case of a LUG, is for people with specific background knowledge). For a society meeting (for want of a better term) it’s OK to assume a minimum level of knowledge that rules out some people. If 5% of the audience of a LUG don’t understand a lecture that doesn’t necessarily mean it’s a bad lecture; sometimes it’s not possible to give a lecture that is easily understood by those with the least knowledge that also teaches the most experienced members of the audience.

For a public lecture the speaker has to give a talk for people with little background knowledge. Then the speaker and/or the MC have to discourage or reject questions that are for a higher level of knowledge.

As an example of how this might work consider the case of an introductory lecture about how an OS kernel works. When one of the experienced Linux kernel programmers visits Melbourne we could have an Eventbrite event organised for a lecture introducing the basic concepts of an OS kernel (with Linux as an example). At such a lecture any questions about more technical topics (such as specific issues related to compilers, drivers, etc) could be met with “we are having a meeting for more technical people at the Linux Users of Victoria meeting tomorrow night” or “we are having coffee at a nearby cafe afterwards and you can ask technical questions there”.

Planning Eventbrite Events

When experts in various areas of FOSS visit Melbourne they often offer a talk for LUV. For any such experts who read this post please note that most lectures at LUV meetings are by locals who can reschedule, so if you are only in town for a short time we can give you an opportunity to speak at short notice.

I would like to arrange to have some of those people give a talk aimed at a less experienced audience which we can promote through Eventbrite. The venue for LUV talks (Melbourne University 7PM on the first Tuesday of the month) might not work for all speakers so we need to find a sponsor for another venue.

I will contact Linux companies that are active in Melbourne and ask whether they would be prepared to sponsor the venue for such a talk. The fallback option would be to have such a lecture at a LUV meeting.

I will talk to some of the organisers of science and technology events advertised on Eventbrite and ask why they chose the times that they did. Maybe they have some insight into which times are best for getting an audience. Also I will probably get some idea of the best times by just attending many events and observing the attendance. I think that the aim of an Eventbrite event is to attract delegates who wouldn’t attend other meetings, so it is a priority to choose a suitable time and place.

Finally please note that while I am a member of the LUV committee I’m not representing LUV in this post. My aim is that community feedback on this post will help me plan such events. I will discuss this with the LUV committee after I get some comments here.

Please comment if you would like to give such a public lecture, attend such a lecture, or if you just have any general ideas.

22 July, 2014 08:22AM by etbe

Martin Pitt

autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release brings it to deb packages as well.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. have been documented in the manpages and the various READMEs. Enjoy!

22 July, 2014 06:16AM by pitti

MJ Ray

Three systems

There are three basic systems:

The first is slick and easy to use, but fiddly to set up correctly and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible and requiring complete reinitialisation.

The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.

The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.

So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?

22 July, 2014 03:59AM by mjr

Andrew Pollock

[debian] Day 174: Kindergarten, startup stuff, tennis

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

22 July, 2014 01:23AM

Hideki Yamane

GeoIP support for installer is really nice


RHEL7 installation note says "The new graphical installer also generates automatic default settings where applicable. For example, if the installer detects a network connection, the user's general location is determined with GeoIP and sane suggestions are made for the default keyboard layout, language and timezone." but CentOS7 doesn't work as expected ;-)

GeoIP support in the Fedora 20 installer works well and it's pretty nice. Boot from live media and it shows the "Try Fedora" and "Install to Hard Drive" menu.

Then, select "Install" and... Boom! It comes up in Japanese without any configuration, automagically!

I want the same feature for d-i, too.

22 July, 2014 12:10AM by Hideki Yamane (noreply@blogger.com)

July 21, 2014

Ian Campbell

sunxi-tools now available in Debian

I've recently packaged the sunxi tools for Debian. These are a set of tools produce by the Linux Sunxi project for working with the Allwinner "sunxi" family of processors. See the package page for details. Thanks to Steve McIntyre for sponsoring the initial upload.

The most interesting components of the package are the tools for working with the Allwinner processors' FEL mode. This is a low-level processor mode which implements a simple USB protocol allowing for initial programming of the device and recovery; it can be entered on boot (usually by pressing a special 'FEL button' somewhere on the device). It is thanks to FEL mode that most sunxi based devices are pretty much unbrickable.

The most common use of FEL is to boot over USB. In the Debian package the fel and usb-boot tools are named sunxi-fel and sunxi-usb-boot respectively but otherwise can be used in the normal way described on the sunxi wiki pages.

One enhancement I made to the Debian version of usb-boot is to integrate with the u-boot packages to allow you to easily FEL boot any sunxi platform supported by the Debian packaged version of u-boot (currently only Cubietruck, more to come I hope). To make this work we take advantage of Multiarch to install the armhf version of u-boot (unless your host is already armhf of course, in which case just install the u-boot package):

# dpkg --add-architecture armhf
# apt-get update
# apt-get install u-boot:armhf
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  u-boot:armhf
0 upgraded, 1 newly installed, 0 to remove and 1960 not upgraded.
Need to get 0 B/546 kB of archives.
After this operation, 8,676 kB of additional disk space will be used.
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Selecting previously unselected package u-boot:armhf.
(Reading database ... 309234 files and directories currently installed.)
Preparing to unpack .../u-boot_2014.04+dfsg1-1_armhf.deb ...
Unpacking u-boot:armhf (2014.04+dfsg1-1) ...
Setting up u-boot:armhf (2014.04+dfsg1-1) ...

With that done FEL booting a cubietruck is as simple as starting the board in FEL mode (by holding down the FEL button when powering on) and then:

# sunxi-usb-boot Cubietruck -
fel write 0x2000 /usr/lib/u-boot/Cubietruck_FEL/u-boot-spl.bin
fel exe 0x2000
fel write 0x4a000000 /usr/lib/u-boot/Cubietruck_FEL/u-boot.bin
fel write 0x41000000 /usr/share/sunxi-tools//ramboot.scr
fel exe 0x4a000000

Which should result in something like this on the Cubietruck's serial console:

U-Boot SPL 2014.04 (Jun 16 2014 - 05:31:24)
DRAM: 2048 MiB


U-Boot 2014.04 (Jun 16 2014 - 05:30:47) Allwinner Technology

CPU:   Allwinner A20 (SUN7I)
DRAM:  2 GiB
MMC:   SUNXI SD/MMC: 0
In:    serial
Out:   serial
Err:   serial
SCSI:  SUNXI SCSI INIT
Target spinup took 0 ms.
AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl SATA mode
flags: ncq stag pm led clo only pmp pio slum part ccc apst 
Net:   dwmac.1c50000
Hit any key to stop autoboot:  0 
sun7i# 

As more platforms become supported by the u-boot packages you should be able to find them in /usr/lib/u-boot/*_FEL.

There is one minor inconvenience which is the need to run sunxi-usb-boot as root in order to access the FEL USB device. This is easily resolved by creating /etc/udev/rules.d/sunxi-fel.rules containing either:

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", OWNER="myuser"

or

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", GROUP="mygroup"

To enable access for myuser or mygroup respectively. Once you have created the rules file then to enable:

# udevadm control --reload-rules

As well as the FEL mode tools the packages also contain a FEX (de)compiler. FEX is Allwinner's own hardware description language and is used with their Android SDK kernels and the fork of that kernel maintained by the linux-sunxi project. Debian's kernels follow mainline and therefore use Device Tree.
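
Upstream the (de)compiler ships as fex2bin and bin2fex; round-tripping a board's script.bin looks like this (a sketch, assuming the Debian package keeps the upstream tool names):

bin2fex script.bin script.fex
fex2bin script.fex script.bin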

21 July, 2014 06:10PM

Daniel Pocock

Australia can't criticize Putin while competing with him

While much of the world is watching the tragedy of MH17 and contemplating the grim fate of 298 deceased passengers sealed into a refrigerated freight train in the middle of a war zone, Australia (with 28 victims on that train) has more than just theoretical skeletons in the closet too.

At this moment, some 153 Tamil refugees, fleeing the same type of instability that brought a horrible death to the passengers of MH17, have been locked up in the hull of a customs ship on the high seas. Windowless cabins and a supply of food not fit for a dog are part of the Government's strategy to brutalize these people for simply trying to avoid the risk of enhanced imprisonment(TM) in their own country.

Under international protocol for rescue at sea and political asylum, these people should be taken to the nearest port and given a humanitarian visa on arrival. Australia, however, is trying to lie and cheat their way out of these international obligations while squealing like a stuck pig about the plight of Australians in the hands of Putin. If Prime Minister Tony Abbott wants to encourage Putin to co-operate with the international community, shouldn't he try to lead by example? How can Australians be safe abroad if our country systematically abuses foreigners in their time of need?

21 July, 2014 05:00PM by Daniel.Pocock

Steve Kemp

An alternative to devilspie/devilspie2

Recently I was updating my dotfiles, because I wanted to ensure that media-players were "always on top", when launched, as this suits the way I work.

For many years I've used devilspie to script the placement of new windows, and once I googled a recipe I managed to achieve my aim.

However during the course of my googling I discovered that devilspie is unmaintained, and has been replaced by something using Lua - something I like.

I'm surprised I hadn't realized that the project was dead; although I've always hated the configuration syntax, it is something that I've used on a constant basis since I found it.

Unfortunately the replacement, despite using Lua, and despite being functional, just didn't seem to gel with me. So I figured "How hard could it be?".

In the past I've written software which iterated over all (visible) windows, and obviously I'm no stranger to writing Lua bindings.

However I did run into a snag. My initial implementation did two things:

  • Find all windows.
  • For each window invoke a lua script-file.

This worked. This worked well. This worked too well.

The problem I ran into was that if I wrote something like "Move window 'emacs' to desktop 2" that action would be applied over and over again. So if I launched emacs, and then manually moved the window to desktop 3, it would jump back!

In short I needed to add a "stop()" function, which would cause further actions against a given window to cease. (By keeping a linked list of windows-to-ignore, and avoiding processing them.)
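To make that concrete, a rule using stop() might look like this (a sketch using only primitives named in this post; the class name is just an example):

-- move new Emacs windows to workspace 2, then stop processing this
-- window, so a later manual move isn't undone on the next pass
if ( window_class() == "Emacs" ) then
    workspace(2)
    stop()
end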

The code did work, but it felt wrong to have an ever-growing linked-list of processed windows. So I figured I'd look at the alternative - the original devilspie used libwnck to operate. That library allows you to nominate a callback to be executed every time a new window is created.

If you apply your magic only on a window-create event - well you don't need to bother caching prior-windows.

So in conclusion:

I think my code is better than devilspie2 because it is smaller, simpler, and does things more neatly - for example, instead of a function to get geometry and another to set it, I use one: "xy()" returns the position of a window, while xy(3,3) sets it.

kpie also allows you to run as a one-off job, and using the simple primitives I wrote a file to dump your windows, and their size/placement, which looks like this:

shelob ~/git/kpie $ ./kpie --single ./samples/dump.lua
-- Screen width : 1920
-- Screen height: 1080
..
if ( ( window_title() == "Buddy List" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1536,24 )
     size(384,1032 )
     workspace(2)
end
if ( ( window_title() == "feeds" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1,24 )
     size(1536,1032 )
     workspace(2)
end
..

As you can see, that has dumped all my windows, along with their current state. This allows a simple starting-point: configure your windows the way you want them, then dump them to a script file. Re-run that script file and your windows will be set back the way they were! (Obviously there might be tweaks required.)

I used that starting-point to define a simple recipe for configuring pidgin, which is more flexible than anything I had before, and suits my tastes.

Bug-reports welcome.

21 July, 2014 02:30PM

Tim Retout

apt-transport-tor 0.2.1

apt-transport-tor 0.2.1 should now be on your preferred unstable Debian mirror. It will let you download Debian packages through Tor.

New in this release: support for HTTPS over Tor, to keep up with people.debian.org. :)

I haven't mentioned it before on this blog. To get it working, you need to "apt-get install apt-transport-tor", and then use sources.list lines like so:

deb tor+http://http.debian.net/debian unstable main

Note the use of http.debian.net in order to pick a mirror near whichever Tor exit node the connection emerges from. Throughput is surprisingly good.
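For an HTTPS host such as people.debian.org the line follows the same pattern (a sketch; the repository path here is hypothetical, and tor+https is assumed to mirror the tor+http scheme):

deb tor+https://people.debian.org/~someuser/debian unstable main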

On the TODO list: reproducible builds? It would be nice to have some mirrors offer Tor hidden services, although I have yet to think about the logistics of this, such as how the load could be balanced (maybe a service like http.debian.net). I also need to look at how cowbuilder etc. can be made to play nicely with Tor. And then Debian installer support!

21 July, 2014 12:17PM

hackergotchi for Francois Marier

Francois Marier

Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including:

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver

In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the packages that ship pactl, xbacklight and gnome-power-statistics (in Debian: pulseaudio-utils, xbacklight and gnome-power-manager).

Keyboard layout switcher

Another thing that used to work in GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
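The script itself isn't shown in the post; a minimal sketch of such a toggle, assuming setxkbmap and two hard-coded layouts (us and fr here are just examples), could be:

#!/bin/sh
# toggle between two keyboard layouts (the layout names are assumptions)
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap fr
else
    setxkbmap us
fi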

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:

HandleLidSwitch=lock

Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
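The script isn't reproduced in the post; a minimal sketch of those steps might look like this (the exact xsel invocations are assumptions, and pm-suspend matches the sudoers line below):

#!/bin/sh
# clear the primary selection and the clipboard
xsel -c
xsel -bc
# flush pending writes to disk
sync
# lock the screen, then suspend to RAM
gnome-screensaver-command --lock
sudo pm-suspend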

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

21 July, 2014 11:03AM

hackergotchi for DebConf team

DebConf team

Talks review and selection process. (Posted by René Mayorga)

Today we finished the talk selection process. We are very grateful to everyone who decided to submit talks and events for DebConf14.

If you have submitted an event, please check your email :). If you have not received any confirmation regarding your talk status, please contact us on talks@debconf.org

During the selection process, we bore in mind the number of talk slots during the conference, as well as the need to maintain a balance among the different submitted topics. We are pleased to announce that we received a total of 115 events, of which 80 have been approved (69%). Approval means your event will be scheduled during the conference and you will have video coverage.

The list of approved talks can be found on the following link: https://summit.debconf.org/debconf14/all/

If you got an email telling you that your talk has been approved, but your talk is not listed, don’t panic. Check the status in summit, and make sure to select a track; if you have track suggestions, please mail us and tell us about them.

This year, we expect to also have a sort of “unconference” schedule. This will take place during the designated “hacking time”. During that time the talk rooms will be empty, and ad hoc meetings can be scheduled on-site while we are at the conference. The method for booking a room for your ad hoc meeting will be decided and announced later, but is expected to be flexible (i.e. an open scheduling board, with booking one day or less in advance). Please don’t abuse the system: bear in mind that space will be limited, and only book your event if you gather enough people to work on your idea.

Please make sure to read the email regarding your talk. :) and prepare yourself.

Time is ticking and we will be happy to meet you in Portland.

21 July, 2014 10:30AM by DebConf Organizers

hackergotchi for Keith Packard

Keith Packard

Glamorous Intel

Reworking Intel Glamor

The original Intel driver Glamor support was based on the notion that it would be better to have the Intel driver capture any fall backs and try to make them faster than Glamor could do internally. Now that Glamor has reasonably complete acceleration, and its fall backs aren’t terrible, this isn’t as useful as it once was, and because this uses Glamor in a weird way, we’re making the Glamor code harder to maintain.

Fixing the Intel driver to not use Glamor in this way took a bit of effort; the UXA support is all tied into the overall operation of the driver.

Separating out UXA functions

The first task was to just identify which functions were UXA-specific by adding “_uxa” to their names. A couple dozen sed runs and now a bunch of the driver is looking better.

Next, a pile of UXA-specific functions were actually inside the non-UXA parts of the code. Those got moved out, and a new “intel_uxa.h” file was created to hold all of the definitions.

Finally, a few non UXA-specific functions were actually in the uxa files; those got moved over to the generic code.

Removing the Glamor paths in UXA

Each one of the UXA functions had a little piece of code at the top like:

if (uxa_screen->info->flags & UXA_USE_GLAMOR) {
    int ok = 0;

    if (uxa_prepare_access(pDrawable, UXA_GLAMOR_ACCESS_RW)) {
        ok = glamor_fill_spans_nf(pDrawable,
                      pGC, n, ppt, pwidth, fSorted);
        uxa_finish_access(pDrawable, UXA_GLAMOR_ACCESS_RW);
    }

    if (!ok)
        goto fallback;

    return;
}

Pulling those out shrank the UXA code by quite a bit.

Selecting Acceleration (or not)

The intel driver only supported UXA before; Glamor was really just a slightly different mode for UXA. I switched the driver from using a bit in the UXA flags to having an ‘accel’ variable which could be one of three options:

  • ACCEL_GLAMOR
  • ACCEL_UXA
  • ACCEL_NONE

I added ACCEL_NONE to give us a dumb frame buffer mode. That actually supports DRI3 so that we can bring up Mesa and run it under X before we have any acceleration code ready; avoiding a dependency loop when doing new hardware. All that it requires is a kernel that offers mode setting and buffer allocation.
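Under the hood that selector is presumably just a small enum; a sketch (the type name here is my own, not necessarily what the driver uses):

/* sketch: acceleration selector; the real name in the driver may differ */
enum intel_accel {
        ACCEL_GLAMOR,
        ACCEL_UXA,
        ACCEL_NONE,
};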

Initializing Glamor

With UXA no longer supporting Glamor, it was time to plug the Glamor support into the top of the driver. That meant changing a bunch of the entry points to select appropriate Glamor or UXA functionality, instead of just calling into UXA. So, now we’ve got lots of places that look like:

        switch (intel->accel) {
#if USE_GLAMOR
        case ACCEL_GLAMOR:
                if (!intel_glamor_create_screen_resources(screen))
                        return FALSE;
                break;
#endif
#if USE_UXA
        case ACCEL_UXA:
                if (!intel_uxa_create_screen_resources(screen))
                        return FALSE;
        break;
#endif
        case ACCEL_NONE:
                if (!intel_none_create_screen_resources(screen))
                        return FALSE;
                break;
        }

Using a switch means that we can easily elide code that isn’t wanted in a particular build. Of course ‘accel’ is an enum, so places which are missing one of the required paths will cause a compiler warning.

It’s not all perfectly clean yet; there are piles of UXA-only paths still.

Making It Build Without UXA

The final trick was to make the driver build without UXA turned on; that took several iterations before I had the symbols sorted out appropriately.

I built the driver with various acceleration options and then tried to count the lines of source code. What I did was just list the source files named in the driver binary itself. This skips all of the header files and the render program source code, and ignores the fact that there are a bunch of #ifdef’s in the uxa directory selecting between uxa, glamor and none.

    Accel                    Lines          Size(B)
    -----------             ------          -------
    none                      7143            73039
    glamor                    7397            76540
    uxa                      25979           283777
    sna                     118832          1303904

    none legacy              14449           152480
    glamor legacy            14703           156125
    uxa legacy               33285           350685
    sna legacy              126138          1395231

The ‘legacy’ addition supports i810-class hardware, which is needed for a complete driver.

Along The Way, Enable Tiling for the Front Buffer

While hacking the code, I discovered that the initial frame buffer allocated for the screen was created without tiling (!) because a few parameters that depend on the GTT size were not initialized until after that frame buffer was allocated. I haven’t analyzed what effect this has on performance.

Page Flipping and Resize

Page flipping (or just flipping) means switching the entire display from one frame buffer to another. It’s generally the fastest way of updating the screen as you don’t have to copy any bits.

The trick with flipping is that a client hands you a random pixmap and you need to stuff that into the KMS API. With UXA, that’s pretty easy as all pixmaps are managed through the UXA API which knows which underlying kernel BO is tied with each pixmap. Using Glamor, only the underlying GL driver knows the mapping. Fortunately (?), we have the EGL Image extension, which lets us take a random GL texture and turn it into a file descriptor for a DMA-BUF kernel object. So, we have this cute little dance:

fd = glamor_fd_from_pixmap(screen, pixmap, &stride, &size);

bo = drm_intel_bo_gem_create_from_prime(intel->bufmgr, fd, size);
close(fd);
intel_glamor_get_pixmap(pixmap)->bo = bo;

That last bit remembers the bo in some local memory so we don’t have to do this more than once for each pixmap. glamor_fd_from_pixmap ends up calling eglCreateImageKHR followed by gbm_bo_import and then a kernel ioctl to convert a prime handle into an fd. It’s all quite round-about, but it does seem to work just fine.

After I’d gotten Glamor mostly working, I tried a few OpenGL applications and discovered flipping wasn’t working. That turned out to have an unexpected consequence — all full-screen applications would run flat-out, and not be limited to frame rate. Present ‘recovers’ from a failed flip queue operation by immediately performing a CopyArea, not waiting for vblank. This needs to get fixed in Present by having it re-queue the CopyArea for the right time. What I did in the intel driver was to add a bunch more checks for tiling mode, pixmap stride and other things to catch pixmaps that were going to fail before the operation was queued, forcing them to fall back to CopyArea at the right time.

The second adventure was with XRandR. Glamor has an API to fix up the screen pixmap for a new frame buffer, but that pulls the size of the frame buffer out of the pixmap instead of out of the screen. XRandR leaves the pixmap size set to the old screen size during this call; fixing that just meant getting the pixmap size set correctly before calling into glamor. I think glamor should get fixed to use the screen size rather than the pixmap size.

Painting Root before Mode set

The X server has generally done initialization in one order:

  1. Create root pixmap
  2. Set video modes
  3. Paint root window

Recently, we’ve added a ‘-background none’ option to the X server which causes it to set the root window background to none and have the driver fill in that pixmap with whatever contents were on the screen before the X server started.

In a pre-Glamor world, that was done by hacking the video driver to copy the frame buffer console contents to the root pixmap as it was created. The trouble here is that the root pixmap is created long before the upper layers of the X server are ready for drawing, so you can’t use the core rendering paths. Instead, UXA had kludges to call directly into the acceleration functions.

What we really want though is to change the order of operations:

  1. Create root pixmap
  2. Paint root window
  3. Set video mode

That way, the normal root window painting operation will take care of getting the image ready before that pixmap is ever used for scanout. I can use regular core X rendering to get the original frame buffer contents into the root window, and even if we’re not using -background none and are instead painting the root with some other pattern (like the root weave), I get that presented without an intervening black flash.

That turned out to be really easy — just delay the call to I830EnterVT (which sets the modes) until the server is actually running. That required one additional kludge — I needed to tell the DIX level RandR functions about the new modes; the mode setting operation used during server init doesn’t call up into RandR as RandR lists the current configuration after the screen has been initialized, which is when the modes used to be set.

Calling xf86RandR12CreateScreenResources does the trick nicely. Getting the root window bits from fbcon, setting video modes and updating the RandR/Xinerama DIX info is now all done from the BlockHandler the first time it is called.

Performance

I ran the current glamor version of the intel driver with the master branch of the X server and there were not any huge differences since my last Glamor performance evaluation aside from GetImage. The reason is that UXA/Glamor never called Glamor’s image functions, and the UXA GetImage is pretty slow. Using Mesa’s image download turns out to have a huge performance benefit:

1. UXA/Glamor from April
2. Glamor from today

       1                 2                 Operation
------------   -------------------------   -------------------------
     50700.0        56300.0 (     1.110)   ShmGetImage 10x10 square 
     12600.0        26200.0 (     2.079)   ShmGetImage 100x100 square 
      1840.0         4250.0 (     2.310)   ShmGetImage 500x500 square 
      3290.0          202.0 (     0.061)   ShmGetImage XY 10x10 square 
        36.5          170.0 (     4.658)   ShmGetImage XY 100x100 square 
         1.5           56.4 (    37.600)   ShmGetImage XY 500x500 square 
     49800.0        50200.0 (     1.008)   GetImage 10x10 square 
      5690.0        19300.0 (     3.392)   GetImage 100x100 square 
       609.0         1360.0 (     2.233)   GetImage 500x500 square 
      3100.0          206.0 (     0.066)   GetImage XY 10x10 square 
        36.4          183.0 (     5.027)   GetImage XY 100x100 square 
         1.5           55.4 (    36.933)   GetImage XY 500x500 square

Running today’s UXA, the situation is even more dire; I suspect that enabling tiling has made CPU reads through the GTT even worse than before?

1: UXA today
2: Glamor today

       1                 2                 Operation
------------   -------------------------   -------------------------
     43200.0        56300.0 (     1.303)   ShmGetImage 10x10 square 
      2600.0        26200.0 (    10.077)   ShmGetImage 100x100 square 
       130.0         4250.0 (    32.692)   ShmGetImage 500x500 square 
      3260.0          202.0 (     0.062)   ShmGetImage XY 10x10 square 
        36.7          170.0 (     4.632)   ShmGetImage XY 100x100 square 
         1.5           56.4 (    37.600)   ShmGetImage XY 500x500 square 
     41700.0        50200.0 (     1.204)   GetImage 10x10 square 
      2520.0        19300.0 (     7.659)   GetImage 100x100 square 
       125.0         1360.0 (    10.880)   GetImage 500x500 square 
      3150.0          206.0 (     0.065)   GetImage XY 10x10 square 
        36.1          183.0 (     5.069)   GetImage XY 100x100 square 
         1.5           55.4 (    36.933)   GetImage XY 500x500 square

Of course, this is all just x11perf, which doesn’t represent real applications at all well. However, there are applications which end up doing more GetImage than would seem reasonable, and it’s nice to have this kind of speed up.

Status

I’m running this on my crash box to get some performance numbers and continue testing it. I’ll switch my desktop over when I feel a bit more comfortable with how it’s working. But, I think it’s feature complete at this point.

Where’s the Code

As usual, the code is in my personal repository. It’s on the ‘glamor’ branch.

git://people.freedesktop.org/~keithp/xf86-video-intel  glamor

21 July, 2014 07:39AM

hackergotchi for Andrew Pollock

Andrew Pollock

[debian] Day 173: Investigation for bug #749410 and fixing my VMs

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away, and I use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machine's disks and work with a new snapshot next time I want to do something.

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but manually activating the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.
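For reference, the manual activation boils down to something like this from the initramfs emergency shell (a sketch; Debian's initramfs ships a busybox lvm wrapper):

# activate all volume groups the initramfs can see, then continue booting
lvm vgchange -ay
exit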

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

21 July, 2014 01:23AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Trying android wear SDK using my LG G watch.

Trying the Android Wear SDK using my LG G Watch. I didn't have permission to access the USB device, and had to update the udev rules. It wasn't clear what the right way was, and the existing Android device rules looked like audio or camera entries, not really consistent.
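For anyone hitting the same wall, a typical Android udev rule placed in /etc/udev/rules.d/51-android.rules looks like this (a sketch, not necessarily the rule used here; 18d1 is Google's generic Android vendor ID and may not match every watch, so check lsusb first):

SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0664", GROUP="plugdev"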

21 July, 2014 01:03AM by Junichi Uekawa

July 20, 2014

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Plymouth Bootsplashes

Why oh why are they so hard to write?

Even using the built in modules it is insanely hard to debug. Playing a bootsplash in X sucks and my machine boots too fast to test it on reboot.

Basically, euch. All I wanted was a hackers zebra on boot :(
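For what it's worth, the least painful test loop seems to be running the daemon by hand instead of rebooting (a sketch; treat the flags as assumptions from memory):

# start plymouthd in debug mode, show the splash briefly, then quit
sudo plymouthd --debug
sudo plymouth show-splash
sleep 5
sudo plymouth quit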

20 July, 2014 09:02PM

Laura Arjona

Upgrading my laptop to Debian Jessie

Some days ago I decided to upgrade my laptop from stable to testing.

I had tried Jessie for several months on my husband’s laptop, but that was a fresh install on a not-so-old laptop, and we don’t have much software installed there.

On my netbook (Compaq Mini 110c) running stable, I had already installed Pumpa, Dianara and how-can-i-help from testing, and since the freeze is coming, I thought I could full-upgrade and use Jessie from now on, reporting my issues and helping to diagnose or fix them, if possible, before the freeze.

I keep Debian stable at work for my desktop and servers (well, some of them are still on oldstable, thanks LTS team!!), and I have testing on a laptop that I use as a clonezilla/drbl server (but I had issues; next week I’ll put some time into them, write up my findings here, and report bugs, if any).

So! let’s go. Here I write my experience and the issues that I found (very few! and not sure if they are bugs or configuration problems or what, I’ll keep an eye on them).

The upgrade

I pointed my /etc/apt/sources.list to jessie, then apt-get update, then apt-get dist-upgrade. (With the servers I am much more careful: I read the release notes, upgrade guides and so on, or go directly for a fresh install; but with my laptop, I am too lazy.)
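Concretely, the whole upgrade amounts to something like this as root (a sketch, assuming a stock wheezy sources.list):

sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade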

I went to bed (wow, risky LArjona!) and when I got up to go to work, the laptop was waiting for me to accept blocking root from ssh access, restarting some services, and so on. Ok! The upgrade resumed… but I had to go to work and I wanted my laptop! Since all the packages were already downloaded, I closed the lid (double risky LArjona!), unplugged it, put everything in my bag, and caught the bus in time :)

On the bus, I opened the lid of my laptop again (crossing fingers!) and, perfect: the laptop had suspended and come back to life, and the upgrade just resumed with no problem. Wow! I love you Debian! After 15 minutes, I had to suspend again, since the bus arrived and I had to take the metro. In the metro, the upgrade resumed, and finished. I shut down my laptop and arrived at work.

Testing testing :)

During a lunch break, I opened my brand new laptop (the hardware is the same, but the software is totally renewed, so it’s brand new for me). I have to say that I use xfce, with some GNOME/GTK apps installed (gedit, cheese, evince, XChat…) and some others that use Qt or are part of the KDE project (Okular, Kile, QtLinguist, Pumpa, Dianara). I don’t know/care too much about desktops and tweaking my desktop: I just put the terminal and gedit on a black background (the Debian wallpaper is dark enough for me, so ok), made the font size a bit smaller to better use my low vertical resolution, and that’s all; I only configure something else if there’s something that really annoys me.

My laptop booted correctly and a nice, more modern LightDM was greeting me. I logged in and everything worked ok, except some issues that follow.

Network Manager and WPA2-enterprise wireless connections

I had to reconfigure some wireless connections in Network Manager. At the University we use WPA2-enterprise, TTLS + PAP. I had stored my username and password in the connection, and network manager seemed to remember the username but not the password. No problem, I said, and I wrote it when it asked, but… the “Save” or “OK” button was greyed out. I could not click it.

Then I went to edit the connections, and it was more or less the same: it seemed I could edit, but not save, the (new) configuration. Finally, I removed the wireless connection, created it again, and everything worked like a charm.

I had to do this for both wireless networks at my University (both of them are WPA2-enterprise TTLS + PAP). At home, I have WPA2 Personal, and I had no issues; everything worked ok.

This problem does not appear in a fresh install, since there are no old configs to keep.

Adblock Plus not working any more

I opened Iceweasel and I began to see ads in the webpages that I visited. What? I checked and Adblock plus was installed and activated… I reinstalled the package xul-ext-adblock-plus and it worked again.

Strange display in programs based on Qt

When I opened Pumpa I noticed that the edges of the windows were too rough, as if it was not using a desktop theme. I asked a friend who uses Plasma and he suggested installing qt4-qtconfig and then selecting a theme for my Qt apps. It worked like a charm, but I find it strange that I didn’t need it before in stable. Maybe the default xfce configuration from stable sets a theme, and the new one does not, so the Qt apps are left “barefoot”.
With qtconfig I chose a GTK+ style GUI for my Qt apps, and then they looked similar to what I had in stable (frankly, I cannot say if it was “similar” or “exactly the same”, but I didn’t find them strange as before, so I’m fine).

Strange display in programs from GNOME

Well, this is not a Jessie problem; it’s just that some programs adopted the new GNOME appearance, and since I’m on xfce, not GNOME, they look a bit strange (no menu integration, and so on). I am not sure that I can run GNOME (fallback? classic?) on my 1 GB RAM laptop; I have to investigate whether I can tweak it to use less memory.

I’m not very tied to xfce, and in fact it does not look so light (well, on top of it, I don’t run light programs, I run Iceweasel, Icedove, Libreoffice, and some others). At work I use GNOME in my desktop, but with GNOME shell, not the fallback or classic modes, so I’m thinking about giving a chance to MATE or second chance to LXDE. We’ll see.

Issues when opening the lid (waking up from suspend)

This is the most strange thing I found in the migration, and the most dangerous one, I think.

As I said before, I don’t tweak too much my desktop, if it works with the default configuration. I’m not sure that I know the differences between suspend, hibernate, hard disks disconnections and so. When I was in stable, and I closed the lid of my laptop, it just shutdown the screen, then I heard something like the system going to suspend or whatever, and after some seconds, the harddisk and fans stop, the wireless led turns off, and the power led begins to blink. Ok. When I open the lid, then it was waking up itself (the power led stayed on, the wireless led turns on, and when I tap the touchpad or type anything, the screen was coming, with the xscreensaver asking for my password). Just sometimes, when the screen was turning on, I could see my desktop for less than a second, before xscreensaver turns the background black and asks for the password.

Now, since I migrated to Jessie, I’m experiencing a different behavior. When I close the lid, the laptop behaves the same. When I open the lid, the laptop behaves the same, but when I type or tap the touchpad and xscreensaver comes to ask for the password, before I can type it, the laptop just suspends again (or hibernates, I’m not sure), and I have to press the power button to bring it back to life (then I see xscreensaver asking for the password again, I type it, and my desktop is there, the same as I left it when I closed the lid).

Strange, isn’t it?

I have tried to suspend my laptop directly from the menu, and it ends up in the same state, in which I have to press the power button to bring it back to life; but then no xscreensaver password is required (which is doubly strange, IMHO).

Things I miss in Jessie

Well, until now, the only thing I miss in Jessie is the software center. I rarely use it (I love apt) but I think it does a good job of easing the installation of programs in Debian for people coming from other operating systems (especially after smartphones and their copied software stores became popular).

I hope the maintainer can upload a new version before the freeze, so that it enters the release. I’ll try to contact him.

Update 2014/07/20: Julian Andres Klode, maintainer of software-center, just replied (see his comment below) and pointed to GNOME Software (gnome-packagekit) as an alternative. I just installed it and it looks neat and nice. I’m very happy!

TODO

I have a Debian stable laptop at work (that one with xfce + GNOME); I’ll try to upgrade it and see if it shows the same problems I noticed on mine. Then I’ll check the corresponding packages to see if there are open bugs about them, and if not, report them to their maintainers.

I have to review the wiki pages related to the Jessie desktop theme selection; I think they wanted the wallpaper to be included before the freeze. Maybe I can help with publicity about that, handle the votings and so on. I like Joy, but it’s time to change a bit, new fresh air into the room!


Filed under: My experiences and opinion Tagged: Contributing to libre software, Debian, English, Free Software, Moving into free software

20 July, 2014 08:41PM by larjona

John Goerzen

Beautiful Earth

Sometimes you see something that takes your breath away, maybe even makes your eyes moist. That happened when I saw this photo one morning:

Sunrise on the Prairie

Photography has been one of my hobbies since I was a child, and I’ve enjoyed it all these years. Recently I was inspired by the growing ease of aerial photography using model aircraft, and can now fly two short-range RC quadcopters. That photo came from the first one, and despite coming from a low-res 1280×720 camera, the image of our home in the yellow glow of sunrise brought a deep feeling of beauty and peace.

DJI00196

Somehow seeing our home surrounded by the beauty of the immense wheat fields and green pastures drives home how small we all are in comparison to the vastness of the earth, and how lucky we are to inhabit this beautiful planet.

DJI00204

As the sun starts to come up over the pasture, the only way you can tell the height of the grass at 300ft is to see the shadow it makes on the mowed pathway Laura and I use to get down to the creek.

DJI00149

This is a view of our church in a small town nearby — the church itself is right in the center of the photo. Off to the right, you see the grain elevators that can be seen for miles across the Kansas prairie, and of course the fields are never all that far off in the background.

Here you can see the quadcopter taking off from the driveway:

And here it is flying over my home church out in the country:

DJI00120

That’s the country church, at the corner of two gravel roads – with its lighted cross facing east that can be seen from a mile away at night. To the right is the church park, and the green area along the road farther back is the church cemetery.

DJI00063

Sometimes we get in debates about environmental regulations, politics, religion, whatever. We hear stories of missiles, guns, and destruction. It is sad, this damage we humans inflict on ourselves and our earth. Our earth — our home — is worth saving. Its stunning beauty from all its continents is evidence enough of that. To me, this photo of a small corner of flat Kansas is proof enough that the home we all share deserves to be treated well, and saved so that generations to come can also get damp eyes viewing its beauty from a new perspective.

DJI00183

20 July, 2014 07:04PM by John Goerzen

hackergotchi for Thomas Goirand

Thomas Goirand

sysvinit not sending output to all consoles

I spent many, many hours trying to understand why I couldn’t have both “nova console-log” showing me the console output AND the OpenStack dashboard (eg: horizon) console working at the same time. Normally, this is done very easily, by passing the console= parameter to the Linux kernel multiple times, as follows:

console=tty0 console=ttyS0,115200
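(For reference, on a typical Debian system those parameters live in /etc/default/grub, with update-grub run afterwards; a sketch:)

# /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"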

But it never worked. It was always the last console= entry that was taken into account by sysvinit (or, shall I say, bootlogd). Spending hours trying to figure out the correct kernel command line to pass didn’t help. Then this week-end, by the pure chance of being subscribed to the sysvinit bug reports, I finally found out. We’ve had this bug in Debian for more than 10 years:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=181756

And it has the patch! It just feels so lame that the issue has been pending since 2003, with a patch available since 2006, and nobody even tried to get it into Debian. I tried the Wheezy patch in the above bug report, and then tadaaaaaa! I finally had both the “nova console-log” (eg: ttyS0) console output and the interactive tty0 working on my Debian cloud image. I have produced a fixed version of the sysvinit package for Wheezy, if anyone wants to try it:

http://archive.gplhost.com/debian/pool/juno-backports/main/s/sysvinit/

This doesn’t only affect the cloud image use case. Let’s say you have a server. If it’s a modern server, you probably have IPMI 2.0 on it. While having access through the integrated KVM over IP may be nice, watching the boot process through serial console redirection is often a lot snappier than the (often VNC based) video output, plus it doesn’t require Java. Too often, Java is a requirement for these nasty IPMI web interfaces (that’s the case for at least: Dell DRAC, Supermicro IPMI, and HP iLO). Well, it should now be possible to just use ipmitool to debug the server boot process or to go fix stuff in the single user interactive shell, AND keep the “normal” video output! :)

But keeping this fix private doesn’t help much. I would really love to get this fixed within Debian. So I have sent the patch (which needed a very small rebase) to the Git repository of sysvinit (see http://deb.li/3YxUD). I of course tested it in Sid too. Though I tested only under a Xen virtual machine, I see no reason why it would work there but not elsewhere. That being said, I would welcome more testing, given the high profile of sysvinit (everyone uses/needs it, and I wouldn’t like to carry alone the unhappiness of having no boot log). Please do test it from the sysvinit git, before it’s even uploaded to Sid. Also, these days, sysvinit often gets uploaded to Experimental first. It will probably also be the case for version 2.88dsf-56.

If it works well and nobody complains about the patch, maybe it’d be worth adding it to Wheezy as well (though that decision is of course left to the release team once the fix reaches Jessie).

20 July, 2014 04:10PM by admin

July 19, 2014

hackergotchi for Steve Kemp

Steve Kemp

Did you know xine will download and execute scripts?

Today I was poking around the source of Xine, the well-known media player. During the course of this poking I spotted that Xine has skin support - something I've been blissfully ignorant of for many years.

How do these skins work? You bring up the skin-browser, by default this is achieved by pressing "Ctrl-d". The browser will show you previews of the skins available, and allow you to install them.

How does Xine know what skins are available? It downloads the contents of:

NOTE: This is an insecure URL.

The downloaded file is a simple XML thing, containing references to both preview-images and download locations.

For example the theme "Sunset" has the following details:

  • Download link: http://xine.sourceforge.net/skins/Sunset.tar.gz
  • Preview link: http://xine.sourceforge.net/skins/Sunset.png

If you choose to install the skin, the Sunset.tar.gz file is downloaded via HTTP, extracted, and the shell-script doinst.sh is executed, if present.

So if you control DNS on your LAN you can execute arbitrary commands if you persuade a victim to download your "corporate xine theme".

Probably a low-risk attack, but still a surprise.

19 July, 2014 08:48PM

hackergotchi for Jo Shields

Jo Shields

Transition tracker

Friday was my last day at Collabora, the awesome Open Source consultancy in Cambridge. I’d been there more than three years, and it was time for a change.

As luck would have it, that change came in the form of a job offer 3 months ago from my long-time friend in Open Source, Miguel de Icaza. Monday morning, I fly out to Xamarin’s main office in Boston, for just over a week of induction and face time with my new co-workers, as I take on the title of Release Engineer.

My job is to make sure Mono on Linux is a first-class citizen, rather than the best-effort it’s been since Xamarin was formed from the ashes of the Attachmate/Novell deal. I’m thrilled to work full-time on what I already do as community work – including making Mono great on Debian/Ubuntu – and hope to form new links with the packaging communities in other major distributions. And I’m delighted that Xamarin has chosen to put its money where its mouth is and fund continued Open Source development surrounding Mono.

If you’re in the Boston area next week or the week after, ping me via the usual methods!

IMG_20140719_203043

19 July, 2014 07:35PM by directhex

hackergotchi for Vasudev Kamath

Vasudev Kamath

Stop messing with my settings Network Manager

I use a laptop with an Atheros wifi card and the ath9k driver. I use hostapd to turn my laptop's wifi into an AP (access point) so I can share the network connection with my Nexus 7 and Kindle. This had been working fine for quite some time, until my recent update.

After a recent system update (I use Debian Sid), I couldn't convert my wifi into an AP any more, so my devices could not connect. I can't find anything in the logs nor in hostapd's debug messages that is useful for troubleshooting the issue. Every time I start the laptop, the wifi card is blocked by RF-KILL and I have to unblock it manually (both hard and soft blocks). A sketch of that unblocking, using rfkill:
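rfkill list               # show which radios are soft- or hard-blocked
rfkill unblock wifi       # clears a soft block; a hard block needs the hardware switch

After that, the script I use to convert my wifi into an AP: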

#!/bin/bash
# Usage: <this script> <wifi-interface> <upstream-interface>

# Initial wifi interface configuration
ifconfig "$1" up 192.168.2.1 netmask 255.255.255.0
sleep 2

# start dhcp
sudo systemctl restart dnsmasq.service

# NAT from the wifi interface out through the upstream interface
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables -t nat -A POSTROUTING -o "$2" -j MASQUERADE
iptables -A FORWARD -i "$1" -j ACCEPT

sysctl -w net.ipv4.ip_forward=1

# start hostapd
hostapd /etc/hostapd/hostapd.conf &> /dev/null &

I tried rebooting the laptop and managed to convert my wifi into an AP for a while. I noticed at the same time that Network Manager was not started once the laptop booted; yeah, this also started happening after the recent upgrade, which I guess is the black magic of systemd. After some time I noticed the wifi had gone down, and now I couldn't bring it up because it was blocked by RF-KILL. Checking the syslog, I noticed the following lines.

Jul 18 23:09:30 rudra kernel: [ 1754.891060] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): using nl80211 for WiFi device control
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): driver supports Access Point (AP) mode
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): new 802.11 WiFi device (driver: 'ath9k' ifindex: 10)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): exported as /org/freedesktop/NetworkManager/Devices/8
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> (mon.wlan0): preparing device
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> devices added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0)
Jul 18 23:09:30 rudra NetworkManager[5485]: <info> device added (path: /sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0/net/mon.wlan0, iface: mon.wlan0): no ifupdown configuration found.
Jul 18 23:09:33 rudra ModemManager[891]: <warn>  Couldn't find support for device at '/sys/devices/pci0000:00/0000:00:1c.1/0000:04:00.0': not supported by any plugin

Well, I couldn't figure out much, but it looks like NetworkManager comes up, sees the interface mon.wlan0 (a monitoring interface created by hostapd to monitor the AP), goes mad, and tries to do something with it. I've no clue what it is doing and don't have enough patience to debug it. Probably some expert can give me hints on this.

So as a last resort I purged NetworkManager completely from the system, settled back on good old wicd, and rebooted the system. After the reboot the wifi card is happy and is not blocked by RF-KILL, and now I can convert it to an AP and use it as long as I wish without any problems. Wicd is not a great tool, but it's good enough to get the job done, and it does only what it is asked to do, unlike NetworkManager.

So in short

NetworkManager please stop f***ing with my settings and stop acting oversmart.

19 July, 2014 07:09PM by copyninja

July 18, 2014

Ian Donnelly

How-To: Write a Plug-in

Hi Everybody!

I wanted to write a how-to on writing an Elektra plug-in. Plug-ins are what allow Elektra to translate regular configuration files into the Elektra key database, and allow keys to be converted back into regular configuration files. For example, the hosts plug-in is used to transcribe a hosts file into a meaningful KeySet in the Elektra key database. This plugin is what allows the kdb tool to work with hosts files, like in our mount tutorial.

While there are already a good number of plug-ins written for Elektra, we obviously don’t cover all the different types of configuration files. If you want to be able to use the features of Elektra, you may have to write a plug-in for your type of configuration file. Luckily, it’s not hard to do. Over the next few weeks I will be writing a tutorial on how to write your very own plug-in, as well as explaining all the components of an Elektra plug-in. I will link the tutorials below as I finish them for easy reference; if you already follow my blog, I am sure you will notice them as they get published.

18 July, 2014 07:49PM by Ian Donnelly

Dominique Dumont

Looking for help to package Perl6, moar and others for Debian

Let’s face reality: I cannot find the time to properly maintain Perl6 related packages for Debian. Given the recent surge of popularity of rakudo, it would be a shame to let these packages rot.

Instead of throwing the towel, I’d rather call for help to maintain these packages. You don’t need to be a Debian Developer or Maintainer: I will gladly review and upload packages.

The following packages are looking for a maintainer:

  • rakudo (currently RC buggy)
  • moar (needs to be packaged, some work has been done by Daniel Dehennin)
  • parrot (up to date)
  • nqp (needs to be updated; the current version no longer compiles on all architectures)

The next step to help Perl6 on Debian is to join:

All the best



Tagged: debian, package, Perl6

18 July, 2014 05:32PM by dod

Osamu Aoki

Debian does not boot ...Crucial/Micron RealSSD m4/C400/P400

Today, my PC did not boot into Debian as usual.  The BIOS could not find my /dev/sda and went looking for the netboot image.  I restarted my PC and got into the BIOS boot setting menu.  Hmmm.... my first SSD (/dev/sda) was missing.  My second HDD (/dev/sdb) was there, but I did not put the Grub boot-loader there.  No wonder it did not boot.

I have a 32GB USB3 stick with a full Debian system.  It is not a live CD image USB stick but an HDD-formatted and encrypted system.  Though it is not the fastest system, it is very light and usable.  I plugged it in and powered it up.  It booted OK but /dev/sda was still missing.  While it booted, I saw "ata1: COMRESET failed (errno=-16)".  So this ata1 SSD could not be accessed from the BIOS nor from Linux.  Sigh ...

Looking around the web from the USB stick system, I saw some people saying that a loose Serial ATA cable sometimes causes this message.  Since my PC is a laptop, it has no flexible cable but an on-board connector inside for the SSD.

Hoping my problem was just a bad connection, I cracked open the back panel of my PC.  The SSD looked fine.  I unplugged it from the connector and reinserted it.  After repeating this several times to be sure, I closed the back panel and booted.

It boots as expected into Debian.  Looks like everything is fine:

  SMART Error Log Version: 1
  No Errors Logged

Good.
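That error log comes from smartmontools; a sketch of the query (adjust the device name to your system):

smartctl -l error /dev/sda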

If you have a boot problem like mine, please reseat your SSD in its connector like I did, before you panic.

Good luck.

Osamu

PS: This Crucial/Micron RealSSD m4/C400/P400 M4-CT256M4SSD2 previously had a problem: a firmware bug made it read-only.  The firmware update fixed my Debian system on this SSD.  I could fix this without a Win*** OS since the firmware update came as a bootable disk image file.

18 July, 2014 03:50PM by Osamu Aoki (noreply@blogger.com)

hackergotchi for Mario Lang

Mario Lang

Crowdsourced accessibility: Self-made digital menus

Something straight out of the real world: menu cards in restaurants are not nice to deal with if you are blind. It is an old problem we have grown used to ignoring over time, but still something that can be quite nagging.

There are a lot of psychological issues involved in this one. Of course, you can ask for the menu to be read out to you by the staff. While they usually do their best, you end up missing out on some things most of the time.

First of all, depending on the current workload in the restaurant, the staff will usually try to save some time and not read everything to you. What they usually do is try to understand what type of meal you are interested in, and just read the choices from that category to you. While this can be considered a service in some situations (human preprocessing), there are situations where you will definitely miss a highlight on the menu that you would have liked to choose if you had known it was there.

And even if the staff decides to read the complete menu to you (which is rare), you are confronted with the 7-things-in-my-head-at-once problem. It is usually rather hard to decide amongst a list of more than 7 items, because our short-term memory is sort of limited. What the sighted restaurant goers do is skip back and forth between the available options until they hit a decisive moment. True, that can take a while, but it is definitely a lot easier if you can perform "random access reads" on the list of choices yourself. However, if someone presents a substantial number of choices to you in a row, as sequential speech, you lose the random access ability. Either you remember every choice from the beginning and do your choosing mentally (if you do have extraordinary mental abilities), or you end up asking the staff to read previous items aloud again. This can work, but usually it doesn't. At some point, you do not want to bother the staff any more, and you even start to feel stupid for asking again and again, while this is something totally normal for every sighted person; just that "they" do their "random access browsing" on their own, so "they" have no need to feel bad about how long it takes them to decide, minus the typical social pressure that arises after a certain time for everyone, at least if you are dining in a group.

In very rare cases, you happen to meet staff that is truly "awake", doing their best not to let you feel that they might be pressed for time, and really taking as much time as necessary to help you make the perfect decision. This is rare, but when it happens, it is almost a magical moment. One of these moments where there are no "artificial" barriers between humans communicating. Anyway, I am drifting away.

The perfect solution to this problem is to provide random access browsing of a restaurant menu with the help of digital devices. Trying to make braille menus available in all restaurants is a goal which is not realistically reachable. Menus go out of date and need changing, and getting a physical braille copy updated and reprinted is considerably more involved than with digital media. Restaurant owners will also likely not see the benefit of providing a braille menu for a very small circle of customers. With a digital online menu, that is a completely different story.

These days, almost every blind person in at least my social circles owns an iOS (or similar) device. These devices have speech synthesis and web browsers.

Of course, some restaurants, especially in urban areas, do already have a menu online. I have sometimes found them manually with google and friends in the past, which has already given me the ability to actually sit back and really comfortably choose amongst the available offerings myself, without having to bother a human, and without having to feel bad about (ab)using their time.

However, the case where a restaurant really has its menu online is still rather rare in my area. And it can be tedious to google for a restaurant website. Sometimes the website itself is just marginally accessible, which makes it even more frustrating to get a relaxed dinner experience.

I have discovered a location-based solution for the restaurant-menu problem recently. Foursquare offers the ability to provide a direct link to the menu in a restaurant entry. I figured, since all you need to do is write a single webpage where the (common) menu items are listed per restaurant, that I could begin to create restaurant menus for my favourite locations on my own. Well, not quite, but almost. I will sometimes need help from others to get the menu digitized, but that's just a one-time piece of work I hopefully can outsource :-). Once the actual content is in my INBOX, I create a nice HTML page listing the menu in a rather speech-based-browser-friendly way.

I have begun to do this today, with the menu of a restaurant just about 500 meters away from my apartment. Unterm goldenen Dachl now has a menu online, and the Foursquare change request to publish the corresponding URL is already pending. I don't fully understand how the Foursquare change review process works yet, but I hope the URL will be published in the coming days/weeks.

I am using Foursquare because it is the backend of a rather popular mobile navigation app for blind people, called Blindsquare. Blindsquare lets you comfortably use Open Street Map and Foursquare data to get an overview of your surroundings. If a food place has a menu entry listed in Foursquare, Blindsquare conveniently shows it to you and opens a browser if you tap it. So there is no need to actually search for the restaurant; you can just use the location-based search of Blindsquare to discover the restaurant entry and its menu link directly from within Blindsquare. Actually, you could even find a restaurant by accident, and with a little luck, find the menu for it by location, without even knowing what the restaurant is called. Isn't that neat? Yeah, that's how it is supposed to work; that's as much independence as you can get.

And it is, as the title suggests, crowdsourced accessibility. Because while it is nice if a restaurant owner cares to publish their menu themselves, if they haven't, you can do it yourself. Either as a user of assistive technologies, to scratch your own itch, or as a friend of a person with a need for assistive technologies. Next time you go to lunch with your blind friend, consider making the menu available to them digitally in advance, instead of reading it. Other people will likely thank you for it, and you will have actually achieved something that day. And if you happen to put a menu online, make sure to submit a change request to Foursquare. Many blind people are using Blindsquare these days, which makes it super-easy for them to discover the menu.

18 July, 2014 02:50PM by Mario Lang

Elena 'valhalla' Grandi

Reducing useless noise from irssi

Yesterday I missed a query from a friend (with the answer to a question *I* had asked in the first place) because it ended up in window 30-something and my statusbar was full of dim numbers from channels where people had just joined/left.

This morning I've set
activity_hide_level = JOINS PARTS QUITS
and my world is a much neater place :)

(I may have to add NICKS and possibly MODES, but they are rare enough and I'm still not sure I don't care about them, especially the latter.)
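
For anyone wanting to do the same without editing the config file by hand: if I remember the client correctly, the same setting can also be changed from within a running irssi, e.g.

/set activity_hide_level JOINS PARTS QUITS
/save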

18 July, 2014 12:24PM by Elena ``of Valhalla''

July 17, 2014

hackergotchi for Jonathan McDowell

Jonathan McDowell

On the state of Free VoIP

Every now and then I decide I'll try and sort out my VoIP setup. And then I give up. Today I tried again. I really didn't think I was aiming that high. I thought I'd start by making my email address work as a SIP address. Seems reasonable, right? I threw in the extra constraints of wanting some security (so TLS, not UDP) and a soft client that would work on my laptop (I have a Grandstream hardphone and would like an Android client as well, but I figure those are the easy cases while the "I have my laptop and I want to remain connected" case is a bit trickier). I had a suitable Internet connected VM, access to control my DNS fully (so I can do SRV records) and time to read whatever HOWTOs required. And oh my ghod the state of the art is appalling.
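
For reference, the DNS side is at least well-trodden; pointing an email-style SIP address at a server boils down to SRV records of roughly this shape (names, priorities and TTLs here are illustrative):

_sips._tcp.example.org. 3600 IN SRV 10 0 5061 sip.example.org.
_sip._tcp.example.org.  3600 IN SRV 20 0 5060 sip.example.org.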

Let's start with getting a SIP server up and running. I went with repro, which seemed to be a reasonably well-recommended SIP server to register against. And mostly getting it up and running and registering against it is fine. Until you try and make a TLS SIP call through it (to a sip5060.net test address). Problem the first: the StartCom free SSL certs are not suitable because they don't advertise TLS Client. So I switch to CACert. And then I get bitten by the whole question about whether the common name on the cert should be the server name, or the domain name on the SIP address (it's the domain name on the SIP address apparently, though that might make your SIP client complain).

That gets the SIP side working. Of course RTP is harder. repro looks like it's doing the right thing. The audio never happens. I capitulate at this point, and install Lumicall on my phone. That registers correctly and I can call the sip:test.time@sip5060.net test number and hear the time. So the server is functioning, it's the client that's a problem. I try the following (Debian/testing):

  • jitsi - Registers fine, seems to lack any sort of TURN/STUN support.
  • ekiga - No sign of TLS registration support.
  • twinkle - Not in testing. A recompile leads to no sign of an actual client starting up when executed.
  • sflphone - Fails to start (Debian bug #745695).
  • Empathy - Fails to connect. Doesn't show any useful debug.
  • linphone - No TLS connect (Debian bug #743494).

I'm bored at this point. Can I "dial" my debian.org SIP address from Lumicall? Of course not; I get a "Codecs incompatible" (SIP 488 Not Acceptable Here) response. I have no idea what that means. I seem to have all of the options on Lumicall enabled. Is it a NAT thing? A codec thing? Did I sacrifice the wrong colour of goat?

At some point during this process I get a Skype call from some friends, which I answer. Up comes a video call with them, their newborn, perfect audio, and no hassle. I have a conversation with them that doesn't involve me cursing technology at all. And then I go back to fighting with SIP.

Gunnar makes the comment about Skype creating a VoIP solution 10 years ago when none was to be found. I believe they're still the market leader. It just works. I'm running the Linux client, and they're maintaining it (a little behind the curve, but close enough), and it works for text chat, voice chat and video calls. I've spent half a day trying to get a Free equivalent working and failing. I need something that works behind NAT, because it's highly likely when I'm on the move that's going to be the case. I want something that lets my laptop be the client, because I don't want to rely on my mobile phone. I want my email address to also be my VoIP address. I want some security (hell, I'm not even insisting on SRTP, though I'd like to). And the state of the Open VoIP stack just continues to make me embarrassed.

I haven't given up yet, but I'd appreciate some pointers. And Skype, if you're hiring, drop me a line. ;)

17 July, 2014 10:00PM

hackergotchi for Daniel Pocock

Daniel Pocock

MH17 and the elephant in the room

Just last week, air passengers were told of intrusive new checks on their electronic devices when flying.

For years, passengers have also suffered bans on basic essentials like drinking water and excesses like the patting down of babies that even Jimmy Saville would find offensive.

Of course, all this is being done for public safety.

So if western leaders claim the safety and security of their citizens is really their number one priority, just how is it that a passenger aircraft can be flying through a war zone where two other planes were shot down this month? When it comes to aviation security, this really is the elephant in the room. The MH17 tragedy today demonstrates that terror always finds a way. It is almost like the terrorists can have their cake and eat it too: they force "free" countries to give up their freedoms and public decency and then they still knock the occasional plane out of the sky anyway.

History in the making?

It is 100 years since the assassination of Austrian Archduke Franz Ferdinand started World War I and just over 50 years since the Cuban missile crisis. Will this incident also achieve similar notoriety in history? The downing of MH17 may well have been a "mistake" but the casualties are real and very tragic indeed. I've flown with Malaysia Airlines many times, including on the same route as MH17, and I feel a lot of sympathy for these people who have been affected.

17 July, 2014 07:09PM by Daniel.Pocock

Tiago Bortoletto Vaz

HOPE X ical for schedule

As the Adirondack (the MTL-NYC train line) is not Internet-friendly for RSS feeds, I can't use my ~11h of travelling to check this huge schedule the way I want to (= having a timetable view including room, description and speakers). HOPE X has just released a pdf and an xls (wtf??), but these contain only titles and rooms.

So I've coded an ics generator to process their feed. The result file is available at http://acaia.ca/hopex.ics and should be up to date with the original RSS.
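
In case it helps anyone doing something similar, the core of such a generator is tiny. Here is a minimal sketch in Python using feedparser and icalendar; note that the feed URL and the room/date field names are placeholders, not the actual HOPE X schema:

import feedparser
from datetime import datetime
from icalendar import Calendar, Event

# placeholder URL; point this at the actual schedule RSS
feed = feedparser.parse('http://example.org/hopex-schedule.rss')

cal = Calendar()
cal.add('prodid', '-//hopex-ics//')
cal.add('version', '2.0')

for entry in feed.entries:
    ev = Event()
    ev.add('summary', entry.title)
    ev.add('description', entry.get('summary', ''))
    ev.add('location', entry.get('room', ''))  # 'room' is a guessed field name
    if 'published_parsed' in entry:
        ev.add('dtstart', datetime(*entry.published_parsed[:6]))
    cal.add_component(ev)

with open('hopex.ics', 'wb') as f:
    f.write(cal.to_ical())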

17 July, 2014 05:05PM

Craig Small

No more dspam, now what?

I was surprised at first to see that a long-standing bug in dspam had been fixed. Until, that is, I realised it was from the Debian ftp masters and the reason the bug was closing was that dspam was being removed from the Debian archive.

Damn!

So, now what? What is a good replacement for dspam that is actually maintained? I don't need anti-virus because mutt just ignores those sorts of things and besides youbankdetails.zip.exe doesn't run too well on Debian. dspam basically used tokens to find common patterns of spam and ham, with you bouncing misses so it learnt from its mistakes. I've already got postgrey running for greylisting, so what I really need is something that does the Bayesian filtering.

Some initial comments:

  • bogofilter looks interesting and seems the closest thing so far
  • cluebringer aka policyd seems like a policy and bld type of spam filter, not Bayesian
  • I've heard crm114 is good but hard to use
  • spamassassin – I used to use this, not sure why I stopped

There really is only me on the mailserver with a pretty light load so no need to worry about efficiencies.  Not sure if it matters but my MTA is postfix and I already use procmail for delivery.
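
Whichever filter wins, the procmail plumbing should be easy. With bogofilter, for example, I understand the glue is roughly this (the folder name is just an example):

# run each message through bogofilter; -p adds an X-Bogosity header,
# -u auto-updates the wordlists, -e exits 0 for spam and ham alike
# so the recipe doesn't fail
:0fw
| bogofilter -uep

# file anything classified as spam into its own folder
:0:
* ^X-Bogosity: Spam
spam/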

17 July, 2014 12:33PM by Craig Small

Juliana Louback

Contribute a JSCommunicator Translation

To those who don’t know, JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging. We’ve recently added i18n internationalization support for English, Portuguese, Spanish, French and now Hebrew. We would like to have support for other languages and it’s a pretty straightforward process to add a translation, so even if you do not have a tech background you can contribute!

Setup

First, create an account on Github. Github is a hosting service for git repositories and open source projects get to use Github for FREE! As a result, many open source project repositories are hosted on Github, including but not limited to JSCommunicator.

Once you have created your git account, follow steps 1-4 in the section Setting up git listed in Github’s setup tutorial. This will help you install git and configure anything you need.

Fork

Now fork the JSCommunicator repo. I really like the word ‘fork’. Great term. If this is your first fork, know that this will make a copy of the JSCommunicator repo and all its respective code to your Github account. Here’s how we go about it:

  1. Sign in to Github

  2. Browse to https://github.com/JLouback/jscommunicator

  3. Somewhere around the upper right corner, you will see a button labeled ‘Fork’. Click that.

[screenshot: the Fork button]

You should now have your own JSCommunicator repository on your Github account. On your main page select the tab ‘Repositories’. It should be the first on the list. The url will be https://github.com/YOUR-USERNAME/jscommunicator. My Github username is JLouback, so my jscommunicator repo can be found at https://github.com/JLouback/jscommunicator.

Clone

Now you should download all that code to your local machine so you can edit it. On your JSCommunicator page somewhere around the lower right corner you’ll see a field entitled HTTPS clone URL:

[screenshot: the HTTPS clone URL field]

I don’t know if this is correct, but you can just copy the URL on your browser too. I do that.

Now on your terminal, navigate to the directory you’d like to place your repository and enter

git clone https://github.com/YOUR-USERNAME/jscommunicator

The current directory should now have a folder named ‘jscommunicator’ and in it is all the JSCommunicator code.

Switch branches

The JSCommunicator repo has a branch for the i18n support. A branch is basically a copy of the code that you can modify without changing the original code. It’s great for adding new features, once everything is done and tested you can add the new feature to the original code. A more detailed explanation of branches can be found here.

On the terminal, enter

cd jscommunicator
git branch -a

This will list all the branches in the repository, both local and remote. There should be a branch named ‘remotes/origin/i18n-support’. Switch to this branch by entering

git checkout --track origin/i18n-support

This will create a local branch named i18n-support with all the contents of the remote i18n-support branch in https://github.com/opentelecoms-org/jscommunicator.

Create a .properties file

Now your jscommunicator directory will have some new files for the internationalization functionality. Among them you should find an INTERNATIONALIZATION_README file with instructions for adding a JSCommunicator translation. It’s a less detailed version of this post, but it’s worth a read in case I’ve missed something here.

Go to the directory ‘internationalization’. You’ll see a few .properties files with different language codes. Choose the language of your preference for the translation base and make a copy of it in the internationalization directory. Your copy should be named ‘Messages_LANGUAGECODE.properties’. For example, the language code for German is ‘de’, so the German file should be named ‘Messages_de.properties’. If you’re not sure about the code for your language, please check out this list of ISO 639-1 language codes so you’re using the same code as (most) browsers.

The .properties file is a list of key-value pairs: a word or a few words joined by ‘_’, which is the key, the equals sign ‘=’, then a word or phrase which is the value. Do not change anything to the left of the equals sign. Translate what is on the right of the equals sign.
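
For example, a pair from the English file and its German counterpart could look like this (the key name is invented for illustration; check the real file for the actual keys):

# in Messages.properties (English)
chat_send_button=Send

# in Messages_de.properties (German)
chat_send_button=Senden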

Modify available_languages.xml

Go back to the root directory (jscommunicator) and open the file named ‘available_languages.xml’. This XML file has a series of language elements. At the end of the last ‘</language>’ add a new language element with your language display name and code, copying the previous language elements. You actually can put your language element anywhere, as long as it’s after the opening tag of the language list and before its closing tag, as well as not cutting into any of the other language elements. Your display name should be the name of the language in its native language, for example for a French translation we’ll put ‘Français’ (I think). And the code is the code we used for the .properties file. Here’s how it should look:

<language>
    <display>Deutsch</display>
    <code>de</code>
</language>

Save your changes and voila!

Test your work

Once you’ve made a .properties file, JSCommunicator will automatically load your translation if you’ve set your browser preference to that language. If your browser preference is French, it will load Messages_fr.properties. If your browser preference is a language we don’t have a translation for (let’s say German), JSCommunicator will load the default Messages.properties file, which is in English.

The available_languages.xml is used to build a language selection menu. If you add a language element without a corresponding .properties file, JSCommunicator will throw a JS error and load the default (English) translation. The same happens if you use different language codes in the .properties file name and the available_languages.xml. The error won’t disturb your use of JSCommunicator, it just won’t load the language you selected. No harm, no foul. But if you want others to use your translation, do take care to do this correctly.

If you read the INTERNATIONALIZATION_README, you will have seen that we need the jquery.i18n.properties.js file that’s included in the .html pages. You can download that code here. Be sure to place it in the jscommunicator directory. This is only necessary if you want to see your addition at work, you don’t need this file to contribute your translation. There are other 3rd party code dependencies to run JSCommunicator too, as you may have noticed. Just creating the .properties file and adding a language element to available_languages.xml, if done correctly, is enough to make a pull request.

Commit

Here’s the part where we load all your local changes to your remote repository. In the terminal, navigate to the root directory (jscommunicator) and enter

git add internationalization/Messages_de.properties
git add available_languages.xml

Of course, please replace ‘Messages_de.properties’ with the name of your new .properties file. And mind you, we don’t have a German translation yet!

Then enter

git commit -m "German translation"

You can enter whatever you’d like in the “ “ part, it’s a good idea to explain what language you are adding. Now we can push the changes:

git push -u origin i18n-support

You should now see the changes you made in your Github account page at github.com/YOUR_USER/jscommunicator. The message you put in double quotations (“ “) in the commit will be beside each modified file. This push also creates a new branch in your jscommunicator repository, now you will have a ‘master’ and a ‘i18n-support’ branch.

Pull

Now it’s time to share your translation with everyone else if you feel so inclined. Your version of JSCommunicator has a new translation but not the official version. First navigate to your jscommunicator repo at github.com/YOUR_USER/jscommunicator and make sure you are on the i18n-support branch as that will contain your changes.

[screenshot: the branch selection dropdown]

Click the green button directly to the left of the branch drop down menu shown above. This will create a pull request to add your new code to the official jscommunicator code. Once you click that button, you should see this:

[screenshot: the pull request creation page]

Make sure that the base repo (the first on the line above the green Create pull request button) is opentelecoms-org:i18n-support and not opentelecoms-org:master or another branch name. If it’s not, just click ‘Edit’ on the right and you’ll be able to select the branch from a dropdown like so:

[screenshot: editing the base branch of the pull request]

If the first repo is opentelecoms-org:i18n-support and the second is YOUR_USER:i18n-support, and you’ve verified (scroll down) that there are 2 files changed, which are your new .properties file and the added element in available_languages.xml, go ahead and press ‘Create pull request’. It’ll open a window with a text box where you can add an explanation, with one final ‘Create pull request’ button. Click that and you’re done! You’ve kindly contributed a translation for JSCommunicator!

17 July, 2014 08:06AM

Russ Allbery

wallet 1.1

Wallet is the secure credential management infrastructure that we use at Stanford, primarily for keytabs but increasingly for any sort of security keys that have to be stored somewhere and retrieved by specific systems or people.

The primary goal of this release is to add Duo support. This is currently somewhat preliminary, with only a single Duo integration object type that creates a UNIX integration. (Well, technically it can create any type of integration, but the integration information is returned in the format expected by the UNIX integration.) I expect a later release to rename all existing "duo" object types to "duo-unix" and add additional object types for the various other types of integrations that one wants to support, but that work will have to wait for another day.

Since it's been over a year since the previous release, there are also other accumulated bug fixes and improvements. I also tried to merge or address as many issues or patches that had been sent to me over the past year as I could, although many larger patches or improvements had to be deferred. Highlights:

  • The owner and getacl commands now return the name of the ACL instead of its numeric ID, as they probably should have from the beginning.

  • The date passed to expires can now be in any format Date::Parse supports. (On a related note, Date::Parse is now required.)

  • wallet-rekey now works properly on keytabs containing multiple principals. I had for some reason assumed that one could form a keytab containing multiple principals by just concatenating several together, but that definitely does not work. wallet-rekey now appends new keys to the end of the existing keytab. Unfortunately, I didn't get a chance to implement purging of old keys, for the folks stuck with MIT Kerberos ktutil instead of Heimdal's.

There are also multiple other bug fixes and general improvements, such as using DateTime objects uniformly for all database access that involves date fields, and recording ACL renames in the ACL history table. Both the API and the database layer are still kind of a mess, and I'd love to rewrite them with the benefit of experience and more knowledge, but that's a project for another day.

You can get the latest release from the wallet distribution page.

17 July, 2014 12:16AM

July 16, 2014

hackergotchi for Steve Kemp

Steve Kemp

So what can I do for Debian?

So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).

In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)

My NM-progress can be tracked here, and once accepted I have a plan for my activities:

  • I will minimally audit every single package running upon any of my personal systems.
  • I will audit as many of the ITP-packages I can manage.
  • I may, or may not, actually package software.

I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.

As progress today I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with ITP: tiptop (the program seems semi-expected to be installed setuid(0), but if it is then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").

(ObRandom still waiting for a CVE identifier for #749846/TS-2867..)

And now sleep.

16 July, 2014 08:49PM

Matthias Klumpp

AppStream 0.7 specification and library released

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion-percentage for the language. This allows software-centers to apply smart filtering on applications to highlight the ones which are available in the users native language.
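
In the distribution XML, such an entry would look roughly like this (locales and percentages are made up for illustration):

<languages>
  <lang percentage="100">en_GB</lang>
  <lang percentage="96">de</lang>
</languages>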

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific dbus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)
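
On the metadata side, such a provided interface would be declared along these lines (the interface name is used purely as an example):

<provides>
  <dbus type="system">org.freedesktop.PackageKit</dbus>
</provides>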

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs, YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).

As a small sidenote: the multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have “Quickstart pages” in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons, more will follow in future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: We switched from using LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 license family of GPL licenses – I like it for tivoization protection, its explicit compatibility with some important other licenses and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, a LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be used by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that’s it with the changes for now! Thanks to everyone who helped making 0.7 ready, being it feedback, contributions to the documentation, translation or coding. You can get the release tarballs at Freedesktop. Have fun!

16 July, 2014 03:03PM by Matthias

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Introducing RcppParallel: Getting R and C++ to work (some more) in parallel

A common theme over the last few decades was that we could afford to simply sit back and let computer (hardware) engineers take care of increases in computing speed thanks to Moore's law. That same line of thought now frequently points out that we are getting closer and closer to the physical limits of what Moore's law can do for us.

So the new best hope is (and has been) parallel processing. Even our smartphones have multiple cores, and most if not all retail PCs now possess two, four or more cores. Real computers, aka somewhat decent servers, can be had with 24, 32 or more cores as well, and all that is before we even consider GPU coprocessors or other upcoming changes.

And sometimes our tasks are embarrassingly simple, as is the case with many data-parallel jobs: we can use higher-level operations such as those offered by the base R package parallel to spawn multiple processing tasks and gather the results. I covered all this in some detail in previous talks on High Performance Computing with R (and you can also consult the Task View on High Performance Computing with R which I edit).

But sometimes we can't use data-parallel approaches. Hence we have to redo our algorithms. Which is really hard. R itself has been relying on the (fairly mature) OpenMP standard for some of its operations. Luke Tierney's (awesome) keynote in May at our (sixth) R/Finance conference mentioned some of the issues related to OpenMP. One which matters is that OpenMP works really well on Linux, and either not so well (Windows) or not at all (OS X, due to the usual issue with the gcc/clang switch enforced by Apple, but the good news is that the OpenMP toolchain is expected to make it to OS X in some more performant form "soon"). R is still expected to make wider use of OpenMP in future versions.

Another tool which has been around for a few years, and which can be considered to be equally mature is the Intel Threaded Building Blocks library, or TBB. JJ recently started to wrap this up for use by R. The first approach resulted in a (now superseded, see below) package TBB. But hardware and OS issues bite once again, as the Intel TBB is not really building that well for the Windows toolchain used by R (and based on MinGW).

(And yes, there are two more options. But Boost Threads requires linking which precludes easy use as e.g. via our BH package. And C++11 with its threads library (based on Boost Threads) is not yet as widely available as R and Rcpp which means that it is not a real deployment option yet.)

Now, JJ, being as awesome as he is, went back to the drawing board and integrated a second threading toolkit: TinyThread++, a small header-only library without further dependencies. Not as feature-rich as Intel Threaded Building Blocks, but at least available everywhere. So a new package RcppParallel, so far only on GitHub, wraps around both TinyThread++ and Intel Threaded Building Blocks and offers a consistent interface available on all platforms used by R.

Better still, JJ also authored several pieces demonstrating this new package for the Rcpp Gallery:

All four are interesting and demonstrate different aspects of parallel computing via RcppParallel. But the last article is key. Based on a question by Jim Bullard, and then written with Jim, it shows how a particular matrix distance metric (which is missing from R) can be implemented in a serial manner in both R, and also via Rcpp. The key implementation, however, uses both Rcpp and RcppParallel and thereby achieves a truly impressive speed gain as the gains from using compiled code (via Rcpp) and from using a parallel algorithm (via RcppParallel) are multiplicative! Between JJ's and my four-core machines the gain was between 200 and 300 fold---which is rather considerable. For kicks, I also used a much bigger machine at work which came in at an even larger speed gain (but gains become clearly sublinear as the number of cores increases; there are however some tuning parameters).

So these are exciting times. I am sure there will be lots more to come. For now, head over to the RcppParallel package and start playing. Further contributions to the Rcpp Gallery are not only welcome but strongly encouraged.

16 July, 2014 02:56PM

Vincent Sanders

It is no great secret that my colleagues at Collabora have been doing work with the Raspberry Pi Foundation.

My desk is very near Marco and I often see him working with the various Pi boards. Recently he obtained one of the new B+ units for testing and I thought it looked a little sad sat naked on his desk.

To remedy this bare board problem I designed and laser cut a case for him, and now that the B+ has been publicly released I can make the design freely available.

The design is completely original, though it is inspired by several other plastic "clip" type designs I have seen. Originally I created and debugged the case design for my parallella; tweaking it for the Pi was pretty easy.

The design is under a CC attribution licence and I ought to say that my employer is in no way responsible for this, it's all my own fault.

16 July, 2014 11:04AM by Vincent Sanders (noreply@blogger.com)

hackergotchi for Wouter Verhelst

Wouter Verhelst

Reprepro for RPM

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do all that reprepro does by hand—by calling things like cp and dpkg-scanpackages and gpg and other things by hand—but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning, some of the RPM repositories didn't actually work correctly, and my testing didn't catch on.
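
To make the comparison concrete, each update on the RPM side currently means hand-running something like this (the paths and distribution names are invented examples, and our real setup has more local quirks):

# copy the new packages into place
mkdir -p /srv/rpm/sles11/x86_64
cp ~/result/*.rpm /srv/rpm/sles11/x86_64/

# regenerate the repository metadata
createrepo /srv/rpm/sles11/x86_64

# for the OpenSUSE repository, also sign the metadata
gpg --detach-sign --armor /srv/rpm/sles11/x86_64/repodata/repomd.xml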

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, well, running createrepo manually isn't too much of a problem. When it gets beyond one's own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts" ;-)

16 July, 2014 10:43AM

Juliana Louback

Become an Open-source Contributor Video Conference

A couple weeks ago I was contacted by Yehuda Korotkin through one of the Debian mailing lists. Yehuda is a tech professor at one of Israel’s leading colleges for women. He proposed a video conference to present the open source community to his class and explain how they can contribute to open source projects. I volunteered to participate and last Monday (July 14th 2014) we had our virtual meeting, which I hope was the first of many.

Shauna G. also participated from Boston, making it an Israel - USA - Brazil meetup. Pretty impressive, eh? The conference was hosted on Google Hangouts, the video is available for viewing here. Run time is around 2.5 hours so to save you time I’ve kindly posted a summary of the conference! To be frank, I look ghastly half the time so I’d much prefer that you read this post.

First there was a Q&A period which was more of a discussion panel, followed by a hands-on session where the girls made their first contribution to an open source project. Here’s the gist of it:

Q&A

  • What is open source [software]? Software that allows free access to its source code (ergo ‘open’), permitting analysis and any modifications to the original code. All these permissions are contained in a license included in the software. Note that the term ‘open source’ is now applied to a lot of things other than software, and I assume there’s a different definition for those cases.

  • Who is the community behind open source software? Shauna pointed out that there are many kinds of projects and many kinds of communities backing them. As an example of a for-profit model, Shauna cited RedHat, which offers Linux (an open source OS) for free and profits from support services. Some projects are supported by NGO’s (example: Center for Open Science) with a specific objective, a series of other projects are volunteer-based or are the result of a hobby project. The contributors vary accordingly, they are not only programmers but also people with a non-technical background. In sum, there’s a lot of variety in the open source community as a whole, details of structure vary case to case.

  • Why is it important that women get involved in technology? Men and women are different because of a series of reasons, among them our biological composition, influences of society and life experiences. Technology is for everyone, male or female, and we need to be able to design products that will suit everyone well. If the development team is comprised of only one sex or another, it’s likely that factors will be overlooked or ignored, and the result is not a democratic experience. Shauna gave some examples of this, such as voice-powered location search software that could easily identify stereotypically ‘male’ keywords but not many of interest to females.

    In addition to this, it’s known that there is a severe workforce deficiency in the tech industry. One proposed solution is to encourage more women to follow a career in tech. The argument is based on the assumption that any boy who is remotely interested in tech has a high likelihood of studying tech, but not every girl. Often girls are not encouraged to study STEM-related topics and in many places tech is considered inappropriate for women. I’ve read that tech companies’ engineering teams are 12% female and 88% male. That number is consistent with the stats of female college grads for STEM majors. I think boys who love tech will keep getting into tech, but there are a lot of girls who could be great engineers if they only gave it a shot. Women should be encouraged to pursue a tech career not only so they’ll get to work in such an incredible field but also so they can help assuage the gap in supply and demand present in the tech workforce today.

  • Why is open source important? We don’t always get everything we want/need from off-the-shelf software. The great thing about open source is that you are free to tweak the code at will and add whatever features you’d like. Sometimes there are even more important necessities, such as security requirements. With open source you don’t need to just take the manufacturers’ word that the code is bullet proof; open it and check it out yourself. Another factor that increases the value of open source is its stability. As Linus Torvalds said, “given enough eyeballs, all bugs are shallow”. With so many developers opening, studying and testing the code, bugs are quickly identified and removed. The fact that open source software is ‘free’ doesn’t reflect negatively on the quality; to the contrary, plenty of open source software is the best that’s out there.

  • Why its important contribute to open source? To start, it’s not written anywhere that you have to contribute. That said, it sure is nice if you do. I think there are two main reasons why people contribute. One is they want to ‘give back’ to the open source community. The other is they want to add some missing detail, feature or entire product. Of course, usually if you need something, someone else does as well, then by supplying your own need you end up helping many people out on the way. But one thing that I think applies especially to students is it’s a great way to gain experience. You have to fit in some practical learning along with all that theory and one can only be so enthusiastic about the CS projects done in school that are usually discarded at the end of the semester. By contributing to an open source project, you are doing something that’s real and that people will actually use. You also have access to an incredible network of mentors who are real pros and more than willing to help you out. Maybe I’ve just been lucky, but they are really nice people who won’t snicker if you ask a stupid question. Well, maybe they do. But I haven’t seen it happen and I’m the queen of stupid questions. Many times this is their pet project, so they’re thrilled to have your contribution. Another great thing is that you can choose what you want to work with. It’s easy to find a tech internship out there, what’s not easy is finding an interesting internship. You might work with something you dislike or (gasp) end up just bringing people coffee. But we do that stuff cause we need the work experience. With open source you have a huge array of projects you can work on, one of them is certain to suit your interests.

  • How to become a contributor? When using an open source software, you might notice things you’d like to change, or actual bugs. Let the development team know about it and be sure to also let them know if you think you can figure out the fix. And if that hasn’t happened yet, try exploring Github’s open source repositories and check out the ‘Issues’ option (upper right corner). There’s usually a list of bugs or wishlist features that you just might be the man/woman for.

Hands on

For the practical part of our conference, Yehuda’s students contributed to Jscommunicator’s Issue #5 which requests i18n support for major languages. JSCommunicator is a SIP communication tool developed in HTML and JavaScript, currently supporting audio/video calls and SIP SIMPLE instant messaging.

First we set up Github on their local machines and forked the Jscommunicator repo. We learned the basic git commands needed to push changes to their remote repositories and to create a pull request. This is explained in further detail here.

At the end of the conference with nearly an hour overtime due to my tendency to be overly verbose, Yehuda’s class made their first contribution with a Hebrew translation for Jscommunicator!

All in all, it was great to virtually meet Yehuda and his epic students, who all seem to be very clever engineers. I’m looking forward to our next meetup once I gain some time-management skills.

16 July, 2014 10:06AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Spotify migrate 5000 servers from Debian to Ubuntu

Or yet another reason why it’s really important that we succeed with Debian LTS. Last year we heard of Dreamhost switching to Ubuntu because they can maintain a stable Ubuntu release for longer than a Debian stable release (and this despite the fact that Ubuntu only supports software in its main section, which misses a lot of popular software).

Spotify Logo

A few days ago, we just learned that Spotify took a similar decision:

A while back we decided to move onto Ubuntu for our backend server deployment. The main reasons for this was a predictable release cycle and long term support by upstream (this decision was made before the announcement that the Debian project commits to long term support as well.) With the release of the Ubuntu 14.04 LTS we are now in the process of migrating our ~5000 servers to that distribution.

This is just further proof that we have to provide long term support for Debian releases if we want to stay relevant in big deployments.

But the task is daunting and it’s difficult to find volunteers to do the job. That’s why I believe that our best answer is to get companies to contribute financially to Debian LTS.

We managed to convince a handful of companies already and July is the first month where paid contributors have joined the effort for a modest participation of 21 work hours (watch out for Thorsten Alteholz and Holger Levsen on debian-lts and debian-lts-announce). But we need to multiply this figure by 5 or 6 at least to do a proper job of maintaining Debian 6.

So grab the subscription form and have a chat with your management. It’s time to convince your company to join the initiative. Don’t hesitate to get in touch if you have questions or if you prefer that I contact a representative of your company. Thank you!

16 July, 2014 08:07AM by Raphaël Hertzog

hackergotchi for Jon Dowland

Jon Dowland

Mac

My job exposes me to a large variety of computing systems and I regularly use Mac, Windows and Linux desktops. My main desktop environment at home and work has been Debian GNU/Linux for over 10 years. However every now and then I take a little "holiday" and use something else for a few weeks. Often I'm spurred on by some niggle or other on the GNOME desktop, or burn-out with whatever the current contentious issue of the moment is in Debian. Usually I switched to Windows and I used it as an excuse to play some computer games.

Last November I had just such an excuse to take a holiday but this time I opted to go for Mac. I had a back-log of Mac issues to investigate at work anyway.

I haven't looked back.

It appears I have switched for good. I've been meaning to write about this for some time, but I couldn't quite get the words right. I doubted I could express my frustrations in a constructive, helpful way: even if I think that my experiences are useful and my discoveries valuable, perhaps I would put them across in a way that seemed inciteful rather than insightful. I wasn't sure anyone cared. Certainly the GNOME community doesn't seem interested in feedback.

It turns out that one person that doesn't care is me: I didn't realise just how broken the F/OSS desktop is. The straw that broke the camel's back was the file manager replacing type-ahead find with a search, but (to seamlessly switch metaphor) it turns out I'd been cut a thousand times already. I'm not just on the other side of the fence, I'm several fields away.

Sometimes community people write about their concerns with whether they're going in the right direction, or how to tell the difference between legitimate complaints, trolls and whiners. When I look at conferences now, the sea of Thinkpads was replaced with a sea of Apple Macs a long time ago now, and the Thinkpads haven't come back. I'd suggest: don't worry about the whiners. Worry about the leavers.

What does this mean for my Debian involvement? Well, you can't help but have noticed that I've done very little this year. I've written nearly exclusively about music so far. The good news is: I still regularly use Debian, and I still intend to stay involved, just not on the desktop. I'm essentially only maintaining two packages now, lhasa and squishyball. I might pick up a few more (possibly archivemail if the situation doesn't improve) but I'm happy with a low package load; I'd like to make sure the ones I do maintain are maintained well. The sum of all my Debian efforts this year has been to get these two (or three) ship-shape. I have a bunch of other things I'd like to achieve in Debian which are not packages, and a larger package load would just distract from them. (We really are too package-oriented in Debian.)

16 July, 2014 07:18AM

July 15, 2014

hackergotchi for Michal Čihař

Michal Čihař

New UI for Weblate

For quite some time, I have been working on a new UI for Weblate. As time is always limited, progress is not as fast as I would like, but I think it's time to show the current status to a wider audience.

Almost all pages have been rewritten, the major missing parts are zen mode and source strings review. So it's time to play with it on our demo server. The UI is responsive, so it works more or less on different screen sizes, though I really don't expect people to translate on a mobile phone, so not much tweaking was done for small resolutions.

Anyway I'd like to hear as much feedback as possible :-).

15 July, 2014 04:00PM by Michal Čihař (michal@cihar.com)

hackergotchi for Mario Lang

Mario Lang

Mixing vinyl again

The turntables have me back, after quite a long mixing break.

I used to do straight 4-to-the-floor, mostly acid or hardtek. You can find an old mix of mine on SoundCloud. This one is actually back from 2006.

But currently I am more into drum and bass. It is an interesting mixing experience, since it is considerably harder. Here is a small but very recent minimix. Experts in the genre might notice that I am mostly spinning stuff from BlackOutMusicNL, admittedly my favourite label right now.

15 July, 2014 08:15AM by Mario Lang

July 13, 2014

Laura Arjona

New GPG Key!

Achievement unlocked: I have a new GPG key:

0xF22674467E4AF4A3

pub   4096R/7E4AF4A3 2014-07-13 [expires: 2016-07-12]
Fingerprint = 445E 3AD0 3690 3F47 E19B  37B2 F226 7446 7E4A F4A3
uid                  Laura Arjona Reina <laura.arjona@upm.es>
uid                  Laura Arjona Reina <larjona@fsfe.org>
uid                  Laura Arjona Reina <larjona99@gmail.com>
sub   3072R/CC706B74 2014-07-13 [expires: 2016-07-12]
sub   3072R/7E51465F 2014-07-13 [expires: 2016-07-12]
sub   4096R/74C23D6E 2014-07-13 [expires: 2016-07-12]

The master key is 4096 bit, stored in a safe place, and the 2 subkeys are 3072 bit, stored in an FSFE Smartcard (I cannot store 4096-bit keys there).

I have carefully used the FSFE SmartCard Howto and “Creating the perfect GPG keypair” by Alex Cabal for strengthening hash preferences and creating a revocation certificate.

It seems everything works as intended. The passphrase is strong and this time I will not forget it.

As a first celebration, 1/2 l of ice cream is waiting for me after dinner :)

People who know me and are around Madrid, please send me an encrypted mail as a test or for normal communication, and ping me to meet and sign keys :)
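
If you want to do that, something like the following should work:

gpg --recv-keys 0xF22674467E4AF4A3
gpg --fingerprint 0xF22674467E4AF4A3    # compare with the fingerprint above
gpg --armor --encrypt --recipient laura.arjona@upm.es message.txt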

One more step towards involvement in Debian and free software, controlling my digital life and communications, and becoming familiar with these technologies so I can teach them to my son as ‘the natural way’.

Yay!


13 July, 2014 07:47PM by larjona

hackergotchi for Steve Kemp

Steve Kemp

A brief twitter experiment

So I've recently posted a few links on Twitter, and I see followers clicking them. But also I see random hits.

Tonight I posted a link to http://transient.email/, a domain I use for "anonymous" emailing, specifically to see which bots hit the URL.

Within two minutes I had 15 visitors, the first few of which were:

IP              | User-Agent                                                                                 | Request
199.16.156.124  | Twitterbot/1.0;                                                                            | GET /robots.txt
199.16.156.126  | Twitterbot/1.0;                                                                            | GET /robots.txt
54.246.137.243  | python-requests/1.2.3 CPython/2.7.2+ Linux/3.0.0-16-virtual                                | HEAD /
74.112.131.243  | Mozilla/5.0 ();                                                                            | GET /
50.18.102.132   | Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                   | HEAD /
50.18.102.132   | Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                   | HEAD /
199.16.156.125  | Twitterbot/1.0;                                                                            | GET /robots.txt
185.20.4.143    | Mozilla/5.0 (compatible; TweetmemeBot/3.0; +http://tweetmeme.com/)                         | GET /
23.227.176.34   | MetaURI API/2.0 +metauri.com                                                               | GET /
74.6.254.127    | Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp);       | GET /robots.txt

So what jumps out? The twitterbot makes several requests for /robots.txt, but never actually fetches the page itself, which is interesting because there is indeed a prohibition in the supplied /robots.txt file.
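
The prohibition in question is nothing exotic; think of a blanket rule of roughly this shape (illustrative, not the literal file):

User-agent: *
Disallow: /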

A surprise was that both Google and Yahoo seem to follow Twitter links in almost real-time. Though the Yahoo site parsed and honoured /robots.txt, the Google spider seemed to only make HEAD requests - and never actually looked for the content or the robots file.

In addition to this a bunch of hosts from the Amazon EC2 space made requests, which was perhaps not a surprise. Some automated processing, and classification, no doubt.

Anyway beer. It's been a rough weekend.

13 July, 2014 07:08PM

Dimitri John Ledkov

Hacking on launchpadlib

So here is a quick sample of my progress playing around with launchpadlib using lp-shell from lptools:
In [1]: lp
Out[1]: <launchpadlib.launchpad.Launchpad at 0x7f49ecc649b0>

In [2]: lp.distributions
Out[2]: <launchpadlib.launchpad.DistributionSet at 0x7f49ddf0e630>

In [3]: lp.distributions['ubuntu']
Out[3]: <distribution at https://api.launchpad.net/1.0/ubuntu>

In [4]: lp.distributions['ubuntu'].display_name
Out[4]: 'Ubuntu'

In [5]: lp.distributions['ubuntu'].summary
Out[5]: 'Ubuntu is a complete Linux-based operating system, freely available with both community and professional support.'

In [7]: import sys; print(sys.version)
3.4.1 (default, Jun 9 2014, 17:34:49)
[GCC 4.8.3]

There is not much yet, but it's a start. python3 port of launchpadlib is coming soon. It has been attempted a few times before and I am leveraging that work. Porting this stack has proven to be the most difficult python3 port I have ever done. But there is always python-libvirt that still needs porting ;-)

Some of the above is just merge proposals against launchpadlib & lazr.restfulclient, and requires modules not yet packaged in the archive. When trying it out, I'm still getting a lot of run-time asserts and things that haven't been picked up by e.g. pyflakes3 and have not been unit-tested yet.

13 July, 2014 12:32PM by Dimitri John Ledkov (noreply@blogger.com)