August 01, 2014

Raphaël Hertzog

My Free Software Activities in July 2014

This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (548.59 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Distro Tracker

Now that tracker.debian.org is live, people reported bugs (on the new tracker.debian.org pseudo-package that I requested) faster than I could fix them. Still, I spent many, many hours on this project, reviewing submitted patches (thanks to Christophe Siraut, Joseph Herlant, Dimitri John Ledkov, Vincent Bernat, James McCoy, and Andrew Starr-Bochicchio, who all submitted some patches!), fixing bugs, making sure the code works with Django 1.7, and starting the same work for Python 3.

I added a tox.ini so that I can easily run the test suite in all 4 supported environments (created by tox as virtualenvs with the combinations of Django 1.6/1.7 and Python 2.7/3.4).
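For illustration, such a tox.ini could look roughly like the following; the pinned versions and the test command here are assumptions, not the actual file:

[tox]
envlist = py27-django16, py27-django17, py34-django16, py34-django17

[testenv]
commands = {envpython} manage.py test

[testenv:py27-django16]
basepython = python2.7
deps = Django>=1.6,<1.7

[testenv:py27-django17]
basepython = python2.7
deps = Django>=1.7,<1.8

[testenv:py34-django16]
basepython = python3.4
deps = Django>=1.6,<1.7

[testenv:py34-django17]
basepython = python3.4
deps = Django>=1.7,<1.8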

Over the month, the git repository has seen 73 commits; we fixed 16 bugs and other issues that were only reported over IRC in #debian-qa. With the help of Enrico Zini and Martin Zobel, we enabled logging in via sso.debian.org (Debian's official SSO) so that Debian developers don't even have to explicitly create their account.

As usual more help is needed and I’ll gladly answer your questions and review your patches.

Misc packaging work

Publican. I pushed a new upstream release of publican and dropped a useless build-dependency that was plagued by a difficult-to-fix RC bug (#749357 for the curious; I tried to investigate but it needs major work for make 4.x compatibility).

GNOME 3.12. With gnome-shell 3.12 hitting unstable, I had to update gnome-shell-timer (and filed an upstream ticket at the same time), a GNOME Shell extension for countdown timers.

Django 1.7. I packaged python-django 1.7 release candidate 1 in experimental (found a small bug, submitted a ticket with a patch that got quickly merged) and filed 85 bugs against all the reverse dependencies to ask their maintainers to test their package with Django 1.7 (that we want to upload before the freeze obviously). We identified a pain point in upgrade for packages using South and tried to discuss it with upstream, but after closer investigation, none of the packages are really affected. But the problem can hit administrators of non-packaged Django applications.

Misc stuff. I filed a few bugs (#754282 against git-import-orig --uscan, #756319 against wnpp to see if someone would be willing to package loomio), reviewed an updated package for django-ratelimit in #755611, made a non-maintainer upload of mairix (without prior notice) to update the package to a new upstream release and bring it to modern packaging norms (Mako failed to make an upload in 4 years so I just went ahead and did what I would have done if it were mine).

Kali work resulting in Debian contributions

Kali wants to switch from being based on stable to being based on testing, so I tried to set up britney to manage a new kali-rolling repository and encountered some problems that I reported to debian-release. Niels Thykier has been very helpful and even managed to improve britney thanks to the very specific problem that the kali setup triggered.

Since we use reprepro, I wrote a small Python wrapper to transform the HeidiResult file into a set of reprepro commands, and at the same time I filed #756399 to request proper support of heidi files in reprepro. While analyzing britney's excuses file, I also noticed that the Kali mirrors contain many source packages that are useless because they only concern architectures that we don't host (I filed #756523 against reprepro). While trying to build a live image of kali-rolling, I noticed that libdb5.1 and db5.1-util were still marked as priority standard when in fact Debian has already switched to db5.3 and they should thus only be optional (I filed #756623 against ftp.debian.org).
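A minimal sketch of that kind of wrapper follows; the distribution names, the choice of reprepro's copy/copysrc subcommands, and my reading of the one-line-per-package Heidi format are illustrative assumptions rather than the actual script:

#!/usr/bin/python
# Read britney's HeidiResult (one "package version architecture" per line)
# and print the reprepro commands that would pull each package into the
# target distribution. "kali-rolling" and "kali-dev" are example names.

import sys

for line in open(sys.argv[1]):
    fields = line.split()
    if len(fields) < 3:
        continue
    package, version, arch = fields[:3]
    if arch == "source":
        # source packages appear with the pseudo-architecture "source"
        print "reprepro copysrc kali-rolling kali-dev %s %s" % (package, version)
    else:
        print "reprepro copy kali-rolling kali-dev %s" % package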

When doing some upgrade tests from kali (wheezy based) to kali-rolling (jessie based) I noticed some problems that were also affecting Debian Jessie. I filed #756629 against libfile-fcntllock-perl (with a patch), and also #756618 against texlive-base (missing Replaces header). I also pinged Colin Watson on #734946 because I got a spurious base-passwd prompt during upgrade (that was triggered because schroot copied my unstable’s /etc/passwd file in the kali chroot and the package noticed a difference on the shell of all system users).

Thanks

See you next month for a new summary of my activities.

01 August, 2014 09:13PM by Raphaël Hertzog

Ian Donnelly

How-To: Write a Plug-In (Part 1, The Basics)

Hi Everybody!

For the first part of my Plug-In tutorial, I am going to cover the basic overview of an Elektra plug-in. This post just explains the basic format and functions of an Elektra plug-in, specifically a storage plug-in (this will be the focus of my tutorial).

All plug-ins use the same basic interface. This interface consists of five basic functions, elektraPluginOpen, elektraPluginGet, elektraPluginSet, elektraPluginError, and elektraPluginClose. The developer replaces 'Plugin' with the name of their plugin. So in the case of my plugin, the names of these functions would be elektraLineOpen(), elektraLineGet(), elektraLineSet(), elektraLineError(), and elektraLineClose(). Additionally, there is one more function called ELEKTRA_PLUGIN_EXPORT(plugin), where once again 'Plugin' should be replaced with the name of the plug-in, this time in lower-case. So for my line plugin this function would be ELEKTRA_PLUGIN_EXPORT(line).

The KDB relies on the first five functions for interacting with configuration files stored in the key database. Calls for kdbGet() and kdbClose() will call the functions elektraPluginGet() and elektraPluginClose() respectively for the plugin that was used to mount the configuration data. kdbSet() calls elektraPluginSet() but also elektraPluginError() when an error occurs. elektraPluginOpen() is called before the first call to elektraPluginGet() or elektraPluginSet(). These functions serve different purposes that allow the plug-in to work:

  • elektraPluginOpen() is designed to allow each plug-in to do initialization if necessary.
  • elektraPluginGet() is designed to turn information from a configuration file into a usable KeySet; this is technically the only function that is REQUIRED in a plug-in.
  • elektraPluginSet() is designed to store the information from the keyset back into a configuration file.
  • elektraPluginError() is designed to allow proper rollback of operations if needed and is called if any plugin fails during the set operation. This allows exception-safety.
  • elektraPluginClose() is used to free resources that might be required for the plug-in.
  • ELEKTRA_PLUGIN_EXPORT(Plugin) simply lets Elektra know that the plug-in exists and what the name of the above functions are.

Most simply put: most plug-ins consist of five major functions, elektraPluginOpen(), elektraPluginClose(), elektraPluginGet(), elektraPluginSet(), and ELEKTRA_PLUGIN_EXPORT(Plugin).
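Put together as a sketch (signatures paraphrased from the Elektra plugin header of the time; double-check against the current API before copying), the exported interface of my line plugin looks roughly like this:

#include <kdbplugin.h>

int elektraLineOpen(Plugin *handle, Key *errorKey);
int elektraLineClose(Plugin *handle, Key *errorKey);
int elektraLineGet(Plugin *handle, KeySet *returned, Key *parentKey);
int elektraLineSet(Plugin *handle, KeySet *returned, Key *parentKey);
int elektraLineError(Plugin *handle, KeySet *returned, Key *parentKey);

Plugin *ELEKTRA_PLUGIN_EXPORT(line)
{
    /* tell Elektra the plugin's name and which functions implement it */
    return elektraPluginExport("line",
        ELEKTRA_PLUGIN_OPEN,  &elektraLineOpen,
        ELEKTRA_PLUGIN_GET,   &elektraLineGet,
        ELEKTRA_PLUGIN_SET,   &elektraLineSet,
        ELEKTRA_PLUGIN_ERROR, &elektraLineError,
        ELEKTRA_PLUGIN_CLOSE, &elektraLineClose,
        ELEKTRA_PLUGIN_END);
}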

Because remembering all these functions can be cumbersome, we provide a skeleton plugin in order to easily create a new plugin. The skeleton plugin is called “template” and a new plugin can be created by calling the copy-template script. For example for my plugin I called

../../scripts/copy-template line

from within the plugins directory. Afterwards two important things are left to be done:

  1. remove all functions (and their exports) from the plugin that are not needed. For example not every plugin actually makes use of the elektraPluginOpen() function.
  2. provide a basic contract as described above

After these two steps your plugin is ready to be compiled, installed, and mounted for the first time. Have a look at the mount tutorial for details on how to do this.

So Part 1 of the tutorial covers the implementation details for a plug-in. Next up will be Part 2, which will cover the theory behind contracts in Elektra as well as how to implement them.

Sincerely,
Ian S. Donnelly

01 August, 2014 07:19PM by Ian Donnelly

Gunnar Wolf

Wow. Just rejected an editorial offer...

Yes, I've been bragging about the Operating Systems book all over... Today, a colleague handed me a phone call from somebody at Editorial Patria, a well known educational publisher in Mexico. They are looking for material similar to what I wrote, but need the material to be enfocado a competencias — focused on skills, a pedagogic fashion.

I was more than interested, of course. As it currently stands, I am very happy that our book is being used already at three universities in three countries (by the different authors) and have heard other people saying they would recommend it, and of course I'm interested in making our work have as big an impact as possible. Of course, we'd have to modify several aspects of the book to cater to the skills focus... But it would be great to have the book available at commercial bookstores. After all, university editions are never as widely circulated as commercial ones.

I had just one hard requirement for accepting: our work must be distributed under a free license, explicitly allowing photocopies and electronic distribution (I didn't get into the "and modification" part, but I would eventually have gotten there ;-) )

And... Of course, the negotiation immediately fell through. Publishers, this person says, live from selling individual books. She says she was turned down by another university professor, for another subject, this same week.

So, yes, I took the opportunity to explain things as I (and the people who think as I do — fortunately, not so few) see them. Yes, of course, publishers have to make a living. But textbooks are often photocopied as it is. Who buys a book? Whoever needs it. On one hand, if somebody will be using a book throughout a semester and it's reasonably priced (say, up to 3× the cost of photocopies), they will probably buy it because it just works better (it is more comfortable to use and nicer to read).

If a teacher likes the explanation for a particular topic, it should be completely legal for him to distribute photocopies (or digital copies) of the specific material — And quite probably, among the students, more than one will end up appreciating the material enough to go look for the book in the library. And, as I have done throughout my life, if I read (in copies, electronically or in a library) a book I like... Quite probably I will go buy it.

So... Of course, she insisted it was against their corporate policy. I insisted on my explanation. I hope they meet many stubborn teachers refusing to distribute books under non-free licenses. I hope I contributed to making a dent in an industry that must change. Yes, a very very small dent, but one that helps them break free from their obsolete mindset ;-)

(But yes, I don't know how long I will regret not being part of their very nice catalog of science and engineering books ;-) )

01 August, 2014 04:32PM by gwolf

Russell Coker

More BTRFS Fun

I wrote a BTRFS status report yesterday commenting on the uneventful use of BTRFS recently [1].

Early this morning the server that stores my email (which had 93 days uptime) had a filesystem related problem. The root filesystem became read-only and then the kernel message log filled with unrelated messages so there was no record of the problem. I’m now considering setting up rsyslogd to log the kernel messages to a tmpfs filesystem to cover such problems in future. As RAM is so cheap it wouldn’t matter if a few megs of RAM were wasted by that in normal operation if it allowed me to extract useful data when something goes really wrong. It’s really annoying to have a system in a state where I can login as root but not find out what went wrong.
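A sketch of that setup (the mount point, size, and file names here are assumptions): mount a small tmpfs and add an rsyslog rule that writes kernel messages to it, where the leading "-" disables syncing after each write:

# /etc/fstab: a small tmpfs for volatile logs
tmpfs /var/log/volatile tmpfs size=8M,mode=0755 0 0

# /etc/rsyslog.d/kern-tmpfs.conf: keep kernel messages on the tmpfs
kern.* -/var/log/volatile/kern.log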

After that I tried 2 kernels in the 3.14 series, both of which had kernel BUG assertions related to Xen networking and failed to network correctly; I filed Debian Bug #756714. Fortunately they at least had enough uptime for me to run a filesystem scrub, which reported no errors.

Then I reverted to kernel 3.13.10, but the reboot to apply that kernel change failed. Systemd was unable to umount the root filesystem (maybe because of a problem with Xen) and then hung the system instead of rebooting; I filed Debian Bug #756725. I believe that if asked to reboot a system there is no benefit in hanging the system with no user space processes accessible. Here are some useful things that systemd could have done:

  1. Just reboot without umounting (like “reboot -nf” does).
  2. Pause for some reasonable amount of time to give the sysadmin a possibility of seeing the error and then rebooting.
  3. Go back to a regular runlevel, starting daemons like sshd.
  4. Offer a login prompt to allow the sysadmin to login as root and diagnose the problem.

Options 1, 2, and 3 would have saved me a bit of driving. Option 4 would have allowed me to at least diagnose the problem (which might be worth the drive).

Having a system on the other side of the city which has no remote console access just hang after a reboot command is not useful, it would be near the top of the list of things I don’t want to happen in that situation. The best thing I can say about systemd’s operation in this regard is that it didn’t make the server catch fire.

Now all I really know is that 3.14 kernels won't work for my server, 3.13 will cause problems that no-one can diagnose due to lack of data, and I'm now going to wait for it to fail again. As an aside, the server has ECC RAM and its hardware is known to be good, so I'm sure that BTRFS is at fault.

01 August, 2014 10:41AM by etbe

July 31, 2014

Craig Small

Linux Capabilities

I was recently updating some code that uses fping. Initially it used exec() with output redirected to a temporary file, but I changed it to use popen. While it had been a while since I'd done this sort of thing, I did recall there was an issue with running popen in a setuid binary. I later found it is mainly setuid scripts that are very problematic, and there are good reasons why you don't do this.

Anyhow, the program worked fine which surprised me. Was fping setuid root to get the raw socket?

$ ls -l /usr/bin/fping
-rwxr-xr-x 1 root root 31464 May  6 21:42 /usr/bin/fping

It wasn't, which at first had me thinking "ok, so that's why popen is happy". The way that fping and other such programs work is that they bind to a raw socket. This socket sits below the normal types of sockets, such as the ones used for TCP and UDP, and normal users cannot use them by default. So how did fping work its magic and get access to this socket? It used Capabilities.

Previously, getting privileged features had a big problem: it was an all or nothing thing. You want access to a raw socket? Sure, be setuid, but that means you also could, for example, read any file on the system or set passwords. Capabilities provide a way of giving programs some better level of access, but not a blank cheque.

The tool getcap is the way of determining what capabilities are found on a file. These capabilities are attributes on the file which, when the file is run, turn into capabilities or extra permissions. fping has the capability cap_net_raw+ep applied to it. This gives access to the RAW and PACKET sockets which is what fping needs. The +ep after the capability name means it is an Effective and Permitted capability, which describes what happens with child processes and dropping privileges.
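For example (the output format varies slightly between libcap versions):

$ getcap /usr/bin/fping
/usr/bin/fping = cap_net_raw+ep

Setting it by hand uses the companion setcap tool:

$ sudo setcap cap_net_raw+ep /usr/bin/fping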

I hadn't seen these Capabilities before. They are a nice way to give your programs the access they need, while limiting the risk of something going wrong and having a rogue program running as root.

31 July, 2014 12:58PM by Craig Small

Steve Kemp

Draft message - 31 July 2014

Yesterday I spent a while looking at the Debian code search site, an enormously useful service allowing you to search the code contained in the Debian archives.

The end result was three trivial bug reports:

#756565 - lives

Insecure usage of temporary files.

A CVE-identifier should be requested.

#756566 - libxml-dt-perl

Insecure usage of temporary files.

A CVE-identifier has been requested by Salvatore Bonaccorso, and will be added to my security log once allocated.

#756600 - xcfa

Insecure usage of temporary files.

A CVE-identifier should be requested.

Finding these bugs was a simple matter of using the code-search to look for patterns like "system.*>.*%2Ftmp".

Perhaps tomorrow somebody else would like to have a go at looking for backtick-related operations ("`"), or the usage of popen.

Tomorrow I will personally be swimming in a loch, which is more fun than wading in code..

31 July, 2014 12:54PM

Russell Coker

BTRFS Status July 2014

My last BTRFS status report was in April [1], it wasn’t the most positive report with data corruption and system hangs. Hacker News has a brief discussion of BTRFS which includes the statement “Russell Coker’s reports of his experiences with BTRFS give me the screaming heebie-jeebies, no matter how up-beat and positive he stays about it” [2] (that’s one of my favorite comments about my blog).

Since April things have worked better. Linux kernel 3.14 solves the worst problems I had with 3.13 and it's generally doing everything I want it to do. I now have cron jobs making snapshots as often as I wish (as frequently as every 15 minutes on some systems), automatically removing snapshots (removing 500+ snapshots at once doesn't hang the system), balancing, and scrubbing. The fact that I can now run a filesystem balance (which is a type of defragment operation for BTRFS that frees some "chunks") from a cron job and expect the system not to hang means that I don't run out of metadata chunk space. I expect that running out of metadata space can still cause filesystem deadlocks, given the lack of reports on the BTRFS mailing list of fixes in that regard, but as long as balance works well we can work around that.
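As an illustration only (the schedules, subvolume paths, and balance filter values below are assumptions, not my actual jobs), such cron.d entries might look like:

# snapshot the root subvolume every 15 minutes
*/15 * * * * root btrfs subvolume snapshot -r / /.snapshots/root-$(date +\%Y\%m\%d-\%H\%M)

# weekly balance that only rewrites mostly-empty chunks
30 3 * * 0 root btrfs balance start -dusage=50 -musage=50 /

# weekly scrub
30 4 * * 0 root btrfs scrub start -B /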

My main workstation now has 35 days of uptime and my home server has 90 days of uptime. Also the server that stores my email now has 93 days uptime even though it’s running Linux kernel 3.13.10. I am rather nervous about the server running 3.13.10 because in my experience every kernel before 3.14.1 had BTRFS problems that would cause system hangs. I don’t want a server that’s an hour’s drive away to hang…

The server that runs my email is using kernel 3.13.10 because when I briefly tried a 3.14 kernel it didn’t work reliably with the Xen kernel 4.1 from Debian/Wheezy and I had a choice of using the Xen kernel 4.3 from Debian/Unstable to match the Linux kernel or use an earlier Linux kernel. I have a couple of Xen servers running Debian/Unstable for test purposes which are working well so I may upgrade my mail server to the latest Xen and Linux kernels from Unstable in the near future. But for the moment I’m just not doing many snapshots and never running a filesystem scrub on that server.

Scrubbing

In kernel 3.14 scrub is working reliably for me and I have cron jobs to scrub filesystems on every system running that kernel. So far I’ve never seen it report an error on a system that matters to me but I expect that it will happen eventually.

The paper "An Analysis of Data Corruption in the Storage Stack" from the University of Wisconsin (based on NetApp data) [3] shows that "nearline" disks (IE any disks I can afford) have an incidence of checksum errors (occasions when the disk returns bad data but claims it to be good) of about 0.42% per year. There are 18 disks running in systems I personally care about (as opposed to systems where I am paid to care), so with a 0.42% probability of each disk experiencing data corruption per year that would give a 7.3% probability of having such corruption on at least one disk in any year, and a greater than 50% chance that it has already happened over the last 10 years. Of the 18 disks in question, 15 are currently running BTRFS. Of the 15 running BTRFS, 10 are scrubbed regularly (the other 5 are in systems that don't run 24*7 or in the system running kernel 3.13.10).
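Spelling out the arithmetic (assuming the disks fail independently):

1 - (1 - 0.0042)^18 ≈ 0.073, i.e. about a 7.3% chance of at least one affected disk per year

1 - (1 - 0.073)^10 ≈ 0.53, i.e. a greater than 50% chance over 10 years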

Newer Kernels

The discussion on the BTRFS mailing list about kernel 3.15 is mostly about hangs. This is correlated with some changes to improve performance so I presume that it has exposed race conditions. Based on those discussions I haven’t felt inclined to run a 3.15 kernel. As the developers already have some good bug reports I don’t think that I could provide any benefit by doing more testing at this time. I think that there would be no benefit to me personally or the Linux community in testing 3.15.

I don’t have a personal interest in RAID-5 or RAID-6. The only systems I run that have more data than will fit on a RAID-1 array of cheap SATA disks are ones that I am paid to run – and they are running ZFS. So the ongoing development of RAID-5 and RAID-6 code isn’t an incentive for me to run newer kernels. Eventually I’ll test out RAID-6 code, but at the moment I don’t think they need more bug reports in this area.

I don't have a great personal interest in filesystem performance at this time. There are some serious BTRFS performance issues. One problem is that a filesystem balance and subtree removal seem to take excessive amounts of CPU time. Another is that there isn't much support for balancing IO to multiple devices (in RAID-1 every process has all its read requests sent to one device). For large-scale use of a filesystem these are significant problems. But when you have basic requirements (such as a mail server for dozens of users or a personal workstation with a quad-core CPU and fast SSD storage) it doesn't make much difference. Currently all of my systems which use BTRFS have storage hardware that exceeds the system performance requirements by such a large margin that nothing other than installing Debian packages can slow the system down. So while there are performance improvements in newer versions of the BTRFS kernel code, that isn't an incentive for me to upgrade.

It’s just been announced that Debian/Jessie will use Linux 3.16, so I guess I’ll have to test that a bit for the benefit of Debian users. I am concerned that 3.16 won’t be stable enough for typical users at the time that Jessie is released.

31 July, 2014 10:45AM by etbe

Wouter Verhelst

Using your Belgian eID with SSH mark two

Several years ago, I blogged about how to use a Belgian electronic ID card with SSH. I never really used it myself, but was interested in figuring out if it would still work.

The good news is that since then, you don't need to recompile OpenSSH anymore to get PKCS#11 support; this is now compiled in by default.

The slightly bad news is that there will be some more typing. Rather than entering ssh-add -D 0 (to access the PKCS#11 certificate in slot 0), you should now enter something along the lines of ssh-add -s /usr/lib/libbeidpkcs11.so.0. This will ask for your passphrase, but it isn't necessary to enter the correct pin code at this point in time. The first time you try to log on, you'll get a standard beid dialog box where you should enter your pin code; this will then work. The next time, you'll be logged on and you can access servers without having to enter a pin code.
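A sample session looks roughly like this (the messages are paraphrased and may differ between OpenSSH versions):

$ ssh-add -s /usr/lib/libbeidpkcs11.so.0
Enter passphrase for PKCS#11:
Card added: /usr/lib/libbeidpkcs11.so.0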

The worse news is that there seems to be a bug in ssh-agent, making it impossible to unload a PKCS#11 library. Doing ssh-add -D will remove your keys from the agent; the next time you try to add them again, however, ssh-agent will simply report SSH_AGENT_FAILURE. I suspect the dlopen()ed modules aren't being unloaded when the keys are removed.

Unfortunately, the same (or at least, a similar) bug appears to occur when one removes the card from the cardreader.

As such, I don't currently recommend trying to use this.

Update: fix command-line options to ssh-add invocation above.

31 July, 2014 10:28AM

Petter Reinholdtsen

Debian Edu interview: Bernd Zeitzen

The complete and free "out of the box" software solution for schools, Debian Edu / Skolelinux, is used quite a lot in Germany, and one of the people involved is Bernd Zeitzen, who shows up on the project mailing lists from time to time with interesting questions and tips on how to adjust the setup. I managed to interview him this summer.

Who are you, and how do you spend your days?

My name is Bernd Zeitzen and I'm married to Hedda, a self-employed physiotherapist. My former profession is tool maker, but I haven't worked in this job for 30 years. 30 years ago I started to support my wife and became her office worker, and a few years later the administrator of a small computer network, today based on Ubuntu Server (Samba, OpenVPN). For her daily work she has to use Windows desktops because the software she needs to organize her business only works with Windows. :-(

In 1988 we started with one PC and DOS; then I learned to use Windows 98, 2000, XP, …, 8, Ubuntu, MacOSX. Today we are running a Linux server with 6 Windows clients and 10 people (a teacher of children with special needs, a speech therapist, an occupational therapist, a psychologist, and office workers) using our Samba shares via OpenVPN to work with the documentation of our patients.

How did you get in contact with the Skolelinux / Debian Edu project?

Two years ago a friend of mine asked me if I wanted a job at his school (Gymnasium Harsewinkel). They had started with Skolelinux / Debian Edu and were looking for people to support the teachers using the software and the network, and to teach the pupils to increase their computer skills in optional lessons. I'm spending 4-6 hours a week on this job.

What do you see as the advantages of Skolelinux / Debian Edu?

The independence.

First: Every person is allowed to use, share and develop the software. Even if you are poor, you are allowed to use the software included in Skolelinux/Debian Edu and all the other Free Software.

Second: The software runs on old machines, and this gives us the possibility to recycle computers weeded out from offices. The servers and desktops have been running for more than two years and they are working reliably.

We have two servers (one tjener and one terminal server), 45 workstations in three classrooms and seven laptops as a mobile solution for all classrooms. These machines all boot from the terminal server. At the moment we are installing 30 laptops as mobile workstations. Then the pupils will have the possibility to work with these machines in their classrooms. Internet access is provided by a WLAN router connected to the school's network. This is all done without a dedicated system administrator or a computer science teacher.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Teachers and pupils are Windows users. <Irony on> And Linux isn't cool. It's software for freaks using the command line. <Irony off> They don't realize the stability of the system.

Which free software do you use daily?

Firefox, Thunderbird, LibreOffice, Ubuntu Server 12.04 (Samba, Apache, MySQL, Joomla!, … and Skolelinux / Debian Edu)

Which strategy do you believe is the right one to use to get schools to use free software?

In Germany we have this situation: every school is free to decide which software they want to use. This decision is influenced by teachers who learned to use Windows and MS Office. They buy a PC with Windows preinstalled and an additional trial version of MS Office. They don't know about the possibility to use Free Software instead. Another problem is the publishers of school books. They develop their software, bundled with the school books, for Windows.

31 July, 2014 06:30AM

July 30, 2014

Bits from Debian

Jessie will ship Linux 3.16

The Debian Linux kernel team has discussed and chosen the kernel version to use as a basis for Debian 8 'jessie'.

This will be Linux 3.16, due to be released in early August. Release candidates for Linux 3.16 are already packaged and available in the experimental suite.

If you maintain a package that is closely bound to the kernel version - a kernel module or a userland application that depends on an unstable API - please ensure that it is compatible with Linux 3.16 prior to the freeze date (5th November, 2014). Incompatible packages are very likely to be removed from testing and not included in 'jessie'.

  1. My kernel module package doesn't build on 3.16 and upstream is not interested in supporting this version. What can I do?
    The kernel team might be able to help you with forward-porting, but also try Linux Kernel Newbies or the mailing list(s) for the relevant kernel subsystem(s).

  2. There's an important new kernel feature that ought to go into jessie, but it won't be in 3.16. Can you still add it?
    Maybe - sometimes this is easy and sometimes it's too disruptive to the rest of the kernel. Please contact the team on the debian-kernel mailing list or by opening a wishlist bug.

  3. Will Linux 3.16 get long term support from upstream?
    The Linux 3.16-stable branch will not be maintained as a longterm branch at kernel.org. However, the Ubuntu kernel team will continue to maintain that branch, following the same rules for acceptance and review, until around April 2016. Ben Hutchings is planning to continue maintenance from then until the end of regular support for 'jessie'.

30 July, 2014 09:10PM by Ana Guerrero Lopez

Konstantinos Margaritis

SIMD book, update 3! Addition/Subtraction mostly finished

Finally. Apologies for the delay, but it's been a busy month. This time I will hold true to my word and upload a PDF for people to see (attached to this page).

So, what's new? Here is a list of things done:

* Finished ALL addition/subtraction related instructions for all engines and major derivatives (SSE*/AVX, VMX/VSX, NEON/armv8 NEON), with diagrams (these were the reason it has taken so long).
* Reorganized the structure (split the book into Parts I/II; the instruction index will be in Part II, while Part I will carry the design analysis of each SIMD engine).
* Added a TOC/index.
* So far, with just the Addition/Subtraction chapter and the rest empty sections, it has reached 175 pages (B5; again, I'm not fixed on the size, it might actually change)! My estimate is that the whole book will surpass 800 pages with everything included.

TODO:

30 July, 2014 07:08PM by markos

Steinar H. Gunderson

Blehnovo, follow-up

A few weeks back, I posted about my adventures in getting Lenovo Switzerland/Germany to repair my laptop after I dropped it onto the floor and cracked the screen. (I'm sure they would be thrilled to hear that it sits next to my Wi-Fi on Linux rant, which got 39000 readers after reaching the front page of Hacker News and then /r/linux.) Now that I'm leaving Norway tomorrow (via Assembly 2014 and then on to Zürich), I thought it would be a good time to wrap up the story.

In case you don't remember, we last left the situation July 7th, when I had called Lenovo Norway, who accepted my laptop for repair pretty much immediately after Lenovo Germany had staunchly refused for over a month. What happened next:

July 8th: DHL comes and picks up my laptop.

July 10th: I get an email saying that my laptop is repaired, and will be returned to me the next day.

July 11th: DHL calls me and asks if I'm at home. I say no, I'm at Solskogen, can they deliver it there instead? They redirect the package, no problems. Just as they call the second time, I'm away to shower after a run, but I ask them to just deliver it to someone there. A friend signs for it.

I start up the laptop, and woo, the screen works again. But the EFI menu has been tampered with, probably for diagnostic purposes, so it no longer boots Linux (but rather has a Windows entry that doesn't work). I spend half an hour fiddling with d-i to get it back again. During this process, I notice something strange -- my beautiful 1920x1080 screen has been replaced with a much less beautiful 1366x768 one! Also, if I lift the laptop in the wrong spot (at first I think it's related to the monitor bezel, but later understand that it's not), it promptly dies and spews garbage all over the screen. In any case, it's good enough to fix my backup job (which hadn't been properly running for a while--boo) and thus get the source code I've been working on out.

I call Lenovo Norway. They are confused. They try to figure out what happened, but are having problems finding the right internal part number for the 1920x1080 screen. They say they'll call me back.

July 16th: I call Lenovo Norway, wondering why they haven't called me back. I also send them a video showing the issue. The guy says he'll need to check with his colleague and will contact me back, although he can't guarantee anything will happen the same day. They don't call me back.

July 17th: A guy from InfoCare calls and wonders if I'm available. Turns out he's been sent out to fix my laptop. (He doesn't speak Norwegian, but he speaks English, so no problem.) He arrives at my house, and replaces the 1366x768 panel with a 1920x1080 one. I can't reproduce the crash issue offhand, but show him the video. He tries to figure out what the crashes are due to, but can't immediately see anything being loose.

I call Lenovo Norway again, and say it's great they've fixed my panel, but that the issue still persists. The guy doesn't own my case but has seen the video. After some discussion and thinking, they can't remotely diagnose it, and we agree that I'll send it in again, because they seemingly cannot send a full array of parts with InfoCare (so they'd potentially make five trips, which I understand is not so cool for them).

July 18th: DHL picks up the laptop again. The guy looks a bit bewildered.

July 23rd: The laptop arrives at Lenovo. I get an email saying it's fixed.

July 24th: DHL arrives with the laptop. The repair sheet says they have replaced the “primary memory” (which I suppose means RAM). I try to trigger the issue. It's gone!

So, that's the status. I have a perfectly working laptop, with a gorgeous 1920x1080 display. I'm basically in the state I thought I would be over the weekend when I dropped it May 30th; it just took almost two months instead of one working day.

So, is my warranty correctly registered in the system now? I have no idea, because while Lenovo Norway was busy repairing my laptop three times, Lenovo Germany appears to have done nothing at all (not even closed my case, because that should have triggered an email). So as far as I know, there's still an open case on me somewhere, where they try to figure out why system A shows I have accident coverage and system B doesn't.

30 July, 2014 03:00PM

July 29, 2014

Ian Campbell

Debian Installer ARM64 Dailies

It's taken a while but all of the pieces are finally in place to run successfully through Debian Installer on ARM64 using the Debian ARM64 port.

So I'm now running nightly builds locally and uploading them to http://www.hellion.org.uk/debian/didaily/arm64/.

If you have CACert in your CA roots then you might prefer the slightly more secure version.

Hopefully before too long I can arrange to have them building on one of the project machines and uploaded to somewhere a little more formal like people.d.o or even the regular Debian Installer dailies site. This will have to do for now though.

Warning

The arm64 port is currently hosted on Debian Ports which only supports the unstable "sid" distribution. This means that installation can be a bit of a moving target and sometimes fails to download various installer components or installation packages. Mostly it's just a case of waiting for the buildd and/or archive to catch up. You have been warned!

Installing in a Xen guest

If you are lucky enough to have access to some 64-bit ARM hardware (such as the APM X-Gene, see wiki.xen.org for setup instructions) then installing Debian as a guest is pretty straightforward.

I suppose if you had lots of time (and I do mean lots) you could also install under Xen running on the Foundation or Fast Model. I wouldn't recommend it though.

First download the installer kernel and ramdisk onto your dom0 filesystem (e.g. to /root/didaily/arm64).

Second create a suitable guest config file such as:

name = "debian-installer"
disk = ["phy:/dev/LVM/debian,xvda,rw"]
vif = [ '' ] 
memory = 512
kernel = "/root/didaily/arm64/vmlinuz"
ramdisk= "/root/didaily/arm64/initrd.gz"
extra = "console=hvc0 -- "

In this example I'm installing to a raw logical volume /dev/LVM/debian. You might also want to use randmac to generate a permanent MAC address for the Ethernet device (specified as vif = ['mac=xx:xx:xx:xx:xx:xx']).

Once that is done you can start the guest with:

xl create -c cfg

From here you'll be in the installer and things carry on as usual. You'll need to manually point it to ftp.debian-ports.org as the mirror, or you can preseed by appending to the extra line in the cfg like so:

mirror/country=manual mirror/http/hostname=ftp.debian-ports.org mirror/http/directory=/debian

Apart from that there will be a warning about not knowing how to set up the bootloader but that is normal for now.

Installing in Qemu

To do this you will need a version of http://www.qemu.org which supports qemu-system-aarch64. The latest release doesn't yet, so I've been using v2.1.0-rc3 (it seems upstream are now up to -rc5). Once qemu is built and installed and the installer kernel and ramdisk have been downloaded to $DI you can start with:

qemu-system-aarch64 -M virt -cpu cortex-a57 \
    -kernel $DI/vmlinuz -initrd $DI/initrd.gz \
    -append "console=ttyAMA0 -- " \
    -serial stdio -nographic --monitor none \
    -drive file=rootfs.qcow2,if=none,id=blk,format=qcow2 -device virtio-blk-device,drive=blk \
    -net user,vlan=0 -device virtio-net-device,vlan=0

That's using a qcow2 image for the rootfs; I think I created it with something like:

qemu-img create -f qcow2 rootfs.qcow2 4G

Once started installation proceeds much like normal. As with Xen you will need to either point it at the debian-ports archive by hand or preseed by adding to the -append line and the warning about no bootloader configuration is expected.

Installing on real hardware

Someone should probably try this ;-).

29 July, 2014 07:36PM

Daniel Pocock

Pruning Syslog entries from MongoDB

I previously announced the availability of rsyslog+MongoDB+LogAnalyzer in Debian wheezy-backports. This latest rsyslog with MongoDB storage support is also available for Ubuntu and Fedora users in one way or another.

Just one thing was missing: a flexible way to prune the database. LogAnalyzer provides a very basic pruning script that simply purges all records over a certain age. The script hasn't been adapted to work within the package layout. It is written in PHP, which may not be ideal for people who don't actually want LogAnalyzer on their Syslog/MongoDB host.

Now there is a convenient solution: I've just contributed a very trivial Python script for selectively pruning the records.

Thanks to Python syntax and the PyMongo client, it is extremely concise: in fact, here is the full script:

#!/usr/bin/python

import syslog
import datetime
from pymongo import Connection

# It assumes we use the default database name 'logs' and collection 'syslog'
# in the rsyslog configuration.

with Connection() as client:
    db = client.logs
    table = db.syslog
    #print "Initial count: %d" % table.count()
    today = datetime.datetime.today()

    # remove ANY record older than 5 weeks except mail.info
    t = today - datetime.timedelta(weeks=5)
    table.remove({"time":{ "$lt": t }, "syslog_fac": { "$ne" : syslog.LOG_MAIL }})

    # remove any debug record older than 7 days
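    # (the truncated field name "syslog_sever" below appears to be intentional:
    # it matches the severity field created by the rsyslog/LogAnalyzer template)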
    t = today - datetime.timedelta(days=7)
    table.remove({"time":{ "$lt": t }, "syslog_sever": syslog.LOG_DEBUG})

    #print "Final count: %d" % table.count()

Just put it in /usr/local/bin and run it daily from cron.

Customization

Just adapt the table.remove statements as required. See the PyMongo tutorial for a very basic introduction to the query syntax and full details in the MongoDB query operator reference for creating more elaborate pruning rules.
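For instance, to drop messages from one noisy daemon sooner than everything else (the syslog_tag field name and the tag value here are assumptions for illustration; check them against your own template):

# keep only 3 days of messages from a hypothetical chatty daemon
t = today - datetime.timedelta(days=3)
table.remove({"time": {"$lt": t}, "syslog_tag": "chatty-daemon:"})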

Potential improvements

  • Indexing the columns used in the queries (see the sketch after this list)
  • Logging progress and stats to Syslog
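For the indexing point, a minimal PyMongo sketch; the column choice simply mirrors the filters in the script above and is an assumption:

# indexes matching the pruning queries' filter columns
table.ensure_index([("time", 1), ("syslog_fac", 1)])
table.ensure_index([("time", 1), ("syslog_sever", 1)])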


LogAnalyzer using a database backend such as MongoDB is very easy to set up and much faster than working with text-based log files.

29 July, 2014 06:27PM by Daniel.Pocock

Gunnar Wolf

Editorial process starting in 3... 2... 1...

Yay!

Today I finally submitted our book, Fundamentos de Sistemas Operativos, to the Editorial Department of our institute. Of course, I'm not naïve enough to assume there won't be a heavy editorial phase, but I'm more than eager to dive into it... And have the book printed in maybe two months' time!

Of course, this book is to be published under a free license (CC-BY-SA). I'm talking it over with the coauthors, and we are about to push the Git repository to a public location, as we believe the source for the text and figures can also be of interest to others.

The book itself (as I've already boasted about here :-} ) is available (somewhat as a preprint) for download.

[update] Talked it over with the coauthors, and we finally have a public repository! Clone it from:

https://github.com/gwolf/sistop.git

Or just browse it from Github's web interface.

29 July, 2014 06:09PM by gwolf

Christian Perrier

Developers per country (July 2014)

It is time again for my annual report on the number of developers per country.

This is now the sixth edition of this report. Former editions:

So, here we are with the July 2014 version, sorted by the ratio of *active* developers per million population for each country.

Act: number of active developers
Dev: total number of developers
A/M: number of active devels per million pop.
D/M: number of devels per million pop.
2009: rank in 2009
2010: rank in 2010
2011: rank in 2011 (June)
2012: rank in 2012 (June)
2013: rank in 2013 (July)
2014: rank now
Code  Name            Population   Act  Dev   A/M   D/M  2009 2010 2011 2012 2013 2014
fi    Finland            5259250    19   31  3,61  5,89     1    1    1    1    1    1
ie    Ireland            4670976    13   17  2,78  3,64    13    9    6    2    2    2
nz    New Zealand        4331600    11   15  2,54  3,46     4    3    5    7    7    3 *
mq    Martinique          396404     1    1  2,52  2,52     -    -    3    4    4    4
se    Sweden             9088728    22   37  2,42  4,07     3    6    7    5    5    5
ch    Switzerland        7870134    19   29  2,41  3,68     2    2    2    3    3    6 *
no    Norway             4973029    11   14  2,21  2,82     5    4    4    6    6    7 *
at    Austria            8217280    18   29  2,19  3,53     6    8   10   10   10    8 *
de    Germany           81471834   164  235  2,01  2,88     7    7    9    9    8    9 *
lu    Luxemburg           503302     1    1  1,99  1,99     8    5    8    8    9   10 *
fr    France            65350000   101  131  1,55  2       12   12   11   11   11   11
au    Australia         22607571    32   60  1,42  2,65     9   10   12   12   12   12
be    Belgium           11071483    14   17  1,26  1,54    10   11   13   13   13   13
uk    United-Kingdom    62698362    77  118  1,23  1,88    14   14   14   14   14   14
nl    Netherlands       16728091    18   40  1,08  2,39    11   13   15   15   15   15
ca    Canada            33476688    34   63  1,02  1,88    15   15   17   16   16   16
dk    Denmark            5529888     5   10  0,9   1,81    17   17   16   17   17   17
es    Spain             46754784    34   56  0,73  1,2     16   16   19   18   18   18
it    Italy             59464644    36   52  0,61  0,87    23   22   22   19   19   19
hu    Hungary           10076062     6   12  0,6   1,19    18   25   26   20   24   20 *
cz    Czech Rep         10190213     6    6  0,59  0,59    21   20   21   21   20   21 *
us    USA              313232044   175  382  0,56  1,22    19   21   25   24   22   22
il    Israel             7740900     4    6  0,52  0,78    24   24   24   25   23   23
hr    Croatia            4290612     2    2  0,47  0,47    20   18   18   26   25   24 *
lv    Latvia             2204708     1    1  0,45  0,45    26   26   27   27   26   25 *
bg    Bulgaria           7364570     3    3  0,41  0,41    25   23   23   23   27   26 *
sg    Singapore          5183700     2    2  0,39  0,39     -    -    -   33   33   27 *
uy    Uruguay            3477778     1    2  0,29  0,58    22   27   28   28   28   28
pl    Poland            38441588    11   15  0,29  0,39    29   29   30   30   30   29 *
jp    Japan            127078679    36   52  0,28  0,41    30   28   29   29   29   30 *
lt    Lithuania          3535547     1    1  0,28  0,28    28   19   20   22   21   31 *
gr    Greece            10787690     3    4  0,28  0,37    33   38   34   35   35   32 *
cr    Costa Rica         4301712     1    1  0,23  0,23    31   30   31   31   31   33 *
by    Belarus            9577552     2    2  0,21  0,21    35   36   39   39   32   34 *
ar    Argentina         40677348     8   10  0,2   0,25    34   33   35   32   37   35 *
pt    Portugal          10561614     2    4  0,19  0,38    27   32   32   34   34   36 *
sk    Slovakia           5477038     1    1  0,18  0,18    32   31   33   36   36   37 *
rs    Serbia             7186862     1    1  0,14  0,14     -    -    -    -   38   38
tw    Taiwan            23040040     3    3  0,13  0,13    37   34   37   37   39   39
br    Brazil           192376496    18   21  0,09  0,11    36   35   38   38   40   40
cu    Cuba              11241161     1    1  0,09  0,09     -   38   41   41   41   41
co    Colombia          45566856     4    5  0,09  0,11    41   44   46   47   46   42 *
kr    South Korea       48754657     4    6  0,08  0,12    39   39   42   42   42   43 *
gt    Guatemala         13824463     1    1  0,07  0,07     -    -    -    -   43   44 *
ec    Ecuador           15007343     1    1  0,07  0,07     -   40   43   43   45   45
cl    Chile             16746491     1    2  0,06  0,12    42   41   44   44   47   46 *
za    South Africa      50590000     3   10  0,06  0,2     38   48   48   48   48   47 *
ru    Russia           143030106     8    9  0,06  0,06    43   42   47   45   49   48 *
mg    Madagascar        21281844     1    1  0,05  0,05    44   37   40   40   50   49 *
ro    Romania           21904551     1    2  0,05  0,09    45   43   45   46   51   50 *
ve    Venezuela         28047938     1    1  0,04  0,04    40   45   50   49   44   51 *
my    Malaysia          28250000     1    1  0,04  0,04     -    -   49   50   52   52
pe    Peru              29907003     1    1  0,03  0,03    46   46   51   51   53   53
tr    Turkey            74724269     2    2  0,03  0,03    47   47   52   52   54   54
ua    Ukraine           45134707     1    1  0,02  0,02    48   53   58   59   55   55
th    Thailand          66720153     1    2  0,01  0,03    50   50   54   54   56   56
eg    Egypt             80081093     1    3  0,01  0,04    51   51   55   55   57   57
mx    Mexico           112336538     1    1  0,01  0,01    49   49   53   53   58   58
cn    China           1344413526    10   14  0,01  0,01    53   53   57   56   59   59
in    India           1210193422     8    9  0,01  0,01    52   52   56   57   60   60
sv    El Salvador        7066403     0    1  0     0,14     -    -   36   58   61   61

Total                                969 1561 (62,08% of developers are active)

(A trailing * marks countries whose rank changed since last year; "-" means the country had no ranked developers that year.)

A few interesting facts:
  • New Zealand bumps from rank 7 to rank 3, thanks to one new active developer
  • Switzerland loses one developer and goes down to rank 6
  • Norway also slightly goes down by losing one developer
  • With two more developers, Austria climbs up to rank 8 and overtakes Germany...;-)
  • Hungary climbs a little bit by gaining one developer
  • Singapore doubles its number of developers from 1 to 2 and bumps from 33 to 27
  • One rank up too for Poland that gained one developer
  • Down to rank 31 for Lithuania by losing one developer
  • Up to rank 32 for Greece with 4 developers instead of 3
  • Argentina goes up by having two more developers (it lost 2 last year)
  • Up from 46 to 42 for Colombia by winning one more developer
  • One more developer and Russia climbs from 49 to 48
  • One less for Venezuela that has only one developer left...:-(
  • No new country this year. Less movement towards "the universal OS"?
  • We have 12 more active Debian developers and 26 more developers overall. Less progression than last year
  • The ratio of active developers is nearly stable, though slightly decreasing

29 July, 2014 04:34PM

Russell Coker

Android Screen Saving

Just over a year ago I bought a Samsung Galaxy Note 2 [1]. About 3 months ago I noticed that some of the Ingress menus had burned into the screen. Back in ancient computer times there were "screen saver" programs that blanked the screen to avoid this, then the "screen saver" programs transitioned to displaying a variety of fancy graphics which didn't really fulfill the purpose of saving the screen. With LCD screens I have the impression that screen burn wasn't an issue, but now with modern phones we have LED displays which have the problem again.

Unfortunately there doesn't seem to be a free screen-saver program for Android in the Google Play store. While I can turn the screen off entirely, there are some apps such as Ingress that I'd like to keep running while the screen is off or greatly dimmed. Now I sometimes pull the notification menu down when I'm going to leave Ingress idle for a while; this doesn't stop the screen burning, but it does cause different parts to burn, which alleviates the problem.

It would be nice if apps were designed to alleviate this. A long running app should have an option to change the color of its menus; it would be ideal to randomly change the color on startup. If the common menus such as the "COMM" menu would appear in either red, green, or blue (the 3 primary colors of light) in a ratio according to the tendency to burn (blue burns fastest so should display least) then it probably wouldn't cause noticeable screen burn after 9 months. The next thing that they could do is to slightly vary the position of the menus; instead of having a thin line that's strongly burned into the screen there would be a fat line lightly burned in, which should be easier to ignore.

It's good when apps have an option of a "dark" theme; less light coming from the screen should reduce battery use and screen burn. A dark theme should be at least the default, and probably mandatory, for long running apps; a dark theme is fortunately the only option for Ingress.

I am a little disappointed with my phone. I’m not the most intensive Ingress player so I think that the screen should have lasted for more than 9 months before being obviously burned.

29 July, 2014 12:28PM by etbe

Happiness and Lecture Questions

I just attended a lecture about happiness comparing Australia and India at the Australia India Institute [1]. The lecture was interesting, but the "questions" were so bad that they make a good case for entirely banning questions from public lectures. Based on this and other lectures I've attended, I've written a document about how to recognise worthless questions and cut them off early [2].

As you might expect from a lecture on happiness there were plenty of stupid comments from the audience about depression, as if happiness is merely the absence of depression.

Then they got onto stupidity about suicide. One "question" claimed that Australia has a high suicide rate; Wikipedia however places Australia 49th out of 110 countries, which means Australia is slightly above the median for suicide rates per country. Given some of the dubious statistics in the list (for example the countries claiming to have no suicides and the low numbers reported by some countries with extreme religious policies) I don't think we can be sure that Australia would be above the median if we had better statistics. Another "question" claimed that Sweden had the highest suicide rate in Europe, while Greenland, Belgium, Finland, Austria, France, Norway, Denmark, Iceland, and most of Eastern Europe are higher on the list.

But the bigger problem in regard to discussing suicide is that the suicide rate isn’t about happiness. When someone kills themself because they have a terminal illness that doesn’t mean that they were unhappy for the majority of their life and doesn’t mean that they were any unhappier than the terminally ill people who don’t do that. Some countries have a culture that is more positive towards suicide which would increase the incidence, Japan for example. While people who kill themselves in Japan are probably quite unhappy at the time I don’t think that there is any reason to believe that they are more unhappy than people in other countries who only keep living because suicide is considered to be wrong.

It seems to me that the best strategy when giving or MCing a lecture about a potentially contentious topic is to plan ahead for what not to discuss. For a lecture about happiness it would make sense to rule out all discussion of suicide, anti-depressants, and related issues as they aren’t relevant to the discussion and can’t be handled in an appropriate manner in question time.

29 July, 2014 10:57AM by etbe

Johannes Schauer

bootstrap.debian.net temporarily not updated

I'll be moving places twice within the next month and as I'm hosting the machine that generates the data, I'll temporarily suspend the bootstrap.debian.net service until maybe around September. Until then, bootstrap.debian.net will not be updated and retain the status as of 2014-07-28. Sorry if that causes any inconvenience. You can write to me if you need help with manually generating the data bootstrap.debian.net provided.

29 July, 2014 08:46AM

Ricardo Mones

Switching PGP keys

Finally I found the mood to do this, a process which started 5 years ago at DebConf 9 in Cáceres by following Ana's post, of course with my preferred options and my name, not like some other ;-).

Said that, dear reader, if you have signed my old key:

1024D/C9B55DAC 2005-01-19 [expires: 2015-10-01]
Key fingerprint = CFB7 C779 6BAE E81C 3E05  7172 2C04 5542 C9B5 5DAC

And want to sign my "new" and stronger key:

4096R/DE5BCCA6 2009-07-29
Key fingerprint = 43BC 364B 16DF 0C20 5EBD  7592 1F0F 0A88 DE5B CCA6

You're welcome to do so :-)

The new key is signed with the old, and the old key is still valid, and will probably be until expiration date next year. Don't forget to gpg --recv-keys DE5BCCA6 to get the new key and gpg --refresh-keys C9B55DAC to refresh the old (otherwise it may look expired).

Debian's Keyring Team has already processed my request to add the new key, so all should keep working smoothly. Kudos to them!

29 July, 2014 08:42AM

Christian Perrier

[life] Running update July 26th 2014

Dog, long time since I blogged about my running activities. Apparently, I haven't since I posted a summary for 2013.

So, well, that will be a long update as many things happened during the first half of 2014 when it comes at running, for me.

January: I was recovering from a fatigue fracture injury inherited from my last races in 2013. As a consequence, I resumed running only on Jan 7th. Therefore I cancelled my participation in the "Semi Raid 28", a night orienteering raid of about 50-60km in the southern neighbourhood of Paris. Instead, I actually offered my help to the organizers in collecting orienteering signs after the race (the longest one: 120km). So, I ended up spending over 24 hours running in the woods and hunting down hidden signs with the same information as the runners. My only advantage was that I was able to use my car to go from one point to another. Still, I ended up running over 70km in many small parts, often alone in the dark woods with my headlamp, on very muddy ground... and collecting nearly 80 huge signs.

February: Everything was going well and I for instance ran a great half-marathon in Bullion (south of Paris) in 1h38'21" (great for a quite hilly race)... until I twisted my left ankle while running back from work. A quite severe twist, though no bone damage, thankfully. I had to stop running, again, for 3 weeks. Biking to/from work was the replacement activity...

March: I resumed running on March 10th, one week before a quite difficult trail race in my neighbourhood (30km "only" but up to 800 meters positive climb). That race was a preparation (and a test after the injury) for my 3rd participation in the "Paris Ecotrail", an 80km trail race in the woods of the South-West area of Paris, ending in the Eiffel Tower area. Indeed, both went very well, though I was very careful with my ankle. I finally broke my record at Ecotrail, finishing the race in 9h08 (to be compared to 9h36 last year and 11h15 the year before).

April: The Paris marathon was scheduled one week after Ecotrail. Everybody will tell you that running a marathon one week after an 80km race is kinda crazy... which is why I did it..:-). That was my 3rd Paris marathon and my 12th marathon overall. However, this year, no record in sight. The challenge was running the marathon... dressed as SpongeBob (you know me, right?). I actually had great fun doing that and was happy to get zillions of cheers all over the race, from the crowd. I finally completed the race in 4h30, which is, after all, not that far from the time of my very first marathon (4h12). The only drawback was that the succession of quite long distance runs made my left knee suffer as it never had before. As a consequence, I (again) had to stop running for nearly one month before we found that I was quite sensitive to pronation, which the succession of long and slow races had made worse.

May: so finally, after these (very) long weeks, I could gradually resume running, which culminated in mid-May with the 50km race "Trail des Cerfs", in the Rambouillet forest, close to our place. This quite long but not too difficult trail race ("only" 800 meters positive climb overall) was completed in 5h16, which was completely unexpected given the low training during the previous weeks.

June: no race during that month. The entire month was focused on preparing for the Montagn'hard race of July 5th: several training sessions with a lot of climbing, either by running or by fast walking (nordic style), as well as downhill run training (always important for mountain trails).

July: the second "big peak" of my 2014 season was scheduled for July 5th: "La Montagn'hard", a mountain trail race close to Les Contamines, in the neighbourhood of Chamonix, the French mountaineering Mecca. "Only" 60 kilometers... but close to 5000 meters of positive climb. Montagn'hard is among the toughest mountain trail races in France and therefore a "must do" for trail runners. The race week-end also includes a 105km ultra-race, which is often said to be as hard as, maybe even harder than, the very famous "Ultra Trail du Mont-Blanc" in Chamonix. Still, for only my second season of mountain trail running, I decided to be "wise" and stick with the "medium" version (after all, my experience with mountain trails so far amounted to only two quite "short" ones). Needless to say, it was indeed a GREAT race. The environment is wonderful (the "Miage" side of the Mont-Blanc range), the race goes through great places (Col de Tricot, notably), and I got a great result, finishing 80th out of 325+ runners in 12h18, while my target time was around 13 hours.

This is where I am now. Nearly one month after Montagn'hard, I'm training hard for my next Big Goal: the "Sur la Trace des Ducs de Savoie" or "TDS", one of the 4 races of the Ultra Trail du Mont-Blanc week at the end of August (during DebConf): 120km, nearly 7500m of positive climb, between Courmayeur and Chamonix, through several passes, up to 2600m altitude. Yet another challenge: my first "over 24h" race, with a full night out in the mountains.

You'll certainly hear again from me about that...:-)

29 July, 2014 04:46AM

July 28, 2014

hackergotchi for Chris Lamb

Chris Lamb

start-stop-daemon: --exec vs --startas

start-stop-daemon is the classic tool on Debian and derived distributions to manage system background processes. A typical invocation from an initscript is as follows:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --exec /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

The basic operation is that it first checks whether /usr/sbin/daemon is already running and, if it is not, executes /usr/sbin/daemon -c /etc/daemon.cfg -p /var/run/daemon.pid. This process then has the responsibility to daemonise itself and write the resulting process ID to /var/run/daemon.pid.

start-stop-daemon then waits until /var/run/daemon.pid has been created as the test of whether the service has actually started, raising an error if that doesn't happen.

(In practice, the locations of all these files are parameterised to prevent DRY violations.)

Idempotency

By idempotence we are mostly concerned with repeated calls to /etc/init.d/daemon start not starting multiple instances of our daemon.

This might not seem a particularly big issue at first, but the increased adoption of stateless configuration management tools such as Ansible (which should be completely free to call start to ensure a started state) means that one should be particularly careful about this apparent corner case.

In its usual operation, start-stop-daemon ensures only one instance of the daemon is running with the --exec parameter: if the specified pidfile exists and the PID it refers to is an "instance" of that executable, then it is assumed that the daemon is already running and another copy is not started. This is handled in the pid_is_exec method (source) - the /proc/$PID/exe symlink is resolved and checked against the value of --exec.
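In shell terms, the test is roughly equivalent to the following (the PID and output here are illustrative):

$ cat /var/run/daemon.pid
14494
$ readlink /proc/14494/exe
/usr/sbin/daemon

If the resolved symlink matches the --exec argument, the daemon is considered to be running already and no second copy is started.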

Interpreted scripts

However, one case where this doesn't work is interpreted scripts. Let's look at what happens if /usr/sbin/daemon is such a script, e.g. a file that starts:

#!/usr/bin/env python
# [..]

The problem this introduces is that /proc/$PID/exe now points to the interpreter instead, often with an essentially non-deterministic version suffix:

$ ls -l /proc/14494/exe
lrwxrwxrwx 1 www-data www-data 0 Jul 25 15:18
                              /proc/14494/exe -> /usr/bin/python2.7

When this process is examined using the --exec mechanism outlined above, it will not be recognised as an instance of /usr/sbin/daemon, and therefore another instance of that daemon will be incorrectly started.

--startas

The solution is to use the --startas parameter instead. This omits the /proc/$PID/exe check and merely tests whether a PID with that number is running:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

Whilst this is therefore less reliable (in that the PID found in the pidfile could belong to an entirely different process), it's probably an acceptable trade-off against the case of running multiple instances of that daemon.

This danger can be ameliorated by using some of start-stop-daemon's other matching tests, such as --user or even --name.

28 July, 2014 01:15PM

hackergotchi for Daniel Pocock

Daniel Pocock

Secure that Dictaphone

2014 has been a big year for dictaphones so far.

First, it was France and the secret recordings made by Patrick Buisson during the reign of President Sarkozy.

Then, a US court ordered the release of the confidential Boston College tapes, part of an oral history project. Originally, each participant had agreed their recording would only be released after their death. Following the release, Sinn Fein leader Gerry Adams was arrested, questioned over a period of 100 hours, and released without charge.

Now Australia is taking its turn. In #dictagate down under, a senior political correspondent from a respected newspaper recorded (most likely with consent) some off-the-record comments of former conservative leader Ted Baillieu. Unfortunately, this journalist misplaced the dictaphone at the state conference of Baillieu's arch-rivals, the ALP. A scandal quickly erupted.

Secure recording technology

There is no question that electronic voice recordings can be helpful for many people and purposes, including journalists, researchers and call centers. However, the ease with which they can now be distributed is only dawning on people.

Twenty years ago, you would need to get the assistance of a radio or TV producer to disseminate such recordings so widely. Today there is email and social media. The Baillieu tapes were emailed directly to 400 people in a matter of minutes.

Just as technology brings new problems, it also brings solutions. Encryption is one of them.

Is encryption worthwhile?

Coverage of the Snowden revelations has shown that many popular security technologies are not one hundred percent safe. In each of these dictaphone cases, however, NSA-level expertise was not a factor. Even the simplest encryption would have caused endless frustration to the offenders who distributed the Baillieu tape.

How can anybody be sure encryption is reliable?

Part of the problem is education. Everybody using the technology needs to be aware of the basic concepts, for example, public key cryptography.

Another big question mark is back doors. There is ongoing criticism of Apple iPhone/iPod devices and the many ways that their encryption can be easily disabled by Apple engineers and presumably many former staff, security personnel and others. The message is clear: proprietary, closed-source solutions should be avoided. Free and open source technologies are the alternative. If a company does not give you the source code, how can anybody independently audit their code for security? With encryption software, what use is it if nobody has verified it?

What are the options?

However, given that the majority of people don't have a PhD in computer science or mathematics, are there convenient ways to get started with encryption?

Reading is a good start. The Code Book by Simon Singh (author of other popular science books like Fermat's Last Theorem) is very accessible, not classified and assumes no formal training in mathematics. Even for people who do know these topics inside out, it is a good book to share with friends and family.

The Guardian Project (no connection with Guardian Media of Edward Snowden fame) aims to provide a secure and easy to use selection of apps for pocket devices. This project has practical applications in business, journalism and politics alike.

How should a secure dictaphone app work?

Dictaphone users typically need to take their dictaphones into the field, so there is a risk of losing them or having them stolen. A strong security solution in this situation may involve creating an RSA key pair on a home/office computer, keeping the private key on the home computer and putting the public key on the dictaphone device. Configured this way, the dictaphone will not be able to play back any of the recordings itself - the user will always have to copy them to the computer for decryption.
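As a rough sketch of that workflow with GnuPG (addresses and filenames here are made up; a real dictaphone app would automate these steps):

$ gpg --gen-key                                    # once, on the home/office computer
$ gpg --export --armor dict@example.org > pubkey.asc
$ gpg --import pubkey.asc                          # on the dictaphone device
$ gpg --trust-model always -r dict@example.org --encrypt recording.wav
$ rm recording.wav                                 # only recording.wav.gpg stays in the field
$ gpg --decrypt recording.wav.gpg > recording.wav  # back home, where the private key lives

Losing the device then means losing ciphertext only.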

28 July, 2014 05:35AM by Daniel.Pocock

Russ Allbery

AFS::PAG 1.02

This is primarily a testing exercise. I moved my software release process and web page generation to a different host, and wanted to run through a release of a package to make sure that I got all the details right.

It's still a bit clunky, and I need to tweak the process, but it's close enough.

That said, there are a few minor changes in this module (which provides the minimum C glue required to do AFS operations from Perl — only the pieces that can't be duplicated by calling command-line programs). I'm improving the standardization of my Perl distributions, so I've moved NEWS to Changes and switched to the Lancaster Consensus environment variables for controlling testing. I also added some more pieces to the package metadata.

You can get the latest version from the AFS::PAG distribution page.

28 July, 2014 12:22AM

July 27, 2014

hackergotchi for Christian Perrier

Christian Perrier

Developers per country (July 2014)

It is time again for my annual report about the number of developers per country.

This is now the sixth edition of this report. Former editions:

So, here we are with the July 2014 version, sorted by the ratio of *active* developers per million population for each country.

Act: number of active developers
Dev: total number of developers
A/M: number of active devels per million pop.
D/M: number of devels per million pop.
2009: rank in 2009
2010: rank in 2010
2011: rank in 2011 (June)
2012: rank in 2012 (June)
2013: rank in 2013 (July)
2014: rank now
Code  Name            Population   Act  Dev   A/M   D/M  2009 2010 2011 2012 2013 2014
fi    Finland            5259250    19   31  3,61  5,89     1    1    1    1    1    1
ie    Ireland            4670976    13   17  2,78  3,64    13    9    6    2    2    2
nz    New Zealand        4331600    11   15  2,54  3,46     4    3    5    7    7    3 *
mq    Martinique          396404     1    1  2,52  2,52     -    -    3    4    4    4
se    Sweden             9088728    22   37  2,42  4,07     3    6    7    5    5    5
ch    Switzerland        7870134    19   29  2,41  3,68     2    2    2    3    3    6 *
no    Norway             4973029    11   14  2,21  2,82     5    4    4    6    6    7 *
at    Austria            8217280    18   29  2,19  3,53     6    8   10   10   10    8 *
de    Germany           81471834   164  235  2,01  2,88     7    7    9    9    8    9 *
lu    Luxemburg           503302     1    1  1,99  1,99     8    5    8    8    9   10 *
fr    France            65350000   101  131  1,55  2       12   12   11   11   11   11
au    Australia         22607571    32   60  1,42  2,65     9   10   12   12   12   12
be    Belgium           11071483    14   17  1,26  1,54    10   11   13   13   13   13
uk    United-Kingdom    62698362    77  118  1,23  1,88    14   14   14   14   14   14
nl    Netherlands       16728091    18   40  1,08  2,39    11   13   15   15   15   15
ca    Canada            33476688    34   63  1,02  1,88    15   15   17   16   16   16
dk    Denmark            5529888     5   10  0,9   1,81    17   17   16   17   17   17
es    Spain             46754784    34   56  0,73  1,2     16   16   19   18   18   18
it    Italy             59464644    36   52  0,61  0,87    23   22   22   19   19   19
hu    Hungary           10076062     6   12  0,6   1,19    18   25   26   20   24   20 *
cz    Czech Rep         10190213     6    6  0,59  0,59    21   20   21   21   20   21 *
us    USA              313232044   175  382  0,56  1,22    19   21   25   24   22   22
il    Israel             7740900     4    6  0,52  0,78    24   24   24   25   23   23
hr    Croatia            4290612     2    2  0,47  0,47    20   18   18   26   25   24 *
lv    Latvia             2204708     1    1  0,45  0,45    26   26   27   27   26   25 *
bg    Bulgaria           7364570     3    3  0,41  0,41    25   23   23   23   27   26 *
sg    Singapore          5183700     2    2  0,39  0,39     -    -    -   33   33   27 *
uy    Uruguay            3477778     1    2  0,29  0,58    22   27   28   28   28   28
pl    Poland            38441588    11   15  0,29  0,39    29   29   30   30   30   29 *
jp    Japan            127078679    36   52  0,28  0,41    30   28   29   29   29   30 *
lt    Lithuania          3535547     1    1  0,28  0,28    28   19   20   22   21   31 *
gr    Greece            10787690     3    4  0,28  0,37    33   38   34   35   35   32 *
cr    Costa Rica         4301712     1    1  0,23  0,23    31   30   31   31   31   33 *
by    Belarus            9577552     2    2  0,21  0,21    35   36   39   39   32   34 *
ar    Argentina         40677348     8   10  0,2   0,25    34   33   35   32   37   35 *
pt    Portugal          10561614     2    4  0,19  0,38    27   32   32   34   34   36 *
sk    Slovakia           5477038     1    1  0,18  0,18    32   31   33   36   36   37 *
rs    Serbia             7186862     1    1  0,14  0,14     -    -    -    -   38   38
tw    Taiwan            23040040     3    3  0,13  0,13    37   34   37   37   39   39
br    Brazil           192376496    18   21  0,09  0,11    36   35   38   38   40   40
cu    Cuba              11241161     1    1  0,09  0,09     -   38   41   41   41   41
co    Colombia          45566856     4    5  0,09  0,11    41   44   46   47   46   42 *
kr    South Korea       48754657     4    6  0,08  0,12    39   39   42   42   42   43 *
gt    Guatemala         13824463     1    1  0,07  0,07     -    -    -    -   43   44 *
ec    Ecuador           15007343     1    1  0,07  0,07     -   40   43   43   45   45
cl    Chile             16746491     1    2  0,06  0,12    42   41   44   44   47   46 *
za    South Africa      50590000     3   10  0,06  0,2     38   48   48   48   48   47 *
ru    Russia           143030106     8    9  0,06  0,06    43   42   47   45   49   48 *
mg    Madagascar        21281844     1    1  0,05  0,05    44   37   40   40   50   49 *
ro    Romania           21904551     1    2  0,05  0,09    45   43   45   46   51   50 *
ve    Venezuela         28047938     1    1  0,04  0,04    40   45   50   49   44   51 *
my    Malaysia          28250000     1    1  0,04  0,04     -    -   49   50   52   52
pe    Peru              29907003     1    1  0,03  0,03    46   46   51   51   53   53
tr    Turkey            74724269     2    2  0,03  0,03    47   47   52   52   54   54
ua    Ukraine           45134707     1    1  0,02  0,02    48   53   58   59   55   55
th    Thailand          66720153     1    2  0,01  0,03    50   50   54   54   56   56
eg    Egypt             80081093     1    3  0,01  0,04    51   51   55   55   57   57
mx    Mexico           112336538     1    1  0,01  0,01    49   49   53   53   58   58
cn    China           1344413526    10   14  0,01  0,01    53   53   57   56   59   59
in    India           1210193422     8    9  0,01  0,01    52   52   56   57   60   60
sv    El Salvador        7066403     0    1  0     0,14     -    -   36   58   61   61

Total                               969 1561  (62,08% of developers are active)

("-" marks years where the country did not appear in the ranking; "*" marks countries whose rank changed since last year.)

A few interesting facts:
  • New Zealand bumps from rank 7 to rank 3, thanks to one new active developer
  • Switzerland loses one developer and goes down to rank 6
  • Norway also slightly goes down by losing one developer
  • With two more developers, Austria climbs up to rank 8 and overtakes Germany...;-)
  • Hungary climbs a little bit by gaining one developer
  • Singapore doubles its number of developers from 1 to 2 and bumps from 33 to 27
  • One rank up too for Poland that gained one developer
  • Down to rank 31 for Lithuania by losing one developer
  • Up to rank 32 for Greece with 4 developers instead of 3
  • Argentina goes up by having two more developers (it lost 2 last year)
  • Up from 46 to 42 for Colombia by winning one more developer
  • One more developer and Russia climbs from 49 to 48
  • One less for Venezuela that has only one developer left...:-(
  • No new country this year. Less movement towards "the universal OS"?
  • We have 12 more active Debian developers and 26 more developers overall. Less progression than last year.
  • The ratio of active developers is nearly stable, though slightly decreasing

27 July, 2014 07:37AM

July 26, 2014

hackergotchi for Holger Levsen

Holger Levsen

20140726-the-future-is-now

Do you remember the future?

Unless you are over 60, you weren't promised flying cars. You were promised an oppressive cyberpunk dystopia. Here you go.

(Source: found in the soup)

Luckily the future today is still unwritten. Shape it well.

26 July, 2014 12:08PM

July 25, 2014

Richard Hartmann

Release Critical Bug report for Week 30

I have been asked to publish bug stats from time to time. Not exactly sure about the schedule yet, but I will try and stick to Fridays, as in the past; this is for the obvious reason that it makes historical data easier to compare. "Last Friday of each month" may or may not be too much. Time will tell.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1511
    • Affecting Jessie: 431 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 383 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 44 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 319 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

25 July, 2014 09:58PM by Richard 'RichiH' Hartmann

Juliana Louback

Extending an xTuple Business Object

xTuple is, in my opinion, incredibly well designed; the code is clean and the architecture adherent to a standardized structure. All this makes working with xTuple software quite a breeze.

I wanted to integrate JSCommunicator into the web-based xTuple version. JSCommunicator is a SIP communication tool, so my first step was to create an extension for the SIP account data. Luckily for me, the xTuple development team published an awesome tutorial for writing an xTuple extension.

xTuple cleverly uses model based business objects for the various features available. This makes customizing xTuple very straightforward. I used the tutorial mentioned above for writing my extension, but soon noticed my goals were a little different. A SIP account has 3 data fields, these being the SIP URI, the account password and an optional display name. xTuple currently has a business object in the core code for a User Account and it would make a lot more sense to simply add my 3 fields to this existing business object rather than create another business object. The tutorial very clearly shows how to extend a business object with another business object, but not how to extend a business object with only new fields (not a whole new object).

Now maybe I’m just a whole lot slower than most people, but I had a ridiculously hard time figuring this out. Mind you, this is because I’m slow; the xTuple documentation and code are understandable and as self-explanatory as it gets. I think it just takes a bit of getting used to. Either way, I thought this just might be useful to others, so here is how I went about it.

Setup

First you’ll have to set up your xTuple development environment and fork the xtuple and xtuple-extensions repositories as shown in this handy tutorial. A footnote I’d like to add: please verify that your version of Vagrant (and anything else you install) is the one listed in the tutorial. I think I spent two entire days or more on a wild goose (bug) chase trying to set up my environment, when the cause of all the errors was that I had somehow installed an older version of Vagrant - 1.5.4 instead of 1.6.3. Please don’t make the same mistake I did. Actually, if for some reason you get the following error when you try using node:

<<ERROR 2014-07-10T23:52:46.948Z>> Unrecoverable exception. Cannot call method 'extend' of undefined

    at /home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:37:39

    at Object.<anonymous> (/home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:1364:3)
    ...

chances are, you have the wrong version. That’s what happened to me. The Vagrant Virtual Development Environment automatically installs and configures everything you need; it’s ready to go. So if you find yourself manually installing, updating, apt-getting things and so on, you probably did something wrong.

Coding

So by now we should have the Vagrant Virtual Development Environment set up and the web app up and running and accessible at localhost:8443. So far so good.

Disclaimer: You will note that much of this is similar - or rather, nearly identical - to xTuple’s tutorial but there are some small but important differences and a few observations I think might be useful. Other Disclaimer: I’m describing how I did it, which may or may not be ‘up to snuff’. Works for me though.

Schema

First let’s make a schema for the table we will create with the new custom fields. Be sure to create the correct directory structure, aka /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>/database/source or in my case /path/to/xtuple-extensions/source/sip_account/database/source, and create the file create_sa_schema.sql; ‘sa’ is the name of my schema. This file will contain the following lines:

do $$
  /* Only create the schema if it hasn't been created already */
  var res, sql = "select schema_name from information_schema.schemata where schema_name = 'sa'";
  res = plv8.execute(sql);
  if (!res.length) {
    sql = "create schema sa; grant all on schema sa to group xtrole;";
    plv8.execute(sql);
  }
$$ language plv8;

Of course, feel free to replace ‘sa’ with your schema name of choice. All the code described here can be found in my xtuple-extensions fork, on the sip_ext branch.

Table

We’ll create a table containing your custom fields and a link to an existing table - the table for the existing business object you want to extend. If you’re wondering why make a whole new table for a few extra fields, here’s a good explanation; the case in question there is adding fields to the Contact business object.

You need to first figure out what table you want to link to. This might not be uber easy. I think the best way to go about it is to look at the ORMs. The xTuple ORMs are a JSON mapping between the SQL tables and the object-oriented world above the database; they’re .json files found at /path/to/xtuple/enyo-client/database/orm/models for the core business objects and at /path/to/xtuple/enyo-client/extensions/source/<EXTENSION NAME>/database/orm/models for extension business objects. I’ll give two examples. If you look at contact.json you will see that the Contact business object refers to the table “cntct”. Look for the “type”: “Contact” on the line above, so we know it’s the “Contact” business object. In my case, I wanted to extend the UserAccount and UserAccountRelation business objects, so check out user_account.json. The table listed for UserAccount is xt.usrinfo and the table listed for UserAccountRelation is xt.usrlite. A closer look at the sql files for these tables (usrinfo.sql and usrlite.sql) revealed that usrinfo is in fact a view and usrlite is ‘A light weight table of user information used to avoid punishingly heavy queries on the public usr view’. I chose to refer to xt.usrlite - that, or I received error messages when trying the other table names.
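For example, a quick and admittedly crude way to hunt down the ORM file for a given business object from the shell (the pattern is just the "type" line mentioned above):

$ grep -rl '"type": "UserAccount"' /path/to/xtuple/enyo-client/database/orm/models/
/path/to/xtuple/enyo-client/database/orm/models/user_account.json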

Now I’ll make the file /path/to/xtuple-extensions/source/sip_account/database/source/usrlitesip.sql, to create a table with my custom fields plus the link to the usrlite table. Don’t quote me on this, but I’m under the impression that the norm for naming the sql file joining the tables is: the name of the table you are referring to (‘usrlite’ in this case) plus your extension’s name. Content of usrlitesip.sql:

select xt.create_table('usrlitesip', 'sa');

select xt.add_column('usrlitesip','usrlitesip_id', 'serial', 'primary key', 'sa');
select xt.add_column('usrlitesip','usrlitesip_usr_username', 'text', 'references xt.usrlite (usr_username)', 'sa');
select xt.add_column('usrlitesip','usrlitesip_uri', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_name', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_password', 'text', '', 'sa');

comment on table sa.usrlitesip is 'Joins User with SIP account';

Breaking it down, line 1 creates the table named ‘usrlitesip’ (no duh), and line 2 is for the primary key (self-explanatory). You can then add any columns you like; just be sure to add one that references the table you want to link to. I checked usrlite.sql and saw the primary key is usr_username; be sure to use the primary key of the table you are referencing.

You can check what you made by executing the .sql files like so:

$ cd /path/to/xtuple-extensions/source/sip_account/database/source
$ psql -U admin -d dev -f create_sa_schema.sql
$ psql -U admin -d dev -f usrlitesip.sql

After which you will see the table with the columns you created if you enter:

$ psql -U admin -d dev -c "select * from sa.usrlitesip;"

Now create the file /path/to/xtuple-extensions/source/sip_account/database/source/manifest.js to put the files together and in the right order. It should contain:

{
  "name": "sip_account",
  "version": "1.4.1",
  "comment": "Sip Account extension",
  "loadOrder": 999,
  "dependencies": ["crm"],
  "databaseScripts": [
    "create_sa_schema.sql",
    "usrlitesip.sql",
    "register.sql"
  ]
}

I think the “name” has to be the same as what you named your extension directory in /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>. I think the “comment” can be anything you like, and you want your “loadOrder” to be high so it’s the last thing installed (as it’s an add-on). So far we are doing exactly what’s instructed in the xTuple tutorial. It’s repetitive, but I think you can never have too many examples to compare to. In “databaseScripts” you will list the two .sql files you just created for the schema and the table, plus another file to be made in the same directory named register.sql.

I’m not sure why you have to make register.sql, or even if you indeed have to. If you leave the file empty there will be a build error, so either put a ‘;’ in register.sql or remove the “register.sql” line from manifest.js; I think for now we are good without it.

Now let’s update the database with our new extension:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ psql -U admin -d dev -c "select * from xt.ext;"

That last command should display a table with a list of extensions; the ones already in xtuple like ‘crm’ and ‘billing’ and some others plus your new extension, in this case ‘sip_account’. When you run build_app.js you’ll probably see a message along the lines of “<Extension name> has no client code, not building client code” and that’s fine because yeah, we haven’t worked on the client code yet.

ORM

Here’s where things start getting different. So ORMs link your object to an SQL table. But we DON’T want to make a new business object, we want to extend an existing business object, so the ORM we will make will be a little different than the xTuple tutorial. Steve Hackbarth kindly explained this new business object/existing business object ORM concept here.

First we’ll create the directory /path/to/xtuple-extensions/source/sip_account/database/orm/ext, according to xTuple convention. ORMs for new business objects would be put in /path/to/xtuple-extensions/source/sip_account/database/orm/models. Now we’ll create the .json file /path/to/xtuple-extensions/source/sip_account/database/orm/ext/user_account.json for our ORM. Once again, don’t quote me on this, but I think the name of the file should be the name of the business object you are extending, as is done in the tutorial example extending the Contact object. In our case, UserAccount is defined in user_account.json and that’s what I named my extension ORM too. Here’s what you should place in it:

 1 [
 2   {
 3     "context": "sip_account",
 4     "nameSpace": "XM",
 5     "type": "UserAccount",
 6     "table": "sa.usrlitesip",
 7     "isExtension": true,
 8     "isChild": false,
 9     "comment": "Extended by Sip",
10     "relations": [
11       {
12         "column": "usrlitesip_usr_username",
13         "inverse": "username"
14       }
15     ],
16     "properties": [
17       {
18         "name": "uri",
19         "attr": {
20           "type": "String",
21           "column": "usrlitesip_uri",
22           "isNaturalKey": true
23         }
24       },
25       {
26         "name": "displayName",
27         "attr": {
28           "type": "String",
29           "column": "usrlitesip_name"
30         }
31       },
32       {
33         "name": "sipPassword",
34         "attr": {
35           "type": "String",
36           "column": "usrlitesip_password"
37         }
38       }
39     ],
40     "isSystem": true
41   },
42   {
43     "context": "sip_account",
44     "nameSpace": "XM",
45     "type": "UserAccountRelation",
46     "table": "sa.usrlitesip",
47     "isExtension": true,
48     "isChild": false,
49     "comment": "Extended by Sip",
50     "relations": [
51       {
52         "column": "usrlitesip_usr_username",
53         "inverse": "username"
54       }
55     ],
56     "properties": [
57       {
58         "name": "uri",
59         "attr": {
60           "type": "String",
61           "column": "usrlitesip_uri",
62           "isNaturalKey": true
63         }
64       },
65       {
66         "name": "displayName",
67         "attr": {
68           "type": "String",
69           "column": "usrlitesip_name"
70         }
71       },
72       {
73         "name": "sipPassword",
74         "attr": {
75           "type": "String",
76           "column": "usrlitesip_password"
77         }
78       }
79     ],
80     "isSystem": true
81   }
82 ]

Note the “context” is my extension name, because the context + nameSpace + type combo has to be unique. We already have a UserAccount and UserAccountRelation object in the “XM” namespace in the “xtuple” context, from the original user_account.json; now we will also have a UserAccount and UserAccountRelation object in the “XM” namespace in the “sip_account” context. What else is important? Note that “isExtension” is true on lines 7 and 47, and the “relations” item contains the “column” of the foreign key we referenced.

This is something you might want to verify: “column” (lines 12 and 52) is the name of the attribute on your table. When we made a reference to the primary key usr_username from the xt.usrlite table, we named that column usrlitesip_usr_username. But the “inverse” is the attribute name associated with the original sql column in the original ORM. Did I lose you? I had a lot of trouble with this silly thing. In the original ORM that created a new UserAccount business object, the primary key attribute is named “username”, as can be seen here. That is what should be used for the “inverse” value. Not the sql column name (usr_username) but the object attribute name (username). I’m emphasizing this because I made that mistake, and if I can spare you the pain I will.

If we rebuild our extension everything should come along nicely, but you won’t see any changes just yet in the web app because we haven’t created the client code.

Client

Create the directory /path/to/xtuple-extensions/source/sip_account/client which is where we’ll keep all the client code.

Extend Workspace View I want the fields I added to show up on the form to create a new User Account, so I need to extend the view for the User Account workspace. I’ll start by creating a directory /path/to/xtuple-extensions/source/sip_account/client/views and in it creating a file named ‘workspace.js’ containing this code:

XT.extensions.sip_account.initWorkspace = function () {

	var extensions = [
  	{kind: "onyx.GroupboxHeader", container: "mainGroup", content: "_sipAccount".loc()},
  	{kind: "XV.InputWidget", container: "mainGroup", attr: "uri" },
  	{kind: "XV.InputWidget", container: "mainGroup", attr: "displayName" },
  	{kind: "XV.InputWidget", container: "mainGroup", type:"password", attr: "sipPassword" }
	];

	XV.appendExtension("XV.UserAccountWorkspace", extensions);
};

So I’m initializing my workspace and creating an array of items to add (append) to the view XV.UserAccountWorkspace. The first ‘item’ is an onyx.GroupboxHeader, which is a pretty divider for my new form fields, the kind you find in the web app at Setup > User Accounts, like ‘Overview’. I have no idea what options there are for container other than “mainGroup”, so let’s stick to that. I’ll explain content: “_sipAccount”.loc() in a bit. Next I created three input fields of the XV.InputWidget kind. This also confused me a bit, as there are different kinds of input to be used, like dropdowns and checkboxes. The only advice I can give is to snoop around the web app, find an input you like and look up the corresponding workspace.js file to see what was used.

What we just did is (should be) enough for the new fields to show up on the User Account form. But before we see things change, we have to package the client. Create the file /path/to/xtuple-extensions/source/sip_account/client/views/package.js. This file is needed to ‘package’ groups of files and indicates the order the files should be loaded (for more on that, see this). For now, all the file will contain is:

enyo.depends(
"workspace.js"
);

You also need to package the ‘views’ directory containing workspace.js, so create the file /path/to/xtuple-extensions/source/sip_account/client/package.js and in it show that the directory ‘views’ and its contents must be part of the higher level package:

enyo.depends(
"views"
);

I like to think of it as a box full of smaller boxes.

This will sound terrible, but apparently you also need to create the file /path/to/xtuple-extensions/source/sip_account/client/core.js containing this line:

XT.extensions.sip_account = {};

I don’t know exactly why, but presumably this creates the XT.extensions.sip_account namespace that workspace.js (and, later, parameter.js) hang their init functions off. (The xTuple tutorial uses icecream here, for its icecream extension; use your own extension’s name.) As soon as I find out more I’ll be sure to inform you.

As we’ve added a file to the client directory, be sure to update /path/to/xtuple-extensions/source/sip_account/client/package.js so it includes the new file:

enyo.depends(
"core.js",
"views"
);

Translations

Remember “_sipAccount”.loc() in our workspace.js file? xTuple has great internationalization support and it’s easy to use. Just create the directory and file /path/to/xtuple-extensions/source/sip_account/client/en/strings.js and in it put key-value pairs for labels and their translations, like this:

(function () {
  "use strict";

  var lang = XT.stringsFor("en_US", {
    "_sipAccount": "Sip Account",
    "_uri": "Sip URI",
    "_displayName": "Display Name",
    "_sipPassword": "Password"
  });

  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());

So far I included all the labels I used in my Sip Account form. If you use the wrong label (key) or forget to include a corresponding key-value pair in strings.js, xTuple will simply render your label as “_labelName”, underscore and all.

Now build your extension and start up the server:

$ cd /path/to/xtuple 
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ node node-datasource/main.js

If the server is already running, just stop it and restart it to reflect your changes.

Now if you go to Setup > User Accounts and click the “+” button, you should see a nice little addition to the form with a ‘Sip Account’ divider and three new fields. Nice, eh?

Extend Parameters

Currently you can search your User Accounts list using any of the User Account fields. It would be nice to be able to search with the Sip account fields we added as well. To do that, let’s create the directory /path/to/xtuple-extensions/source/sip_account/client/widgets and there create the file parameter.js to extend XV.UserAccountListParameters. Once again, you’ll have to look this up. In the xTuple code you’ll find the application’s parameter.js in /path/to/xtuple/enyo-client/application/source/widgets. Search for the business object you are extending (for example, XV.UserAccount) and look for some combination of the business object name and ‘Parameters’. If there’s more than one, try different ones. Not a very refined method, but it worked for me. Here’s the content of our parameter.js:

XT.extensions.sip_account.initParameterWidget = function () {

  var extensions = [
    {kind: "onyx.GroupboxHeader", content: "_sipAccount".loc()},
    {name: "uri", label: "_uri".loc(), attr: "uri", defaultKind: "XV.InputWidget"},
    {name: "displayName", label: "_displayName".loc(), attr: "displayName", defaultKind: "XV.InputWidget"}
  ];

  XV.appendExtension("XV.UserAccountListParameters", extensions);
};

Note that I didn’t include a search field for the password attribute, for obvious reasons. Now, once again, we package this new code addition by creating a /path/to/xtuple-extensions/source/sip_account/client/widgets/package.js file:

enyo.depends(
"parameter.js"
);

We also have to update /path/to/xtuple-extensions/source/sip_account/client/package.js:

enyo.depends(
"core.js",
"widgets",
"views"
);

Rebuild the extension (and restart the server) and go to Setup > User Accounts. Press the magnifying glass button on the upper left side of the screen and you’ll see many options for filtering the User Accounts, among them the SIP Uri and Display Name.

Extend List View

You might want your new fields to show up on the list of User Accounts. There’s a bit of an issue here because, unlike what we did in workspace.js and parameter.js, we can’t append new things to the list of UserAccounts with the function XV.appendExtension(args). First I tried overwriting the original UserAccountList, which works but is far from ideal, as this could result in a loss of data from the core implementation. After some discussion with the xTuple dev community, now there’s a better alternative:

Create the file /path/to/xtuple-extensions/source/sip_account/client/views/list.js and add the following:

1 var oldUserAccountListCreate = XV.UserAccountList.prototype.create;
2 
3 XV.UserAccountList.prototype.create = function () {
4   oldUserAccountListCreate.apply(this, arguments);
5   this.createComponent(
6   {kind: "XV.ListColumn", container: this.$.listItem, components: [
7        {kind: "XV.ListAttr", attr: "uri"}
8    ]})
9 };

To understand what I’m doing, check out the XV.UserAccountList implementation in /path/to/xtuple/enyo-client/application/source/views/list.js – the entire highlighted part. What we are doing is ‘extending’ XV.UserAccountList through ‘prototype chaining’; this is how inheritance works with Enyo. In line 1 we keep a reference to the original create function, and in line 4 we invoke it, so we inherit all the original features, including the components array the list is based on. We then create an additional component imitating the setup shown in XV.UserAccountList: an XV.ListColumn containing an XV.ListAttr, which should be placed in the XV.ListItem components array as is done with the existing columns (refer to the implementation). Components can (or should?) have names, which are used to access said components. You’d refer to a specific component through the this.$.componentName hash. The components in XV.UserAccountList don’t have names, so Enyo automatically names them (apparently) based on the kind name; for example, something of the kind ListItem is named listItem. I found this out after a lot of trial and error, and it’s not a bulletproof solution. Can be bettered.

It’s strange because if you encapsulate that code with

XT.extensions.sip_account.initList = function () {
 //Code here
};

as is done with parameter.js and workspace.js (and in the xTuple tutorial you are supposed to do that with a new business object), it doesn’t work. I have no idea why. This might be ‘wrong’ or against xTuple coding norms; I will find out and update this post ASAP. But it does work this way. * shrugs *

That said, as we’ve created the list.js file, we need to add it to our package by editing /path/to/xtuple-extensions/source/sip_account/client/views/package.js:

enyo.depends(
"list.js",
"workspace.js"
);

That’s all. Rebuild the app and restart your server and when you select Setup > User Accounts in the web app you should see the Sip URI displayed on the User Accounts that have the Sip Account data. Add a new User Account to try this out.

25 July, 2014 03:06PM

hackergotchi for Steve Kemp

Steve Kemp

The selfish programmer

Once upon a time I wrote a piece of software for scheduling the classes available to a college.

There was a bug in the scheduler: Students who happened to be named 'Steve Kemp' had a significantly higher chance (>=80% IIRC) of being placed in lessons where the class makeup was more than 50% female.

This bug was never fixed. Which was nice, because I spent several hours both implementing and disguising this feature.

I was a bad coder when I was a teenager.

These days I'm still a bad coder, but in different ways.

25 July, 2014 01:16PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Multiarchified eID libraries for Debian

A few weeks back, I learned that some government web interfaces require users to download PDF files, sign them with their eID, and upload the signed PDF documents. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eid middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

  • run dpkg --add-architecture i386 if you haven't yet enabled multiarch
  • Install the eid-archive package, as usual
  • Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
  • run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
  • run apt-get update
  • run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
  • run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
  • run your 32-bit application and sign things.

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

25 July, 2014 11:44AM

Tim Retout

London.pm's July 2014 tech meeting

Last night, I went to the London.pm tech meeting, along with a couple of colleagues from CV-Library. The talks, combined with the unusually hot weather we're having in the UK at the moment and with my holiday all last week, make it feel like I'm at a software conference. :)

The highlight for me was Thomas Klausner's talk about OX (and AngularJS). We bought him a drink at the pub later to pump him for information about using Bread::Board, with some success. It was worth the long, late commute back to Southampton.

All very enjoyable, and I hope they have more technical meetings soon. I'm planning to attend the London Perl Workshop later in the year.

25 July, 2014 07:36AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Nice read: «The Fasinatng … Frustrating … Fascinating History of Autocorrect»

A long time ago, I did some (quite minor!) work on natural language parsing. Most of what I learned was the very basic rudiments of what needs to be done to begin with. But I like reading some texts on the subject every now and then.

I am also a member of the ACM — Association for Computing Machinery. Most of you will be familiar with it; it's one of the main scholarly associations for the field of computing. One of the basic perks of being an ACM member is the subscription to a very nice magazine, Communications of the ACM. And, of course, although I enjoy the physical magazine, I like reading some columns and articles as they appear along the month using the RSS feeds. They also often contain pointers to interesting reads on other media — as happened today. I found quite a nice article, I think, worth sharing with whoever thinks I have interesting things to say.

They published a very short blurb titled The Fasinatng … Frustrating … Fascinating History of Autocorrect. I was somewhat skeptical on seeing that it links to an identically named article published in Wired, but I gave it a shot anyway...

The article follows a style that's often abused and rarely amusing, but which I think was quite well done here: the commented interview. Rather than just drily following through an interview, the writer tells us a story about that interview. And this is the story of Gideon Lewis-Kraus interviewing Dean Hachamovitch, the creator of the much hated (but very much needed) autocorrect feature that appeared originally in Microsoft Word.

The story of Hachamovitch's work (and its derivations, to the much maligned phone input predictors) over the last twenty-something years is very light to read, very easy to enjoy. I hope you find it as interesting as I did.

25 July, 2014 03:18AM by gwolf

July 24, 2014

Craig Small

PHP uniqid() not always a unique ID

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that to run a ping program you need root access to a socket, and doing that is far too difficult and scary in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups this was also fine. For larger setups, it was not fine at all: random failed interfaces and, most bizarrely of all, files seemingly disappearing. The program checked that a file existed and then ran stat in a loop to see if data was there. The file-exists check passed, but the stat said file not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the filename ids of the temporary files. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time. Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, the chance increases that two child processes start the reachability poll in the same microsecond and therefore get the same uniqid. That's why the problem happened, but only some of the time.

The stat error was another symptom of this bug, what would happen was:

  • Child 1 starts the poll, temp filename abc123
  • Child 2 starts the poll in the same microsecond, temp filename is also abc123
  • The wait poller for children 1 and 2 starts, sees that the temp file exists, and goes into a loop of stat-and-wait until there is a result
  • Child 1 finishes, grabs the details, deletes the temporary file
  • Child 2 loops, tries to run stat but finds no file

Who finishes first is entirely dependent on how quickly the fping returns, and that is dependent on how quickly the remote host responds to pings, so it's kind of random.

A minor patch uses tempnam() instead of uniqid(), adding the interface ID into the mix for good measure (no two children will poll the same interface; the parent's scheduler makes sure of that). The initial response is that it is looking good.
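To make the difference concrete, here is a minimal sketch (illustrative output; the "42" stands in for an interface ID):

$ php -r 'echo uniqid(), "\n";'
53d8f1be2c4a7

$ php -r 'echo tempnam(sys_get_temp_dir(), "fping_42_"), "\n";'
/tmp/fping_42_Jx3j2j

uniqid() just encodes the current microsecond, so two processes calling it in the same instant collide; tempnam() asks the OS for a name that is guaranteed unique and atomically creates the file.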

 

24 July, 2014 12:17PM by Craig Small

hackergotchi for Martin Pitt

Martin Pitt

vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly it takes Markdown and spits out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always write LaTeX commands directly without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺
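For example (file names made up; these are the same invocations the vim shortcuts below use):

$ pandoc -t beamer talk.md -o talk.pdf      # Markdown to a beamer presentation PDF
$ pandoc -t latex article.md -o article.pdf # Markdown to PDF via an intermediate LaTeX document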

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

24 July, 2014 09:38AM by pitti

hackergotchi for Matthew Palmer

Matthew Palmer

First Step with Clojure: Terror

$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP1, with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net2 pseudo-CDN3, which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
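You can check this for yourself against the artifact lein fetched earlier (output abridged; without the signer’s key in your keyring, even the basic check fails):

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar
gpg: Can't check signature: No public key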

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.


  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

24 July, 2014 12:30AM by Matt Palmer (mpalmer@hezmatt.org)

July 23, 2014

Russ Allbery

WebAuth 4.6.1

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.
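Configuration-wise, this should amount to a single directive in the WebKDC's Apache configuration; a sketch, where the ticket cache path is purely illustrative:

WebKdcFastArmorCache /var/lib/webkdc/fast_armor_cache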

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

23 July, 2014 10:59PM

Lior Kaplan

Testing PHPNG on Debian/Ubuntu

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we’ve started to provide binaries for it, although it’s still a branch on top of PHP’s master branch. See more details about PHPNG on Zeev Suraski’s blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb [arch=amd64] http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with work more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this can also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release that comes after PHP 5.6.


Filed under: Debian GNU/Linux, PHP

23 July, 2014 09:01PM by Kaplan

Petter Reinholdtsen

98.6 percent done with the Norwegian draft translation of Free Culture

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally finished translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proofreading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects during the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

23 July, 2014 08:40PM

hackergotchi for Michael Prokop

Michael Prokop

Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he works for Docker Inc – released the first version of the book, and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and I definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
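For readers who haven’t seen the trick, a minimal Dockerfile sketch of the pattern (the base image and timestamp are arbitrary examples, not taken from the book):

FROM debian:wheezy
# Changing this value invalidates the build cache for every following
# instruction, forcing a fresh run of the apt-get line below.
ENV REFRESHED_AT 2014-07-23
RUN apt-get update && apt-get -y upgrade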

There are some references to further online resources, which is especially relevant for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

23 July, 2014 08:16PM by mika

hackergotchi for Tanguy Ortolo

Tanguy Ortolo

GNU/Linux graphic sessions: suspending your computer

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
            --dest='org.freedesktop.UPower' \
            /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with
signature "" on interface "org.freedesktop.UPower" doesn't exist

The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
                        /usr/sbin/pm-suspend-hybrid, \
                        /usr/sbin/pm-hibernate

That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
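For example, with xbindkeys, a minimal ~/.xbindkeysrc entry for the first of those could look like this (binding it to the dedicated XF86Sleep key is just an illustration):

# Suspend to RAM on the sleep key
"sudo pm-suspend"
    XF86Sleep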

23 July, 2014 12:45PM by Tanguy

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

The sad state of Linux Wi-Fi

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer “this is just crap”.

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

  • Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz—this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
  • Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
  • Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
  • Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With a billion mobile devices now running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points layering hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.

Augh.

23 July, 2014 11:28AM

Francesca Ciceri

Adventures in Mozillaland #3

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I now feel pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information on how to better debug their component/product into comments, but trust me: this will make you happy in the long run.
A wiki page with basic information on how debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs and what information to ask the reporter for in order to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

23 July, 2014 11:04AM

hackergotchi for Andrew Pollock

Andrew Pollock

[tech] Going solar

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of 8 250-watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is that Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

23 July, 2014 05:36AM

hackergotchi for Matthew Palmer

Matthew Palmer

Per-repo update hooks with gitolite

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured via files stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.
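As a rough sketch of the conf side of that (this assumes the hook scripts have been installed where gitolite’s rc file expects local code, and all the names are illustrative):

repo widgets
    RW+                      =   alice
    # chain three hooks for this repository
    option hook.post-receive =   validate-style code-review ci-push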

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!

23 July, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

Jonathan McCrohan

Git remote helpers

If you follow upstream Git development closely, you may have noticed that the Mercurial and Bazaar remote helpers (use git to interact with hg and bzr repos) no longer live in the main Git tree. They have been split out into their own repositories, here and here.

git-remote-bzr had been packaged (as git-bzr) for Debian since March 2013, but was removed in May 2014 when the remote helpers were removed upstream. There had been a wishlist bug report open since Mar 2013 to get git-remote-hg packaged, and I had submitted a patch, but it was never applied.

Splitting these remote helpers out upstream has allowed Vagrant Cascadian and myself to pick up these packages, and both are now available in Debian:

apt-get install git-remote-hg git-remote-bzr
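Once installed, hg and bzr repositories can be used like any other git remote via the hg:: and bzr:: prefixes, for example (the repository URLs are illustrative):

git clone hg::http://hg.example.org/project
git clone bzr::lp:project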

23 July, 2014 01:19AM by jmccrohan

July 22, 2014

Tim Retout

Cowbuilder and Tor

You've installed apt-transport-tor to help prevent targeted attacks on your system. Great! Now you want to build Debian packages using cowbuilder, and you notice these are still using plain HTTP.

If you're willing to fetch the first few packages without using apt-transport-tor, this is as easy as:

  • Add 'EXTRAPACKAGES="apt-transport-tor"' to your pbuilderrc.
  • Run 'cowbuilder --update'
  • Set 'MIRRORSITE=tor+http://http.debian.net/debian' in pbuilderrc.
  • Run 'cowbuilder --update' again.

Now any future builds should fetch build-dependencies over Tor.
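For reference, after that bootstrap dance the relevant pbuilderrc lines are simply:

EXTRAPACKAGES="apt-transport-tor"
MIRRORSITE=tor+http://http.debian.net/debian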

Unfortunately, creating a base.cow from scratch is more problematic. Neither 'debootstrap' nor 'cdebootstrap' actually relies on apt acquire methods to download files - they look at the URL scheme themselves to work out where to fetch from. I think it's a design point that they shouldn't need apt, anyway, so that you can debootstrap on non-Debian systems. I don't have a good solution beyond using some other means to route these requests over Tor.

22 July, 2014 09:31PM

hackergotchi for Neil Williams

Neil Williams

Validating ARMMP device tree blobs

I’ve done various bits with ARMMP and LAVA on this blog already, usually waiting until I’ve got all the issues ironed out before writing it up. However, this time I’m just going to do a dump of where it’s at, how it works and what can be done.

I’m aware that LAVA can seem mysterious at first, the package description has improved enormously recently, thanks to exposure in Debian: LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

The LAVA documentation has a glossary of terms like result bundle and all the documentation is also available in the lava-server-doc package.

The goal is to validate the dtbs built for the Debian ARMMP kernel. One of the most accessible ways to get the ARMMP kernel onto a board for testing is tftp using the Debian daily DI builds. Actually using the DI initrd can come later, once I’ve got a complete preseed config so that the entire install can be automated. (There are some things to sort out in LAVA too before a full install can be deployed and booted but those are at an early stage.) It’s enough at first to download the vmlinuz which is common to all ARMMP deployments, supply the relevant dtb, partner those with a minimal initrd and see if the board boots.

The first change comes when this process is compared to how boards are commonly tested in LAVA – with a zImage or uImage and all/most of the modules already built in. Packaged kernels won’t necessarily raise a network interface or see the filesystem without modules, so the first step is to extend a minimal initramfs to include the armmp modules.

apt install pax u-boot-tools

The minimal initramfs I selected is one often used within LAVA:

wget http://images.armcloud.us/lava/common/linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

It has a u-boot header added, as most devices using this would be using u-boot and this makes it easier to debug boot failures as the initramfs doesn’t need to have the header added, it can simply be downloaded to a local directory and passed to the board as a tftp location. To modify it, the u-boot header needs to be removed. Rather than assuming the size, the u-boot tools can (indirectly) show the size:

$ ls -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot
-rw-r--r-- 1 neil neil  5179571 Nov 26  2013 linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot

$ mkimage -l linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot 
Image Name:   linaro-image-minimal-initramfs-g
Created:      Tue Nov 26 22:30:49 2013
Image Type:   ARM Linux RAMDisk Image (gzip compressed)
Data Size:    5179507 Bytes = 5058.11 kB = 4.94 MB
Load Address: 00000000
Entry Point:  00000000

Referencing http://www.omappedia.com/wiki/Development_With_Ubuntu, the header size is the file size minus the data size listed by mkimage.

5179571 - 5179507 == 64

So, create a second file without the header:

dd if=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz.u-boot of=linaro-image-minimal-initramfs-genericarmv7a.cpio.gz skip=64 bs=1

decompress it

gunzip linaro-image-minimal-initramfs-genericarmv7a.cpio.gz

Now for the additions

dget http://ftp.uk.debian.org/debian/pool/main/l/linux/linux-image-3.14-1-armmp_3.14.12-1_armhf.deb

(Yes, this process will need to be repeated when this package is rebuilt, so I’ll want to script this at some point.)

dpkg -x linux-image-3.14-1-armmp_3.14.12-1_armhf.deb kernel-dir
cd kernel-dir

Pulling in the modules needed for most purposes comes thanks to a script written by the Xen folks. The set is basically disk, net, filesystems and LVM.

find lib -type d -o -type f -name modules.\*  -o -type f -name \*.ko  \( -path \*/kernel/lib/\* -o  -path \*/kernel/crypto/\* -o  -path \*/kernel/fs/mbcache.ko -o  -path \*/kernel/fs/ext\* -o  -path \*/kernel/fs/jbd\* -o  -path \*/kernel/drivers/net/\* -o  -path \*/kernel/drivers/ata/\* -o  -path \*/kernel/drivers/scsi/\* -o -path \*/kernel/drivers/md/\* \) | pax -x sv4cpio -s '%lib%/lib%' -d -w >../cpio
gzip -9f cpio

original Xen script (GPL-3+)

I found it a bit confusing that i is used for extract by cpio, but that’s how it is. Extract the minimal initramfs to a new directory:

sudo cpio -id < ../linaro-image-minimal-initramfs-genericarmv7a.cpio

Extract the new cpio into the same location. (Yes, I could do this the other way around and pipe the output of find into the already extracted location but that's for when I get a script to do this):

sudo cpio --no-absolute-filenames -id < ../ramfs/cpio

From the CPIO manual: use the newc format, the new (SVR4) portable format, which supports file systems having more than 65536 i-nodes (4294967295 bytes). The resulting archive comes out at around 41M:

find . | cpio -H newc -o > ../armmp-image.cpio
gzip -9 ../armmp-image.cpio

... and add the u-boot header back:

mkimage -A arm -T ramdisk -C none -d armmp-image.cpio.gz debian-armmp-initrd.cpio.gz.u-boot

Now what?

Now send the combination to LAVA and test it.

Results bundle for a local LAVA test job using this technique. (18k)

submission JSON - uses file:// references, so would need modification before being submitted to LAVA elsewhere.

complete log of the test job (72k)

Those familiar with LAVA will spot that I haven't optimised this job, it boots the ARMMP kernel into a minimal initramfs and then expects to find apt and other tools. Actual tests providing useful results would use available tools, add more tools or specify a richer rootfs.

The tests themselves are very quick (the job described above took 3 minutes to run) and don't need to be run particularly often, just once per board type per upload of the ARMMP kernel. LAVA can easily run those jobs in parallel and submission can be automated using authentication tokens and the lava-tool CLI. lava-tool can be installed without lava-server, so can be used in hooks for automated submissions.

Extensions

That's just one DTB and one board. I have a range of boards available locally:

* iMX6Q Wandboard (used for this test)
* iMX.53 Quick Start Board (needs updated u-boot)
* Beaglebone Black
* Cubie2
* CubieTruck
* arndale (no dtb?)
* pandaboard

Other devices available could involve ARMv7 devices hosted at www.armv7.com and validation.linaro.org - as part of a thank you to the Debian community for providing the OS which is (now) running all of the LAVA infrastructure.

That doesn't cover all of the current DTBs (and includes many devices which have no DTBs) so there is plenty of scope for others to get involved.

Hopefully, the above will help get people started with a suitable kernel+dtb+initrd and I'd encourage anyone interested to install lava-server and have a go at running test jobs based on those so that we start to build data about as many of the variants as possible.

(If anyone in DI fancies producing a suitable initrd with modules alongside the DI initrd for armhf builds, or if anyone comes up with a patch for DI to do that, it would help enormously.)

This will at least help Debian answer the question of what the Debian ARMMP package can actually support.

For help on LAVA, do read through the documentation and then come to us at #linaro-lava or the linaro-validation mailing list or file bugs in Debian: reportbug lava-server.

I'm going to DebConf14, so you can ask me there.

I'm giving one talk on the LAVA software and there will be a BoF on validation and CI in Debian.

22 July, 2014 09:18PM by Neil Williams

hackergotchi for Martin Pitt

Martin Pitt

autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) have now gone away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. has been documented in the manpages and the various READMEs. Enjoy!

22 July, 2014 06:16AM by pitti

hackergotchi for MJ Ray

MJ Ray

Three systems

There are three basic systems:

The first is slick and easy to use, but fiddly to set up correctly, and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible, and may require complete reinitialisation.

The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.

The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.

So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?

22 July, 2014 03:59AM by mjr

hackergotchi for Andrew Pollock

Andrew Pollock

[debian] Day 174: Kindergarten, startup stuff, tennis

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

22 July, 2014 01:23AM

Hideki Yamane

GeoIP support for installer is really nice


The RHEL7 installation notes say "The new graphical installer also generates automatic default settings where applicable. For example, if the installer detects a network connection, the user's general location is determined with GeoIP and sane suggestions are made for the default keyboard layout, language and timezone." but CentOS7 doesn't work as expected ;-)

GeoIP support in the Fedora 20 installer works well and it's pretty nice. Boot from live media and it shows the "Try Fedora" and "Install to Hard Drive" menu.

Then, select "Install" and... boom! It comes up in Japanese automagically, without any configuration!

I want the same feature for d-i, too.

22 July, 2014 12:10AM by Hideki Yamane (noreply@blogger.com)

July 21, 2014

hackergotchi for Chris Lamb

Chris Lamb

Disabling internet for specific processes with libfiu

My primary usecase is to prevent testsuites and build systems from contacting internet-based services. At best, such contact introduces an element of non-determinism; at worst, it means running malicious code.

I use Alberto Bertogli's libfiu for this, specifically the fiu-run utility, which is part of the fiu-utils package on Debian and Ubuntu.

Here's a contrived example, where I prevent Curl from talking to the internet:

$ fiu-run -x -c 'enable name=posix/io/net/connect' curl google.com
curl: (6) Couldn't resolve host 'google.com'

... and here's an example of it detecting two possibly internet-connecting tests:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ./manage.py test
[..]
----------------------------------------------------------------------
Ran 892 tests in 2.495s

FAILED (errors=2)
Destroying test database for alias 'default'...

Note that libfiu inherits all the drawbacks of LD_PRELOAD; in particular, it cannot limit child processes that call setuid binaries such as /bin/ping, as the dynamic linker ignores LD_PRELOAD for those:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ping google.com
PING google.com (173.194.41.65) 56(84) bytes of data.
64 bytes from lhr08s01.1e100.net (17.194.41.65): icmp_req=1 ttl=57 time=21.7 ms
64 bytes from lhr08s01.1e100.net (17.194.41.65): icmp_req=2 ttl=57 time=18.9 ms
[..]

Whilst it would certainly be more robust and flexible to use iptables—such as allowing localhost and other local socket connections but disabling all others—I gravitate towards this entirely userspace solution as it requires no setup and I can quickly modify it to block other calls on an ad-hoc basis. The list of other "modules" libfiu supports is viewable here.
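For comparison, an iptables version of that policy could look something like the following untested sketch, assuming the testsuite runs as a dedicated "builder" user:

# Let the builder user reach localhost only...
iptables -A OUTPUT -o lo -m owner --uid-owner builder -j ACCEPT
# ...and reject all of its other outbound traffic.
iptables -A OUTPUT -m owner --uid-owner builder -j REJECT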

21 July, 2014 06:26PM

Ian Campbell

sunxi-tools now available in Debian

I've recently packaged the sunxi tools for Debian. These are a set of tools produced by the Linux Sunxi project for working with the Allwinner "sunxi" family of processors. See the package page for details. Thanks to Steve McIntyre for sponsoring the initial upload.

The most interesting components of the package are the tools for working with the Allwinner processors' FEL mode. This is a low-level processor mode which implements a simple USB protocol allowing for initial programming of the device and recovery; it can be entered on boot (usually by pressing a special 'FEL button' somewhere on the device). It is thanks to FEL mode that most sunxi based devices are pretty much unbrickable.

The most common use of FEL is to boot over USB. In the Debian package the fel and usb-boot tools are named sunxi-fel and sunxi-usb-boot respectively but otherwise can be used in the normal way described on the sunxi wiki pages.

One enhancement I made to the Debian version of usb-boot is to integrate with the u-boot packages to allow you to easily FEL boot any sunxi platform supported by the Debian packaged version of u-boot (currently only Cubietruck, more to come I hope). To make this work we take advantage of Multiarch to install the armhf version of u-boot (unless your host is already armhf of course, in which case just install the u-boot package):

# dpkg --add-architecture armhf
# apt-get update
# apt-get install u-boot:armhf
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  u-boot:armhf
0 upgraded, 1 newly installed, 0 to remove and 1960 not upgraded.
Need to get 0 B/546 kB of archives.
After this operation, 8,676 kB of additional disk space will be used.
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Selecting previously unselected package u-boot:armhf.
(Reading database ... 309234 files and directories currently installed.)
Preparing to unpack .../u-boot_2014.04+dfsg1-1_armhf.deb ...
Unpacking u-boot:armhf (2014.04+dfsg1-1) ...
Setting up u-boot:armhf (2014.04+dfsg1-1) ...

With that done FEL booting a cubietruck is as simple as starting the board in FEL mode (by holding down the FEL button when powering on) and then:

# sunxi-usb-boot Cubietruck -
fel write 0x2000 /usr/lib/u-boot/Cubietruck_FEL/u-boot-spl.bin
fel exe 0x2000
fel write 0x4a000000 /usr/lib/u-boot/Cubietruck_FEL/u-boot.bin
fel write 0x41000000 /usr/share/sunxi-tools//ramboot.scr
fel exe 0x4a000000

Which should result in something like this on the Cubietruck's serial console:

U-Boot SPL 2014.04 (Jun 16 2014 - 05:31:24)
DRAM: 2048 MiB


U-Boot 2014.04 (Jun 16 2014 - 05:30:47) Allwinner Technology

CPU:   Allwinner A20 (SUN7I)
DRAM:  2 GiB
MMC:   SUNXI SD/MMC: 0
In:    serial
Out:   serial
Err:   serial
SCSI:  SUNXI SCSI INIT
Target spinup took 0 ms.
AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl SATA mode
flags: ncq stag pm led clo only pmp pio slum part ccc apst 
Net:   dwmac.1c50000
Hit any key to stop autoboot:  0 
sun7i# 

As more platforms become supported by the u-boot packages you should be able to find them in /usr/lib/u-boot/*_FEL.

There is one minor inconvenience which is the need to run sunxi-usb-boot as root in order to access the FEL USB device. This is easily resolved by creating /etc/udev/rules.d/sunxi-fel.rules containing either:

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", OWNER="myuser"

or

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", GROUP="mygroup"

To enable access for myuser or mygroup respectively. Once you have created the rules file, reload the udev rules to activate it:

# udevadm control --reload-rules

As well as the FEL mode tools the packages also contain a FEX (de)compiler. FEX is Allwinner's own hardware description language and is used with their Android SDK kernels and the fork of that kernel maintained by the linux-sunxi project. Debian's kernels follow mainline and therefore use Device Tree.

21 July, 2014 06:10PM

hackergotchi for Daniel Pocock

Daniel Pocock

Australia can't criticize Putin while competing with him

While much of the world is watching the tragedy of MH17 and contemplating the grim fate of 298 deceased passengers sealed into a refrigerated freight train in the middle of a war zone, Australia (with 28 victims on that train) has more than just theoretical skeletons in the closet too.

At this moment, some 153 Tamil refugees, fleeing the same type of instability that brought a horrible death to the passengers of MH17, have been locked up in the hull of a customs ship on the high seas. Windowless cabins and a supply of food not fit for a dog are part of the Government's strategy to brutalize these people for simply trying to avoid the risk of enhanced imprisonment(TM) in their own country.

Under international protocol for rescue at sea and political asylum, these people should be taken to the nearest port and given a humanitarian visa on arrival. Australia, however, is trying to lie and cheat their way out of these international obligations while squealing like a stuck pig about the plight of Australians in the hands of Putin. If Prime Minister Tony Abbott wants to encourage Putin to co-operate with the international community, shouldn't he try to lead by example? How can Australians be safe abroad if our country systematically abuses foreigners in their time of need?

21 July, 2014 05:00PM by Daniel.Pocock

hackergotchi for Steve Kemp

Steve Kemp

An alternative to devilspie/devilspie2

Recently I was updating my dotfiles, because I wanted to ensure that media players were "always on top" when launched, as this suits the way I work.

For many years I've used devilspie to script the placement of new windows, and once I googled a recipe I managed to achieve my aim.

However during the course of my googling I discovered that devilspie is unmaintained, and has been replaced by something using Lua - something I like.

I'm surprised I hadn't realized that the project was dead; although I've always hated the configuration syntax, it is something that I've used on a constant basis since I found it.

Unfortunately the replacement, despite using Lua, and despite being functional, just didn't seem to gel with me. So I figured "How hard could it be?".

In the past I've written software which iterated over all (visible) windows, and obviously I'm no stranger to writing Lua bindings.

However I did run into a snag. My initial implementation did two things:

  • Find all windows.
  • For each window invoke a lua script-file.

This worked. This worked well. This worked too well.

The problem I ran into was that if I wrote something like "Move window 'emacs' to desktop 2" that action would be applied, over and over again. So if I launched emacs, and then manually moved the window to desktop3 it would jump back!

In short I needed to add a "stop()" function, which would cause further actions against a given window to cease. (By keeping a linked list of windows-to-ignore, and avoiding processing them.)

The code did work, but it felt wrong to have an ever-growing linked-list of processed windows. So I figured I'd look at the alternative - the original devilspie used libwnck to operate. That library allows you to nominate a callback to be executed every time a new window is created.

If you apply your magic only on a window-create event - well you don't need to bother caching prior-windows.

So in conclusion :

I think my code is better than devilspie2 because it is smaller, simpler, and does things more neatly - for example instead of a function to get geometry and another to set it, I use one. (e.g. "xy()" returns the position of a window, but xy(3,3) sets it.).

kpie also allows you to run as a one-off job, and using the simple primitives I wrote a file to dump your windows, and their size/placement, which looks like this:

shelob ~/git/kpie $ ./kpie --single ./samples/dump.lua
-- Screen width : 1920
-- Screen height: 1080
..
if ( ( window_title() == "Buddy List" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1536,24 )
     size(384,1032 )
     workspace(2)
end
if ( ( window_title() == "feeds" ) and
     ( window_class() == "Pidgin" ) and
     ( window_application() == "Pidgin" ) ) then
     xy(1,24 )
     size(1536,1032 )
     workspace(2)
end
..

As you can see that has dumped all my windows, along with their current state. This allows a simple starting-point - Configure your windows the way you want them, then dump them to a script file. Re-run that script file and your windows will be set back the way they were! (Obviously there might be tweaks required.)

I used that starting-point to define a simple recipe for configuring pidgin, which is more flexible than what I ever had with pidgin, and suits my tastes.

Bug-reports welcome.

21 July, 2014 02:30PM

Tim Retout

apt-transport-tor 0.2.1

apt-transport-tor 0.2.1 should now be on your preferred unstable Debian mirror. It will let you download Debian packages through Tor.

New in this release: support for HTTPS over Tor, to keep up with people.debian.org. :)

I haven't mentioned it before on this blog. To get it working, you need to "apt-get install apt-transport-tor", and then use sources.list lines like so:

deb tor+http://http.debian.net/debian unstable main

Note the use of http.debian.net in order to pick a mirror near whichever Tor exit node the connection emerges from. Throughput is surprisingly good.

On the TODO list: reproducible builds? It would be nice to have some mirrors offer Tor hidden services, although I have yet to think about the logistics of this, such as how the load could be balanced (maybe a service like http.debian.net). I also need to look at how cowbuilder etc. can be made to play nicely with Tor. And then Debian installer support!

21 July, 2014 12:17PM

hackergotchi for Francois Marier

Francois Marier

Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including gnome-settings-daemon and gnome-screensaver.

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver

In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the pactl, xbacklight and gnome-power-statistics utilities to be installed.

Keyboard layout switcher

Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
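A minimal sketch of such a toggle script (the us/fr layout pair is an arbitrary example):

#!/bin/sh
# Toggle between two keyboard layouts based on the currently active one.
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap fr
else
    setxkbmap us
fi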

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/login.conf:

HandleLidSwitch=lock

Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
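A minimal sketch of what such a script can do, following the description above (xsel and pm-utils assumed installed):

#!/bin/sh
# Clear both X selections so clipboard contents don't survive the suspend.
xsel --clear --primary
xsel --clear --clipboard
# Flush pending writes to disk.
sync
# Lock the screen, then suspend to RAM.
gnome-screensaver-command --lock
sudo pm-suspend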

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

and then clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:

bindsym XF86Display exec /home/francois/bin/external-monitor

21 July, 2014 11:03AM

hackergotchi for DebConf team

DebConf team

Talks review and selection process. (Posted by René Mayorga)

Today we finished the talk selection process. We are very grateful to everyone who decided to submit talks and events for DebConf14.

If you have submitted an event, please check your email :). If you have not received any confirmation regarding your talk status, please contact us on talks@debconf.org

During the selection process, we bore in mind the number of talk slots available during the conference, as well as the need to maintain a balance among the different submitted topics. We are pleased to announce that we received a total of 115 events, of which 80 have been approved (69%). Approval means your event will be scheduled during the conference and you will have video coverage.

The list of approved talks can be found on the following link: https://summit.debconf.org/debconf14/all/

If you got an email telling you that your talk has been approved, but your talk is not listed, don’t panic: check the status on summit, and make sure to select a track. If you have track suggestions, please mail us and tell us about them.

This year, we expect to also have a sort of “unconference” schedule. This will take place during the designated “hacking time”. During that time the talk rooms will be empty, and ad hoc meetings can be scheduled on-site while we are at the conference. The method for booking a room for your ad hoc meeting will be decided and announced later, but is expected to be flexible (e.g. an open scheduling board, with bookings a day or less in advance). Please don’t abuse the system: bear in mind that space will be limited, and only book your event if you gather enough people to work on your idea.

Please make sure to read the email regarding your talk, and prepare yourself. :)

Time is ticking and we will be happy to meet you in Portland.

21 July, 2014 10:30AM by DebConf Organizers

hackergotchi for Keith Packard

Keith Packard

Glamorous Intel

Reworking Intel Glamor

The original Intel driver Glamor support was based on the notion that it would be better to have the Intel driver capture any fallbacks and try to make them faster than Glamor could do internally. Now that Glamor has reasonably complete acceleration, and its fallbacks aren’t terrible, this isn’t as useful as it once was; and because this uses Glamor in a weird way, it makes the Glamor code harder to maintain.

Fixing the Intel driver to not use Glamor in this way took a bit of effort; the UXA support is all tied into the overall operation of the driver.

Separating out UXA functions

The first task was to just identify which functions were UXA-specific by adding “_uxa” to their names. A couple dozen sed runs and now a bunch of the driver is looking better.

Next, a pile of UXA-specific functions were actually inside the non-UXA parts of the code. Those got moved out, and a new “intel_uxa.h” file was created to hold all of the definitions.

Finally, a few non UXA-specific functions were actually in the uxa files; those got moved over to the generic code.

Removing the Glamor paths in UXA

Each one of the UXA functions had a little piece of code at the top like:

if (uxa_screen->info->flags & UXA_USE_GLAMOR) {
    int ok = 0;

    if (uxa_prepare_access(pDrawable, UXA_GLAMOR_ACCESS_RW)) {
        ok = glamor_fill_spans_nf(pDrawable,
                      pGC, n, ppt, pwidth, fSorted);
        uxa_finish_access(pDrawable, UXA_GLAMOR_ACCESS_RW);
    }

    if (!ok)
        goto fallback;

    return;
}

Pulling those out shrank the UXA code by quite a bit.

Selecting Acceleration (or not)

The intel driver only supported UXA before; Glamor was really just a slightly different mode for UXA. I switched the driver from using a bit in the UXA flags to having an ‘accel’ variable which could be one of three options:

  • ACCEL_GLAMOR
  • ACCEL_UXA
  • ACCEL_NONE

I added ACCEL_NONE to give us a dumb frame buffer mode. That actually supports DRI3 so that we can bring up Mesa and run it under X before we have any acceleration code ready; avoiding a dependency loop when doing new hardware. All that it requires is a kernel that offers mode setting and buffer allocation.

Initializing Glamor

With UXA no longer supporting Glamor, it was time to plug the Glamor support into the top of the driver. That meant changing a bunch of the entry points to select appropriate Glamor or UXA functionality, instead of just calling into UXA. So, now we’ve got lots of places that look like:

        switch (intel->accel) {
#if USE_GLAMOR
        case ACCEL_GLAMOR:
                if (!intel_glamor_create_screen_resources(screen))
                        return FALSE;
                break;
#endif
#if USE_UXA
        case ACCEL_UXA:
                if (!intel_uxa_create_screen_resources(screen))
                        return FALSE;
        break;
#endif
        case ACCEL_NONE:
                if (!intel_none_create_screen_resources(screen))
                        return FALSE;
                break;
        }

Using a switch means that we can easily elide code that isn’t wanted in a particular build. Of course ‘accel’ is an enum, so places which are missing one of the required paths will cause a compiler warning.

It’s not all perfectly clean yet; there are piles of UXA-only paths still.

Making It Build Without UXA

The final trick was to make the driver build without UXA turned on; that took several iterations before I had the symbols sorted out appropriately.

I built the driver with various acceleration options and then tried to count the lines of source code. What I did was just list the source files named in the driver binary itself. This skips all of the header files and the render program source code, and ignores the fact that there are a bunch of #ifdef’s in the uxa directory selecting between uxa, glamor and none.

    Accel                    Lines          Size(B)
    -----------             ------          -------
    none                      7143            73039
    glamor                    7397            76540
    uxa                      25979           283777
    sna                     118832          1303904

    none legacy              14449           152480
    glamor legacy            14703           156125
    uxa legacy               33285           350685
    sna legacy              126138          1395231

The ‘legacy’ addition supports i810-class hardware, which is needed for a complete driver.

Along The Way, Enable Tiling for the Front Buffer

While hacking the code, I discovered that the initial frame buffer allocated for the screen was created without tiling (!) because a few parameters that depend on the GTT size were not initialized until after that frame buffer was allocated. I haven’t analyzed what effect this has on performance.

Page Flipping and Resize

Page flipping (or just flipping) means switching the entire display from one frame buffer to another. It’s generally the fastest way of updating the screen as you don’t have to copy any bits.

The trick with flipping is that a client hands you a random pixmap and you need to stuff that into the KMS API. With UXA, that’s pretty easy as all pixmaps are managed through the UXA API, which knows which underlying kernel BO is tied to each pixmap. Using Glamor, only the underlying GL driver knows the mapping. Fortunately (?), we have the EGL Image extension, which lets us take a random GL texture and turn it into a file descriptor for a DMA-BUF kernel object. So, we have this cute little dance:

    fd = glamor_fd_from_pixmap(screen, pixmap, &stride, &size);
    bo = drm_intel_bo_gem_create_from_prime(intel->bufmgr, fd, size);
    close(fd);
    intel_glamor_get_pixmap(pixmap)->bo = bo;

That last bit remembers the bo in some local memory so we don’t have to do this more than once for each pixmap. glamor_fd_from_pixmap ends up calling eglCreateImageKHR followed by gbm_bo_import and then a kernel ioctl to convert a prime handle into an fd. It’s all quite round-about, but it does seem to work just fine.
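
Unrolled, that chain looks roughly like the following (an illustrative sketch against the public EGL/GBM/DRM APIs; glamor’s real code spans several layers and checks every step for errors):

    /* GL texture -> EGLImage -> GBM bo -> prime (DMA-BUF) fd.
     * eglCreateImageKHR normally has to be looked up with
     * eglGetProcAddress; dpy, ctx, gbm, drm_fd and tex are assumed
     * to be set up already. */
    EGLImageKHR image = eglCreateImageKHR(dpy, ctx, EGL_GL_TEXTURE_2D_KHR,
                                          (EGLClientBuffer) (uintptr_t) tex,
                                          NULL);
    struct gbm_bo *gbo = gbm_bo_import(gbm, GBM_BO_IMPORT_EGL_IMAGE,
                                       image, GBM_BO_USE_RENDERING);

    struct drm_prime_handle args = {
        .handle = gbm_bo_get_handle(gbo).u32,
        .flags = DRM_CLOEXEC,
    };
    ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args);
    /* args.fd is the DMA-BUF file descriptor handed back to the caller */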

After I’d gotten Glamor mostly working, I tried a few OpenGL applications and discovered that flipping wasn’t working. That turned out to have an unexpected consequence: all full-screen applications would run flat-out instead of being limited to the frame rate. Present ‘recovers’ from a failed flip queue operation by immediately performing a CopyArea rather than waiting for vblank. This needs to get fixed in Present by having it re-queue the CopyArea for the right time. What I did in the intel driver was to add a bunch more checks for tiling mode, pixmap stride and other things, to catch pixmaps that were going to fail before the operation was queued and force them to fall back to CopyArea at the right time.
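
The shape of those checks is roughly this (a hypothetical sketch: the tiling and pitch fields are invented names, and the real driver tests more conditions):

    /* Pre-flight test before queueing a flip: if the pixmap can't be
     * scanned out, bail early so the fallback CopyArea happens at the
     * right time instead of after a failed flip. */
    static Bool
    intel_can_flip(intel_screen_private *intel, PixmapPtr pixmap)
    {
        struct intel_pixmap *priv = intel_glamor_get_pixmap(pixmap);

        if (!priv || !priv->bo)                    /* no kernel bo behind it */
            return FALSE;
        if (priv->tiling != I915_TILING_X)         /* scanout wants X tiling */
            return FALSE;
        if (pixmap->devKind != intel->front_pitch) /* stride must match */
            return FALSE;
        return TRUE;
    }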

The second adventure was with XRandR. Glamor has an API to fix up the screen pixmap for a new frame buffer, but that pulls the size of the frame buffer out of the pixmap instead of out of the screen. XRandR leaves the pixmap size set to the old screen size during this call; fixing that just meant getting the pixmap size set correctly before calling into glamor. I think glamor should get fixed to use the screen size rather than the pixmap size.
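
Fixing it on the driver side looks roughly like this (a sketch: new_width, new_height and new_stride stand in for the values RandR computed, and the glamor fix-up is represented by the driver entry point named earlier):

    /* Resize the screen pixmap header *before* calling into glamor,
     * so glamor reads the new frame buffer size rather than the
     * stale one. */
    screen->ModifyPixmapHeader(pixmap, new_width, new_height,
                               -1, -1, new_stride, NULL);
    intel_glamor_create_screen_resources(screen);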

Painting Root before Mode set

The X server has generally done initialization in one order:

  1. Create root pixmap
  2. Set video modes
  3. Paint root window

Recently, we’ve added a ‘-background none’ option to the X server which causes it to set the root window background to none and have the driver fill in that pixmap with whatever contents were on the screen before the X server started.

In a pre-Glamor world, that was done by hacking the video driver to copy the frame buffer console contents to the root pixmap as it was created. The trouble here is that the root pixmap is created long before the upper layers of the X server are ready for drawing, so you can’t use the core rendering paths. Instead, UXA had kludges to call directly into the acceleration functions.

What we really want though is to change the order of operations:

  1. Create root pixmap
  2. Paint root window
  3. Set video mode

That way, the normal root window painting operation will take care of getting the image ready before that pixmap is ever used for scanout. I can use regular core X rendering to get the original frame buffer contents into the root window, and even if we’re not using -background none and are instead painting the root with some other pattern (like the root weave), I get that presented without an intervening black flash.

That turned out to be really easy: just delay the call to I830EnterVT (which sets the modes) until the server is actually running. That required one additional kludge: I needed to tell the DIX-level RandR functions about the new modes, because the mode setting operation used during server init doesn’t call up into RandR; RandR reads the current configuration only after the screen has been initialized, which is when the modes used to be set.

Calling xf86RandR12CreateScreenResources does the trick nicely. Getting the root window bits from fbcon, setting video modes and updating the RandR/Xinerama DIX info is now all done from the BlockHandler the first time it is called.
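
Put together, the first BlockHandler call ends up doing roughly the following (a sketch: the wrapper details, the need_modeset flag and intel_copy_fbcon() are invented names for illustration):

    /* Defer the bring-up work until the server is running: the first
     * BlockHandler invocation paints the root, sets the modes, then
     * updates the DIX RandR/Xinerama state. */
    static void
    intel_block_handler(ScreenPtr screen, void *timeout, void *readmask)
    {
        ScrnInfoPtr scrn = xf86ScreenToScrn(screen);
        intel_screen_private *intel = intel_get_screen_private(scrn);

        if (intel->need_modeset) {
            intel->need_modeset = FALSE;
            intel_copy_fbcon(scrn);      /* root window bits from fbcon */
            xf86SetDesiredModes(scrn);   /* the delayed mode set */
            xf86RandR12CreateScreenResources(screen); /* update DIX info */
        }
        /* ... then chain to the wrapped BlockHandler as usual ... */
    }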

Performance

I ran the current glamor version of the intel driver with the master branch of the X server; compared with my last Glamor performance evaluation, there were no huge differences aside from GetImage. The reason is that UXA/Glamor never called Glamor’s image functions, and the UXA GetImage path is pretty slow. Using Mesa’s image download turns out to have a huge performance benefit:

1: UXA/Glamor from April
2: Glamor from today

       1                 2                 Operation
------------   -------------------------   -------------------------
     50700.0        56300.0 (     1.110)   ShmGetImage 10x10 square 
     12600.0        26200.0 (     2.079)   ShmGetImage 100x100 square 
      1840.0         4250.0 (     2.310)   ShmGetImage 500x500 square 
      3290.0          202.0 (     0.061)   ShmGetImage XY 10x10 square 
        36.5          170.0 (     4.658)   ShmGetImage XY 100x100 square 
         1.5           56.4 (    37.600)   ShmGetImage XY 500x500 square 
     49800.0        50200.0 (     1.008)   GetImage 10x10 square 
      5690.0        19300.0 (     3.392)   GetImage 100x100 square 
       609.0         1360.0 (     2.233)   GetImage 500x500 square 
      3100.0          206.0 (     0.066)   GetImage XY 10x10 square 
        36.4          183.0 (     5.027)   GetImage XY 100x100 square 
         1.5           55.4 (    36.933)   GetImage XY 500x500 square

Running UXA from today, the situation is even more dire; I suspect that enabling tiling has made CPU reads through the GTT even worse than before.

1: UXA today
2: Glamor today

       1                 2                 Operation
------------   -------------------------   -------------------------
     43200.0        56300.0 (     1.303)   ShmGetImage 10x10 square 
      2600.0        26200.0 (    10.077)   ShmGetImage 100x100 square 
       130.0         4250.0 (    32.692)   ShmGetImage 500x500 square 
      3260.0          202.0 (     0.062)   ShmGetImage XY 10x10 square 
        36.7          170.0 (     4.632)   ShmGetImage XY 100x100 square 
         1.5           56.4 (    37.600)   ShmGetImage XY 500x500 square 
     41700.0        50200.0 (     1.204)   GetImage 10x10 square 
      2520.0        19300.0 (     7.659)   GetImage 100x100 square 
       125.0         1360.0 (    10.880)   GetImage 500x500 square 
      3150.0          206.0 (     0.065)   GetImage XY 10x10 square 
        36.1          183.0 (     5.069)   GetImage XY 100x100 square 
         1.5           55.4 (    36.933)   GetImage XY 500x500 square

Of course, this is all just x11perf, which doesn’t represent real applications at all well. However, there are applications which end up doing more GetImage than would seem reasonable, and it’s nice to have this kind of speed up.

Status

I’m running this on my crash box to get some performance numbers and continue testing it. I’ll switch my desktop over when I feel a bit more comfortable with how it’s working. But, I think it’s feature complete at this point.

Where’s the Code

As usual, the code is in my personal repository. It’s on the ‘glamor’ branch.

git://people.freedesktop.org/~keithp/xf86-video-intel  glamor

21 July, 2014 07:39AM