January 28, 2015


Andrea Veri

The GNOME Infrastructure Apprentice Program

Many times I have seen someone join the #sysadmin IRC channel and, after spending about five minutes explaining their skills and background, ask to join the team, convinced they were the right person for the position. It was always very disappointing to have to reject these requests, simply because we didn’t have the infrastructure in place to let new people join the rest of the team with limited privileges.

With the introduction of FreeIPA and more fine-grained ACLs (and hiera-eyaml-gpg for keeping tokens, secrets and passwords out of Puppet itself), we are glad to announce the launch of the “GNOME Infrastructure Apprentice Program” (from now until the end of this post, just “Program”). If you are familiar with the Fedora Infrastructure and how it works, you may already know what this is about. If not, read on.

The Program will allow apprentices to join the Sysadmin Team with a limited set of privileges, which mainly consist of access to the Puppet repository and all the stored configuration files that run the machines powering the GNOME Infrastructure every day. Once accepted into the Program, apprentices will be able to submit patches for review to the team and finally see their work merged into the production environment, provided the proposed changes meet expectations and address review comments.
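In practice the contribution loop is the usual Git patch workflow. A rough sketch (the actual repository location and branch conventions are whatever the team documents):

git clone <puppet repository URL>
cd puppet
git checkout -b my-change        # work on a topic branch
# edit manifests, then:
git commit -a -m "Describe the change"
git format-patch origin/master   # produces patches to send for review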

While the Program is open to everyone, we have some prerequisites in place. The interested person should be:

  1. Part of an existing FOSS community
  2. Familiar with how a FOSS Project works behind the scenes
  3. Familiar with popular tools like Puppet, Git
  4. Familiar with RHEL as the OS of choice
  5. Familiar with popular sysadmin tools, software and procedures
  6. Eager to learn new things, engage in constructive discussions with the team, and provide feedback and new ideas

If you meet all the prerequisites and would like to join, follow these steps:

  1. Subscribe to the gnome-infrastructure and infrastructure-announce mailing lists
  2. Join the #sysadmin IRC channel on irc.gnome.org
  3. Send an introductory e-mail to the gnome-infrastructure mailing list stating who you are, what your past experiences are, and what your plans as an apprentice would be
  4. Once that e-mail has been sent, an existing Sysadmin Team member will evaluate your application and follow up with you, introducing you to the Program

More information about the Program is available here.

28 January, 2015 04:59PM by Andrea Veri


Dirk Eddelbuettel

RInside 0.2.12

A new release 0.2.12 of RInside is now on CRAN. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by the Rcpp integration package.
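For anyone who has not seen RInside before, the canonical embedded-R example is only a few lines; this sketch mirrors the standard/rinside_sample0.cpp example shipped with the package:

#include <RInside.h>                    // for the embedded R via RInside

int main(int argc, char *argv[]) {
    RInside R(argc, argv);              // create an embedded R instance
    R["txt"] = "Hello, world!\n";       // assign a string to the R symbol 'txt'
    R.parseEvalQ("cat(txt)");           // evaluate the expression, ignoring any return value
    return 0;
}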

This release adds new examples which were contributed by Christian Authmann, plus some updates and fixes including one requested by the CRAN maintainers regarding GNU extensions to Makefile. The NEWS extract below has more details.

Changes in RInside version 0.2.12 (2015-01-27)

  • Several new examples have been added (with most of the work done by Christian Authmann):

    • standard/rinside_sample15.cpp shows how to create a lattice plot (following a StackOverflow question)

    • standard/rinside_sample16.cpp shows object wrapping, and exposing of C++ functions

    • standard/rinside_sample17.cpp does the same via C++11

    • sandboxed_servers/ adds an entire framework of client/server communication outside the main process (but using a subset of supported types)

  • standard/rinside_module_sample9.cpp was repaired following a fix to InternalFunction in Rcpp

  • For the seven example directories which contain a Makefile, the Makefile was renamed GNUmakefile to please R CMD check as well as the CRAN Maintainers.

CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 January, 2015 02:24PM

Russell Coker

SE Linux Play Machine Over Tor

I work on SE Linux to improve security for all computer users. I think that my work has gone reasonably well in terms of directly improving the security of computers and helping developers find and fix certain types of security flaws in apps. But a large part of the security problems we have at the moment are related to subversion of Internet infrastructure. The Tor project is a significant step towards addressing such problems. So to achieve my goals in improving computer security I have to support the Tor project. So I decided to put my latest SE Linux Play Machine online as a Tor hidden service. There is no real need for it to be hidden (for the record it’s in my bedroom), but it’s a learning experience for me and for everyone who logs in.

A Play Machine is what I call a system with root as the guest account with only SE Linux to restrict access.

Running a Hidden Service

A Hidden Service in Tor is just a cryptographically protected address that forwards to a regular TCP port. It’s not difficult to set up and the Tor project has good documentation [1]. On Debian the file to edit is /etc/tor/torrc.

I added the following 3 lines to my torrc to create a hidden service for SSH. I also forwarded port 80 for test purposes, because web browsers are easier to configure for SOCKS proxying than ssh.

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 22 192.168.0.2:22
HiddenServicePort 80 192.168.0.2:22

Generally when setting up a hidden service you want to avoid using an IP address that gives anything away. So it’s a good idea to run a hidden service on a virtual machine that is well isolated from any public network. My Play machine is hidden in that manner not for secrecy but to prevent it being used for attacking other systems.

SSH over Tor

Howtoforge has a good article on setting up SSH with Tor [2]. That has everything you need for setting up Tor for a regular ssh connection, but the tor-resolve program only works for connecting to services on the public Internet. By design the .onion addresses used by Hidden Services have no mapping to anything that resembles an IP address, and tor-resolve breaks on them. I believe that the fact that tor-resolve breaks things in this situation is a bug, and I have filed Debian bug report #776454 requesting that tor-resolve allow such things to just work [3].

Host *.onion
ProxyCommand connect -5 -S localhost:9050 %h %p

I use the above ssh configuration (which can go in ~/.ssh/config or /etc/ssh/ssh_config) to tell the ssh client how to deal with .onion addresses. I also had to install the connect-proxy package which provides the connect program.

ssh root@zp7zwyd5t3aju57m.onion
The authenticity of host ‘zp7zwyd5t3aju57m.onion ()
ECDSA key fingerprint is 3c:17:2f:7b:e2:f6:c0:c2:66:f5:c9:ab:4e:02:45:74.
Are you sure you want to continue connecting (yes/no)?

I now get the above message when I connect; the ssh developers have dealt with connecting via a proxy that doesn’t have an IP address.

Also see the general information page about my Play Machine, that information page has the root password [4].

28 January, 2015 07:44AM by etbe

January 27, 2015

Laura Arjona

Upgrading my computers to Debian Jessie: Husband’s laptop (Acer Aspire 5250)

This is an old laptop, with AMD E-300 processor, 6 GB RAM, Radeon HD 6310 VGA and Atheros AR9485 wireless network adapter.

It was running Windows 7 (preinstalled). The hard disk failed, and I put the hard disk of another laptop (a broken Acer Aspire One D255) in it. Surprisingly, the Windows 7 on it booted (after some self-configuration that took quite long), but it was a 32-bit Windows 7 Home, so it only recognized 4 GB of RAM. That was the perfect excuse to convince my husband to install Debian on the laptop and begin his transition to a free OS. Yay!

I installed Debian Jessie from scratch last summer. Everything went well (the installer ran fine, 8 months before its RC1 release; congrats, Debian-boot team!).

I needed the non-free radeon driver for the graphical display :/

Jessie is running the GNOME 3 desktop, and over these months I’ve watched the transition to the 3.14 version and, later, the integration of the “Lines” theme (by Juliette Belin), which I like very much.

I have problems watching high-quality videos: in every player that I tried (VLC, Totem, mplayer) the audio and video are not synced, and the video sometimes freezes. I’m almost sure that the problem is what mplayer says: “Your system is too SLOW to play this!”.

I tried to install the ATI non-free driver for better performance, but after successfully installing it and rebooting, GNOME would not start (I got a black screen, no gdm greeting me). I could log in on tty2, though. I don’t know if I did something wrong or how to solve the problem, and I didn’t want to waste time, so I uninstalled it and returned to the non-free firmware that goes into the Linux kernel. For now, when I need to watch a video that gives me those problems, I upload the file to my GNU MediaGoblin site, or use WinFF to reduce its size/quality.

Overall impression

Fine! Both my husband and I are very happy.

The installation went really well.

I’m not an expert GNOME user, but I find it easy and intuitive, and he found it easy too.

My husband uses the computer to surf the web, watch some videos and online series (we had to install the non-free Flash plugin from Adobe #grr), read mail in the browser, write something in LibreOffice and print it (hey! we just plugged in the printer/scanner and it worked, no need to install drivers!), scan some images and send them by email… I set Debian as the default in GRUB, and the switch from Windows has been very natural for him (he was already using Firefox and LibreOffice in Windows; he still says “I’m a Windows user” although he has been using only Debian for months!).

He bought an iPhone 4S (#grr!) and I tried to connect it as shown in the corresponding Debian wiki page, but it didn’t work (I got a “segmentation fault” when connecting the phone). However, it is recognized by Shotwell and we can copy all the photos and videos to the computer, which is what we wanted to do. So no problem on that side, either.

In conclusion, one more computer at home running Debian (“future stable”), and we don’t run Windows at home anymore :)


Filed under: My experiences and opinion Tagged: Debian, English, Moving into free software

27 January, 2015 11:39PM by larjona

Upgrading my computers to Debian Jessie

Until now, I usually ran Debian stable at work (on my desktop PC) and stable or testing at home on my laptop. I would upgrade to testing during the freeze, and then stay on testing (future stable) or stable (once published) until the next freeze.

I have changed this ‘conservative’ pattern. I’ve been running Jessie for many months now, and here I’ll document my different experiences with the computers that I use.

Upgrade or clean install?

I decided to upgrade my computers instead of making a clean install (except on the ones that were not running Debian).

Although the upgrade process has gone fine, I’m still not sure which approach is best for my needs. Installing from scratch forces me to re-read the feature lists of the different pieces of software and choose what fits best now (not what I happened to be using some years ago). And maybe I just don’t need that non-free driver anymore because there’s a free replacement already; the installer is wise. OTOH, upgrading is easier and quicker, and I keep all my software and configuration (and my rubbish) in place; nothing is lost.

The computers

Here I will link the blog posts of each computer that I upgrade, when I finish writing the corresponding articles:

  • Husband’s laptop (Acer 5250): Clean install – Done, and OK!
  • My laptop (Compaq Mini 110c): Upgrade – Done and OK!
  • Home server (HP Microserver N54L G7): Upgrade – Done and OK!
  • PC at work (motherboard Asus P5KPL-AM-SE): Upgrade – Done, some issues.
  • Mini-laptop Airis Kira N7000 (ARM board, 128MB RAM) – Clean install – Pending

Filed under: My experiences and opinion Tagged: Debian, English, Moving into free software

27 January, 2015 10:40PM by larjona

Matthias Klumpp

AppStream 0.8 released!

Yesterday I released version 0.8 of AppStream, the cross-distribution standard for software metadata, which is currently used by GNOME-Software, Muon and Apper to display rich metadata about applications and other software components.

 What’s new?

The new release contains some tweaks to AppStream's documentation, and extends the specification with a few more tags and refinements. For example, we now recommend sizes for screenshots. The recommended sizes are the ones GNOME-Software already uses today, and it is a good idea to ship those to make software centers look great, as other SCs are planning to use them as well. Normal sizes as well as sizes for HiDPI displays are defined. This change affects only the distribution-generated data; the upstream metadata is unaffected by this (the distro-specific metadata generator will resize the screenshots anyway).

Another addition to the spec is the introduction of an optional <source_pkgname/> tag, which holds the name of the source package that the packages defined in <pkgname/> tags are built from. This is mainly for internal use by the distributor, e.g. it can decide to use this information to link to internal resources (like bug trackers, package-watch, etc.). It may also be used by software-center applications as additional information to group software components.

Furthermore, we introduced a <bundle/> tag for future use with 3rd-party application installation solutions. The tag notifies a software-installer about the presence of a 3rd-party application bundle, and provides the necessary information on how to install it. In order to do that, the software-center needs to support the respective installation solution. Currently, the Limba project and Xdg-App bundles are supported. For software managers, it is a good idea to implement support for 3rd-party app installers as soon as the solutions are ready; both projects are currently under heavy development. The new tag is already used by Limba, which is the reason why it depends on the latest AppStream release.
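For illustration, a component entry in the distribution-generated XML using the new tags could look roughly like this (tag names follow the 0.8 specification; the component and all its values are invented):

<component type="desktop">
  <id>foobar.desktop</id>
  <name>FooBar</name>
  <pkgname>foobar-gui</pkgname>
  <source_pkgname>foobar</source_pkgname>
  <bundle type="limba">foobar-1.0.2</bundle>
</component>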

How do I get it?

All AppStream libraries (libappstream, libappstream-qt and libappstream-glib) support the 0.8 specification in their latest versions, so if you are using one of these, you don't need to do anything. For Debian, the DEP-11 spec is being updated at the moment, and the changes will land in the DEP-11 tools soon.

Improve your metadata!

This call goes especially to many KDE projects! Getting good data is partly a task for the distributor, since packaging issues can result in incorrect or broken data, screenshots need to be properly resized, etc. However, flawed upstream data can also prevent software from being shown, since software with broken or missing data will not be incorporated into the distro's AppStream XML data file.

Richard Hughes of Fedora has created a nice overview of software failing to be included. You can see the failed-list here; the data can be filtered by desktop environment etc. For KDE projects, a Comment= field is often missing in their .desktop files (or a <summary/> tag needs to be added to their AppStream upstream XML file). Keep in mind that you are not only helping Fedora by fixing these issues, but also all other distributions consuming the metadata you ship upstream.
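Checking for this is quick; the relevant key is a single line in the application's .desktop file (a hypothetical example):

[Desktop Entry]
Type=Application
Name=KFoo
Comment=A one-line summary that software centers can display
Exec=kfoo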

For Debian, we will have a similar overview soon, since it is also a very helpful tool to find packaging issues.

If you want more information on how to improve your upstream metadata, and what new metadata should look like, take a look at the quickstart guide in the AppStream documentation.

27 January, 2015 04:48PM by Matthias

Jingjie Jiang

Yet another post.

In the middle of OPW internship

I originally thought that taking part in the FOSS OPW was a great chance to lift my coding skills, one I shouldn’t miss. As time passes by, I now have some new thoughts about it.

# 1 It does improve your coding skill.

Zack, Matthieu and I often have discussions about coding style. For example, once Zack said, “For code like this, you should explicitly use an if/else clause, not if-return.” I was totally unaware of this sort of issue; actually, I didn’t even know what to call the problem. Matthieu gave me a detailed FYI link on it in no time.

Besides, my thoroughness of thinking has also been trained. Recently, I was fixing a “HTTP GET Method ?suite=suite-name” issue. It’s a trivial task. And you know most trivial tasks require lots of scattered modifications across the code base. I had fixed most places, such as the “/src/packagename” and “/search/” pages. Zack did a thorough review, and pointed out that the pages rendered by “/prefix” had some malformed URLs in the HTML. Waiiiit, I should have noticed it. But somehow I missed it. Maybe because my mind was wandering at that time? This made me think that I should have a thorough view of what to do before getting my hands dirty. Or more preferably, if I could write down exactly what I want to achieve before coding, then silly problems definitely wouldn’t occur. This may sound a little bit like TDD. ;).

# 2 It makes you look like a (not-that-good) ninja.

I use a MacBook. It’s not my fault! I’ve tried several times, but I have never successfully found a laptop that is not capable of boiling eggs when running Debian (especially KDE+Debian). So I had no choice but to switch to OS X. The development of Debsources happens on a remote Ubuntu LTS (now Debian sid, haha) virtual machine. Of course I have to install all the dependencies on my own, e.g., Postgres, set up port-forwarding, e.g., ssh -D, and write automation shell scripts, e.g., dash, but more importantly, I am forced to live under the dark terminal with no GUI. You know the feeling when pain hurts? Yes, exactly! But I survived. What shall I call myself now? A dedicated with-a-lot-of-useless-plugins-installed vimmer? A fond-of-fancy-windows tmux-er? Yep, both. I finally found a comfort zone under the black-white-blinking screen. I wonder how people feel when they see a girl hanging out in the library, facing a full-screened black console, typing at a speed of 140 wpm (yeah, I am kidding). I don’t know, but please don’t call me a geek. Show me your respect: I am a ninja!

# 3 It tells you communication is the most important.

I bet anyone who has participated in a group-based project understands what I mean. From one perspective, communication helps eliminate misunderstanding, so I won’t spend a whole day doing useless work only to find out in the end that it totally doesn’t meet the requirements. From another, it speeds up your learning process. I often have problems with git, so in an email I will complain when I mess up the git repo. After a short while, my dear mentors will reply in detail on how to do the git stuff correctly.

My OPW journey is cool! ;).


27 January, 2015 02:21PM by sophiejjj


Thomas Goirand

OpenStack debian image available from cdimage.debian.org

About a year and a half after I started writing the openstack-debian-images package, I’m very happy to announce to everyone that, thanks to Steve McIntyre’s help, the official OpenStack Debian image is now generated at the same time as the official Debian CD ISO images. If you are a cloud user, if you use OpenStack on a private cloud, or if you are a public cloud operator, then you may want to download the weekly build of the OpenStack image from here:

http://cdimage.debian.org/cdimage/openstack/testing/

Note that for the moment, only the amd64 arch is available, but I don’t think this is a problem: so far, I haven’t found any public cloud provider offering anything other than the Intel 64-bit arch. Maybe this will change over the course of this year and we will need arm64, but that can be added later on.
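As a usage sketch, importing the image into an OpenStack cloud is a matter of downloading it and registering it with glance (the exact image filename under the URL above may differ):

$ wget http://cdimage.debian.org/cdimage/openstack/testing/debian-testing-openstack-amd64.qcow2
$ glance image-create --name "debian-testing-amd64" \
    --disk-format qcow2 --container-format bare \
    --is-public True --file debian-testing-openstack-amd64.qcow2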

Now, for later plans: I still have 2 bugs to fix in the openstack-debian-images package (the default 1 GB size is now just a bit too small for Jessie, and the script exits with zero in case of error), but nothing that prevents its use right now. I don’t think it will be a problem for the release team to accept these small changes before Jessie is out.

When generating the image, Steve also wants to generate a sources.tar.gz containing all the source packages that we include in the image. He already has the script (which is used as a hook script when running the build-openstack-debian-image script), and I am planning to add it as documentation in /usr/share/doc/openstack-debian-images.

Last, it would probably be a good idea to install grub-xen, as Ian Campbell suggested, to make it possible for this image to run on AWS or other Xen-based clouds. I would need to be able to test this, though. If you can contribute this kind of test, please get in touch.

Feel free to play with all of this, and customize your Jessie images if you need to. The script is (on purpose) very small (around 400 lines of shell script) and easy to understand (no functions; it’s mostly linear from top to bottom of the file), so it is also very easy to hack. Plus it has a convenient hook-script facility where you can do all sorts of things (copying files, apt-get installing stuff, running things in the chroot, etc.).

Again, thanks so much to Steve for working on using the script during the CD builds. It fills me with joy that Debian finally has official images for OpenStack.

27 January, 2015 12:30PM by Goirand Thomas


Steve Kemp

Recording gym-visits on Linux.

I go to the gym every couple of days. I lift things up, then put them down, and sometimes I repeat this process another 30 times. When I'm done I write down what I've done, how many times I did the lifty-droppy thing, and so on.

I want to see pretty graphs. I want to have records of different things. I guess I just need some simple text-boxes:

   deadlift  3 x 7 @ 210lbs.

etc. Sometimes I use machines so I'd say instead:

  converging seated-row  3 x 8 @ 150lbs

Anyway, that's it. I want a simple GUI, a bit like a spreadsheet, where I can easily add rows for each session. (A session might have 10-15 exercises in it, so not many.) I imagine some kind of SQLite database for the back-end. Or CSV. Either works.
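As a sketch of how little back-end such a tool actually needs (the table and column names are invented for illustration), the SQLite side could be as simple as:

import sqlite3

# One row per exercise performed in a session.
conn = sqlite3.connect("gym.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS exercise (
        session_date TEXT,     -- e.g. "2015-01-27"
        name         TEXT,     -- e.g. "deadlift"
        sets         INTEGER,  -- e.g. 3
        reps         INTEGER,  -- e.g. 7
        weight_lbs   REAL      -- e.g. 210
    )
""")
conn.execute("INSERT INTO exercise VALUES (?, ?, ?, ?, ?)",
             ("2015-01-27", "deadlift", 3, 7, 210))
conn.commit()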

Writing GUI software is hard. I guess I should look at GTK or Qt over the next few days and see whether it would be easier to do it online via a jQuery + CGI system instead. To be honest I expect doing it "online" is liable to be more popular, but I think a desktop toy application is just as useful.

27 January, 2015 12:00AM

January 26, 2015


Daniel Pocock

Get your Nagios issues as an iCalendar feed

The other day I demonstrated how to get your Github issues/bugs as an iCalendar feed.

I'm planning to take this concept further and I just whipped up another Python script, exposing Nagios issues as an iCalendar feed.

The script is nagios-icalendar. Usage is explained concisely in the README file, it takes just minutes to get up and running.

One interesting feature is that you can append a contact name to the URL and just get the issues for that contact, e.g.:

http://nagios-server.example.org:5001?contact=daniel

Screenshots

Here I demonstrate using Mozilla Lightning / Iceowl-extension to aggregate issues from Nagios, the Fedora instance of Bugzilla and Lumicall's Github issues into a single to-do list.

26 January, 2015 09:37PM by Daniel.Pocock

Vincent Fourmond

Linux kernels for a macbook pro retina

I was unhappy with the recent Linux (3.14-3.16, and I think 3.17 too) kernels on my MacBook Pro Retina (15"), for a few reasons:

  • the nouveau graphics driver was not handling the graphics card very well (hangs when using the DRM after putting the computer to sleep once, garbage screen in various apps, slow 3D rendering), and I could never get the proprietary nvidia drivers to work (they would give a blank screen at boot time)
  • very unstable wireless (at least with my box at home, though not with all the ones I've tried)
  • and the most painful was the need to recompile the kernel by hand, with the following modifications from the stock Debian kernel:
    -CONFIG_X86_SYSFB=y
    +# CONFIG_X86_SYSFB is not set
    -CONFIG_FB_SIMPLE=y
    +# CONFIG_FB_SIMPLE is not set
    
    Without these modifications, the screen would be garbled some 5-6 seconds after boot (but SSH would still work, as far as I remember).
The latest 3.18-trunk kernel fixes essentially all the above problems, which is just great. Kudos to everyone involved! Hope it helps...

26 January, 2015 08:47PM by Vincent Fourmond (noreply@blogger.com)


Tanguy Ortolo

Scale manufacturers…

Dear manufacturers of kitchen scales, could you please stop considering your clients as idiots, and start developing useful features?

Liquid measurement: this is one feature that is available on almost every electronic scale. Except it is completely useless to people who use the metric system, as all it does is replace the usual display in grammes with centilitres, dividing the number on display by ten. Thank you, but no person who has been to school in a country that uses the metric system needs electronic assistance to determine the volume corresponding to a given weight of water; and for people who have not, a simple note written on the scale, stating that “for water or milk, divide the weight in grammes by ten to get the volume in centilitres”, should be enough.

Now, there is still one thing that an electronic scale could be useful for, which is determining the volume of liquids other than water (density 1 g/ml) or milk (density approximately equal to 1 g/ml), most importantly oil (density approximately 0.92 g/ml for edible oils like sunflower, peanut, olive and canola).
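That is the conversion actually worth automating: V = m / ρ, so for instance 200 g of olive oil corresponds to 200 / 0.92 ≈ 217 ml, which, unlike dividing by ten, does justify electronic assistance.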

26 January, 2015 01:54PM by Tanguy


Francois Marier

Using unattended-upgrades on Rackspace's Debian and Ubuntu servers

I install the unattended-upgrades package on almost all of my Debian and Ubuntu servers in order to ensure that security updates are automatically applied. It works quite well, except that I still need to log in manually to upgrade my Rackspace servers whenever a new rackspace-monitoring-agent is released, because it comes from a separate repository that's not covered by unattended-upgrades.

It turns out that unattended-upgrades can be configured to automatically upgrade packages outside of the standard security repositories, but it's not very well documented, and the few relevant answers you can find online still use the old whitelist syntax.

Initial setup

The first thing to do is to install the package if it's not already done:

apt-get install unattended-upgrades

and to answer yes to the automatic stable update question.

If you don't see the question (because your debconf threshold is too high -- change it with dpkg-reconfigure debconf), you can always trigger the question manually:

dpkg-reconfigure -plow unattended-upgrades

Once you've got that installed, the configuration file you need to look at is /etc/apt/apt.conf.d/50unattended-upgrades.

Whitelist matching criteria

Looking at the unattended-upgrades source code, I found the list of things that can be matched against in the whitelist:

  • origin (shortcut: o)
  • label (shortcut: l)
  • archive (shortcut: a)
  • suite (which is the same as archive)
  • component (shortcut: c)
  • site (no shortcut)

You can find the value for each of these fields in the appropriate _Release file under /var/lib/apt/lists/.
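For example, a quick way to dump the matchable fields from every configured repository (a sketch; adjust the pattern as needed):

$ grep -E '^(Origin|Label|Suite|Codename|Components):' /var/lib/apt/lists/*_Release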

Note that the value of site is the hostname of the package repository, which is also present in the first part of these *_Release filenames (stable.packages.cloudmonitoring.rackspace.com in the example below).

In my case, I was looking at the following inside /var/lib/apt/lists/stable.packages.cloudmonitoring.rackspace.com_debian-wheezy-x86%5f64_dists_cloudmonitoring_Release:

Origin: Rackspace
Codename: cloudmonitoring
Date: Fri, 23 Jan 2015 18:58:49 UTC
Architectures: i386 amd64
Components: main
...

which means that, in addition to site, the only things I could match on were origin and component since there are no Suite or Label fields in the Release file.

This is the line I ended up adding to my /etc/apt/apt.conf.d/50unattended-upgrades:

 Unattended-Upgrade::Origins-Pattern {
         // Archive or Suite based matching:
         // Note that this will silently match a different release after
         // migration to the specified archive (e.g. testing becomes the
         // new stable).
 //      "o=Debian,a=stable";
 //      "o=Debian,a=stable-updates";
 //      "o=Debian,a=proposed-updates";
         "origin=Debian,archive=stable,label=Debian-Security";
         "origin=Debian,archive=oldstable,label=Debian-Security";
+        "origin=Rackspace,component=main";
 };

Testing

To ensure that the config is right and that unattended-upgrades will pick up rackspace-monitoring-agent the next time it runs, I used:

unattended-upgrade --dry-run --debug

which should output something like this:

Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ['origin=Debian,archive=stable,label=Debian-Security', 'origin=Debian,archive=oldstable,label=Debian-Security', 'origin=Rackspace,component=main']
Checking: rackspace-monitoring-agent (["<Origin component:'main' archive:'' origin:'Rackspace' label:'' site:'stable.packages.cloudmonitoring.rackspace.com' isTrusted:True>"])
pkgs that look like they should be upgraded: rackspace-monitoring-agent
...
Option --dry-run given, *not* performing real actions
Packages that are upgraded: rackspace-monitoring-agent

Making sure that automatic updates are happening

In order to make sure that all of this is working and that updates are actually happening, I always install apticron on all of the servers I maintain. It runs once a day and emails me a list of packages that need to be updated and it keeps doing that until the system is fully up-to-date.

The only thing missing from this is getting a reminder whenever a package update (usually the kernel) requires a reboot to take effect. That's where the update-notifier-common package comes in.

Because that package adds a hook that creates the /var/run/reboot-required file whenever a kernel update has been installed, all you need to do is create a cronjob like this in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

assuming of course that you are already receiving emails sent to the root user (if not, add the appropriate alias in /etc/aliases and run newaliases).

26 January, 2015 08:25AM

NOKUBI Takatsugu

Weak ssh public keys in github

A presentation slide named ”Attacking against 5 millions SSH public keys – 偶然にも500万個のSSH公開鍵を手に入れた俺たちは” (roughly, “we who happened to obtain 5 million SSH public keys”) has been published; it is from a lightning talk at the “Edomae security seminar” on Jan 24, 2015.

He grabbed SSH public keys via the GitHub API (https://github.com/${user}.key); the API is deprecated, but not closed.

He found short (<= 512-bit) DSA/RSA keys and was able to factor a 256-bit RSA key in 3 seconds.

He also reported that there are 208 weak SSH keys generated by Debian/Ubuntu (CVE-2008-0166). This had already been announced by GitHub.

On the other hand, the remaining SSH keys could not be factored with fastgcd. This means almost all SSH keys on GitHub carry no bias from flawed random number generator implementations, which is good news.

26 January, 2015 05:15AM by knok

January 25, 2015

Richard Hartmann

KDE battery monitor

Dear lazyweb,

using a ThinkPad X1 Carbon with Debian unstable and KDE 4.14.2, I have not had battery warnings for a few weeks now.

The battery status can be read out via acpi -V as well as via the KDE widget. Hibernation via systemctl hibernate works as well.

What does not work is the warning when my battery is low, or automagic hibernation when I shut the lid or when the battery level is critical.

From what I gather, something in the communication between upower and KDE broke down, but I can't find what it is. I have also been told that Cinnamon is affected as well, so this seems to be a more general problem.

Sadly, neither I nor anyone else who's affected has been able to fix this.

So, dear lazyweb, please help.

In loosely related news, this old status is still valid. UMTS is stable-ish now, but even though I saved the SIM's PIN, KDE always displays a "SIM PIN unlock request" prompt after booting or hibernating. Once I enter that PIN, systemd tells me that a system policy prevents the change and wants my user password. If anyone knows how to get rid of that, I would also appreciate any pointers.

25 January, 2015 09:11PM by Richard 'RichiH' Hartmann


Chris Lamb

Recent Redis hacking

I've done a bunch of hacking on the Redis key/value database server recently:

  • Lua-based maxmemory eviction scripts. (#2319)

    (This changeset was sponsored by an anonymous client.)

    Redis typically stores the entire data set in memory, using the operating system's virtual memory facilities if required. However, one can use Redis more like a cache or ring buffer by enabling a "maxmemory policy" where a RAM limit is set and then data is evicted when required based on a predefined algorithm.

    This change enables entirely custom control over exactly what data to remove from RAM when this maxmemory limit is reached. This is an advantage over the existing policies of, say, removing entire keys based on the existing TTL, Least Recently Used (LRU) or random eviction strategies as it permits bespoke behaviours based on application-specific requirements, crucially without maintaining a private fork of Redis.

    As an example behaviour of what is possible with this change, to remove the lowest ranked member of an arbitrary sorted set, you could load the following eviction policy script:

    local bestkey = nil
    local bestval = 0
    
    for s = 1, 5 do
       local key = redis.call("RANDOMKEY")
       local type_ = redis.call("TYPE", key)
    
       if type_.ok == "zset"
       then
           local tail = redis.call("ZRANGE", key, "0", "0", "WITHSCORES")
           local val = tonumber(tail[2])
           if not bestkey or val < bestval
           then
               bestkey = key
               bestval = val
           end
       end
    end
    
    if not bestkey
    then
        -- We couldn't find anything to remove, so return an error
        return false
    end
    
    redis.call("ZREMRANGEBYRANK", bestkey, "0", "0")
    return true
    
  • TCP_FASTOPEN support. (#2307)

    The aim of TCP_FASTOPEN is to eliminate one roundtrip from a TCP conversation by allowing data to be included as part of the SYN segment that initiates the connection. (More info.)
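    As a rough illustration of what enabling this involves on the server side on Linux (a sketch, not the actual Redis changeset; listen_fd is assumed to be an already-bound listening socket):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable TCP Fast Open on the listening socket; qlen bounds the
     * queue of pending TFO connection requests. */
    int qlen = 5;
    setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));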

  • Support infinitely repeating commands in redis-cli. (#2297)

  • Add --failfast option to testsuite runner. (#2290)

  • Add a -q (quiet) argument to redis-cli. (#2305)

  • Making some Redis Sentinel defaults a little saner. (#2292)


I also made the following changes to the Debian packaging:

  • Add run-parts(8) directories to be executed at various points in the daemon's lifecycle. (e427f8)

    This is especially useful for loading Lua scripts as they are not persisted across restarts.

  • Split out Redis Sentinel into its own package. (#775414, 39f642)

    This makes it possible to run Sentinel sanely on Debian systems without bespoke scripts, etc.

  • Ensure /etc/init.d/redis-server start idempotency with --oknodo (60b7dd)

Idempotency in initscripts is especially important given the rise of configuration management systems.

  • Uploaded 3.0.0 RC2 to Debian experimental. (37ac55)

  • Re-enabled the testsuite. (7b9ed1)

25 January, 2015 08:52PM


Dirk Eddelbuettel

RcppArmadillo 0.4.600.4.0

Conrad put up a maintenance release 4.600.4 of Armadillo a few days ago. As in the past, we tested this with a number of pre-releases and test builds against the now over one hundred CRAN dependents of our RcppArmadillo package. The tests passed fine as usual, and results are as always in the rcpp-logs repository.

Changes are summarized below based on the NEWS.Rd file.

Changes in RcppArmadillo version 0.4.600.4.0 (2015-01-23)

  • Upgraded to Armadillo release Version 4.600.4 (still "Off The Reservation")

    • Speedups in the transpose operation

    • Small bug fixes

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 January, 2015 08:19PM


Jonathan Dowland

Frontier: First Encounters

Cobra mk. 3

Four years ago, whilst looking for something unrelated, I stumbled across Tom Morton's port of the Atari version of "Frontier: Elite II" to i386/OpenGL. This took me right back to playing Frontier on my Amiga in the mid-nineties. I spent a bit of time replaying Frontier and its sequel, First Encounters, for which there exists an interesting family of community-written game engines based on a reverse-engineering of the original DOS release.

I made some scrappy notes about engines, patches etc. at the time, which are on my frontier page.

With the recent release of Elite: Dangerous, I thought I'd pick up where I left off in 2010 and see if I could get the Thargoid ship. I'm nowhere near yet, but I've spent some time trying to maximize income during the game's initial Soholian Fever period. My record in a JJFFE-derived engine (and winning the Wiccan Ware race during the same period) is currently £727,800. Can you do better?

25 January, 2015 01:18PM


Joey Hess

making propellor safer with GADTs and type families

Since July, I have been aware of an ugly problem with propellor. Certain propellor configurations could have a bug. I've tried to solve the problem at least a half-dozen times without success; it's eaten several weekends.

Today I finally managed to fix propellor so it's impossible to write code that has the bug, bending the Haskell type checker to my will with the power of GADTs and type-level functions.

the bug

Code with the bug looked innocuous enough. Something like this:

foo :: Property
foo = property "foo" $
    unlessM (liftIO $ doesFileExist "/etc/foo") $ do
        bar <- liftIO $ readFile "/etc/foo.template"
        ensureProperty $ setupFoo bar

The problem comes about because some properties in propellor have Info associated with them. This is used by propellor to introspect over the properties of a host, and do things like set up DNS, or decrypt private data used by the property.

At the same time, it's useful to let a Property internally decide to run some other Property. In the example above, that's the ensureProperty line, and the setupFoo Property is run only sometimes, and is passed data that is read from the filesystem.

This makes it very hard, indeed probably impossible for Propellor to look inside the monad, realize that setupFoo is being used, and add its Info to the host.

Probably, setupFoo doesn't have Info associated with it -- most properties do not. But it's hard to tell, when writing such a Property, whether it's safe to use ensureProperty. And worse, setupFoo could later be changed to have Info.

Now, in most languages, once this problem was noticed, the solution would probably be to make ensureProperty notice when it's called on a Property that has Info, and print a warning message. That's Good Enough in a sense.

But it also really stinks as a solution. It means that building propellor isn't good enough to know you have a working system; you have to let it run on each host, and watch out for warnings. Ugh, no!

the solution

This screams for GADTs. (Well, it did once I learned what GADTs are and what they can do.)

With GADTs, Property NoInfo and Property HasInfo can be separate data types. Most functions will work on either type (Property i) but ensureProperty can be limited to only accept a Property NoInfo.

data Property i where
    IProperty :: Desc -> ... -> Info -> Property HasInfo
    SProperty :: Desc -> ... -> Property NoInfo

data HasInfo
data NoInfo

ensureProperty :: Property NoInfo -> Propellor Result

Then the type checker can detect the bug, and refuse to compile it.

Yay!

Except ...

Property combinators

There are a lot of Property combinators in propellor. These combine two or more properties in various ways. The most basic one is requires, which only runs the first Property after the second one has successfully been met.

So, what's its type when used with the GADT Property?

requires :: Property i1 -> Property i2 -> Property ???

It seemed I needed some kind of type class, to vary the return type.

class Combine x y r where
    requires :: x -> y -> r

Now I was able to write 4 instances of Combine, for each combination of 2 Properties with HasInfo or NoInfo.

It type checked. But, type inference was busted. A simple expression like

foo `requires` bar

blew up:

   No instance for (Requires (Property HasInfo) (Property HasInfo) r0)
      arising from a use of `requires'
    The type variable `r0' is ambiguous
    Possible fix: add a type signature that fixes these type variable(s)
    Note: there is a potential instance available:
      instance Requires
                 (Property HasInfo) (Property HasInfo) (Property HasInfo)
        -- Defined at Propellor/Types.hs:167:10

To avoid that, ":: Property HasInfo" needed to be appended -- and I didn't want the user to have to write that.

I got stuck here for a long time, well over a month.

type level programming

Finally today I realized that I could fix this with a little type-level programming.

class Combine x y where
    requires :: x -> y -> CombinedType x y

Here CombinedType is a type-level function that calculates the type that should be used for a combination of types x and y. This turns out to be really easy to do, once you get your head around type-level functions.

type family CInfo x y
type instance CInfo HasInfo HasInfo = HasInfo
type instance CInfo HasInfo NoInfo = HasInfo
type instance CInfo NoInfo HasInfo = HasInfo
type instance CInfo NoInfo NoInfo = NoInfo
type family CombinedType x y
type instance CombinedType (Property x) (Property y) = Property (CInfo x y)

And, with that change, type inference worked again! \o/

(Bonus: I added some more instances of CombinedType for combining things like RevertableProperties, so propellor's property combinators got more powerful too.)

Then I just had to make a massive pass over all of Propellor, fixing the types of each Property to be Property NoInfo or Property HasInfo. I frequently picked the wrong one, but the type checker was able to detect and tell me when I did.

A few of the type signatures got slightly complicated, to provide the type checker with sufficient proof to do its thing...

before :: (IsProp x, Combines y x, IsProp (CombinedType y x)) => x -> y -> CombinedType y x
before x y = (y `requires` x) `describe` (propertyDesc x)

onChange
    :: (Combines (Property x) (Property y))
    => Property x
    -> Property y
    -> CombinedType (Property x) (Property y)
onChange = -- 6 lines of code omitted

fallback :: (Combines (Property p1) (Property p2)) => Property p1 -> Property p2 -> Property (CInfo p1 p2)
fallback = -- 4 lines of code omitted

.. This mostly happened in property combinators, which is an acceptable tradeoff, when you consider that the type checker is now being used to prove that propellor can't have this bug.

Mostly, things went just fine. The only other annoying thing was that some things use a [Property], and since a Haskell list can only contain a single type, while Property HasInfo and Property NoInfo are two different types, that needed to be dealt with. Happily, I was able to extend propellor's existing (&) and (!) operators to work in this situation, so a list can be constructed of properties of several different types:

propertyList "foos" $ props
    & foo
    & foobar
    ! oldfoo    

conclusion

The resulting 4000 lines of changes will be in the next release of propellor. Just as soon as I test that it always generates the same Info as before, and perhaps works when I run it. (eep)

These uses of GADTs and type families are not new; this is merely the first time I used them. It's another Haskell leveling up for me.

Any time you can identify a class of bugs that can impact a complicated code base, and rework the code base to completely avoid that class of bugs, is a time to celebrate!

25 January, 2015 03:54AM

January 24, 2015


Daniel Pocock

Get your Github issues as an iCalendar feed

I've just whipped up a Python script that renders Github issue lists from your favourite projects as an iCalendar feed.

The project is called github-icalendar. It uses Python Flask to expose the iCalendar feed over HTTP.

It is really easy to get up and running. All the dependencies are available on a modern Linux distribution, for example:

$ sudo apt-get install python-yaml python-icalendar python-flask python-pygithub

Just create an API token in Github and put it into a configuration file with a list of your repositories like this:

api_token: 6b36b3d7579d06c9f8e88bc6fb33864e4765e5fac4a3c2fd1bc33aad
bind_address: ::0
bind_port: 5000
repositories:
- repository: your-user-name/your-project
- repository: your-user-name/another-project

Run it from the shell:

$ ./github_icalendar/main.py github-ics.cfg

and connect to it with your favourite iCalendar client.

Consolidating issue lists from Bugzilla, Github, Debian BTS and other sources

A single iCalendar client can usually support multiple sources and thereby consolidate lists of issues from multiple bug trackers.

This can be much more powerful than combining RSS bug feeds because iCalendar has built-in support for concepts such as priority and deadline. The client can use these to help you identify the most critical issues across all your projects, no matter which bug tracker they use.
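For instance, a bug exported as an iCalendar to-do item is just a VTODO block, where PRIORITY and DUE are standard RFC 5545 properties that clients already understand (all values below are invented for illustration):

BEGIN:VTODO
UID:github-lumicall-123@example.org
DTSTAMP:20150124T000000Z
SUMMARY:Fix call drop on network change (#123)
PRIORITY:1
DUE:20150215T000000Z
STATUS:NEEDS-ACTION
END:VTODO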

Bugzilla bugtrackers already expose iCalendar feeds directly, just look for the iCalendar link at the bottom of any search results page. Here is an example URL from the Mozilla instance of Bugzilla.

The Ultimate Debian Database consolidates information from the Debian and Ubuntu universe and can already export it as an RSS feed, there is discussion about extrapolating that to an iCalendar feed too.

Further possibilities

  • Prioritizing the issues in Github and mapping these priorities to iCalendar priorities
  • Creating tags in Github that allow issues to be ignored/excluded from the feed (e.g. excluding wishlist items)
  • Creating summary entries instead of listing all the issues, e.g. a single task entry with the title Fix 2 critical bugs for project foo

Screenshots

The screenshots below are based on the issue list of the Lumicall secure SIP phone for Android.

Screenshot - Mozilla Thunderbird/Lightning (Icedove/Iceowl-extension on Debian)

24 January, 2015 11:07PM by Daniel.Pocock


Dirk Eddelbuettel

Rcpp 0.11.4

A new release 0.11.4 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package will be uploaded in due course.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 323 packages on CRAN depend on Rcpp for making analyses go faster and further; BioConductor adds another 41 packages, and casual searches on GitHub suggest dozens more.

This release once again adds a large number of small bug fixes, polishes and enhancements. And like last time, these changes were made by a group of seven different contributors (counting code commits) plus three more providing concrete suggestions. This shows that Rcpp development and maintenance rests on a large number of (broad) shoulders.

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.11.4 (2015-01-20)

  • Changes in Rcpp API:

    • The ListOf<T> class gains the .attr and .names methods common to other Rcpp vectors.

    • The [dpq]nbinom_mu() scalar functions are now available via the R:: namespace when R 3.1.2 or newer is used.

    • Add an additional test for AIX before attempting to include execinfo.h.

    • Rcpp::stop now supports improved printf-like syntax using the small tinyformat header-only library (following a similar implementation in Rcpp11)

    • Pairlist objects are now protected via an additional Shield<> as suggested by Martin Morgan on the rcpp-devel list.

    • Sorting is now prohibited at compile time for objects of type List, RawVector and ExpressionVector.

    • Vectors now have a Vector::const_iterator that is 'const correct' thanks to fix by Romain following a bug report in rcpp-devel by Martyn Plummer.

    • The mean() sugar function now uses a more robust two-pass method, and new unit tests for mean() were added at the same time.

    • The mean() and var() functions now support all core vector types.

    • The setequal() sugar function has been corrected via suggestion by Qiang Kou following a bug report by Søren Højsgaard.

    • The macros major, minor, and makedev no longer leak in from the (Linux) system header sys/sysmacros.h.

    • The push_front() string function was corrected.

  • Changes in Rcpp Attributes:

    • Only look for plugins in the package's namespace (rather than entire search path).

    • Also scan header files for definitions of functions to be considered by Attributes.

    • Correct the regular expression for source files which are scanned.

  • Changes in Rcpp unit tests

    • Added a new binary test which will load a pre-built package to ensure that the Application Binary Interface (ABI) did not change; this test will (mostly or) only run at Travis where we have reasonable control over the platform running the test and can provide a binary.

    • New unit tests for sugar functions mean, setequal and var were added as noted above.

  • Changes in Rcpp Examples:

    • For the (old) examples ConvolveBenchmarks and OpenMP, the respective Makefile was renamed to GNUmakefile to please R CMD check as well as the CRAN Maintainers.

Thanks to CRANberries, you can also look at a diff to the previous release As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 03:44PM

RcppGSL 0.2.4

A new version of RcppGSL is now on CRAN. This package provides an interface from R to the GNU GSL using our Rcpp package.

This follows on the heels of the recent RcppGSL 0.2.3 release and extends the excellent point made by Qiang Kou in a contributed section of the vignette: we now not only allow turning the GSL error handler off (so that it does not abort() on error) but do so on package initialisation.

No other user-facing changes were made.

The NEWS file entries follows below:

Changes in version 0.2.4 (2015-01-24)

  • Two new helper functions to turn the default GSL error handler off (and to restore it) were added. The default handler is now turned off when the package is attached, so that GSL will no longer abort an R session on error. Users will have to check the error code.

  • The RcppGSL-intro.Rnw vignette was expanded with a short section on the GSL error handler (thanks to Qiang Kou).

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 03:20PM

RcppAnnoy 0.0.5

A new version of RcppAnnoy is now on CRAN. RcppAnnoy wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify. RcppAnnoy uses Rcpp Modules to offer the exact same functionality as the Python module wrapped around Annoy.

This version contains a trivial one-character change requested by CRAN to cleanse the Makevars file of possible GNU Make-isms. Oh well. This release also overcomes an undefined-behaviour sanitizer bug noticed by CRAN that took somewhat more effort to deal with. As mentioned recently in another blog post, it took some work to create a proper Docker container with the required compiler and subsequent R setup, but we have one now, and the aforementioned blog post has details on how we replicated the CRAN finding of the UBSAN issue. It also took Erik some extra effort to set something up for his C++/Python side, but eventually an EC2 instance with Ubuntu 14.10 did the task, as my Docker sales skills are seemingly not convincing enough. In any event, he very quickly added the right fix, and I synced RcppAnnoy with his Annoy code.

Courtesy of CRANberries, there is also a diffstat report for this release. More detailed information is on the RcppAnnoy page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 02:22PM


January 23, 2015


Chris Lamb

Slack integration for Django

I recently started using the Slack group chat tool in a few teams. Wishing to add some vanity notifications such as sales and user growth milestones from some Django-based projects, I put together an easy-to-use integration between the two called django-slack.

Whilst you can use any generic Python-based method of sending messages to Slack, using a Django-specific integration has some advantages:

  • It can use the Django templating system, rather than constructing messages "by hand" in views.py and models.py, which violates abstraction layers and often requires unwieldy and ugly string-manipulation routines that would be trivial inside a regular template.
  • It can be easily enabled and disabled in certain environments, preventing DRY violations by centralising the logic that avoids sending messages in development, staging environments, etc.
  • It can use other Django idioms such as a pluggable backend system for greater control over exactly how messages are transmitted to the Slack API (eg. sent asynchronously using your queuing system, avoiding slowing down clients).

Here is an example of how to send a message from a Django view:

from django_slack import slack_message

@login_required
def view(request, item_id):
    item = get_object_or_404(Item, pk=item_id)

    slack_message('items/viewed.slack', {
        'item': item,
        'user': request.user,
    })

    return render(request, 'items/view.html', {
        'item': item,
    })

Where items/viewed.slack (in your templates directory) might contain:

{% extends django_slack %}

{% block text %}
{{ user.get_full_name }} just viewed {{ item.title }} ({{ item.content|urlize }}).
{% endblock %}

.slack files are regular Django templates — text is automatically escaped as appropriate, and you can use the regular template filters and tags such as urlize, loops, etc.

By default, django-slack posts to the #general channel, but it can be overridden on a per-message basis by specifying a channel block:

{% block channel %}
#mychannel
{% endblock %}

You can also set the icon, URL and emoji in a similar fashion. You can set global defaults for all of these attributes to avoid DRY violations within .slack templates as well.

For more information please see the project homepage or read the documentation. Patches and other contributions are welcome via the django-slack GitHub project.

23 January, 2015 10:46PM

Richard Hartmann

Release Critical Bug report for Week 04

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1117 (Including 191 bugs affecting key packages)
    • Affecting Jessie: 187 (key packages: 116) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 132 (key packages: 89) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 24 bugs are tagged 'patch'. (key packages: 15) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 4 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 104 bugs are neither tagged patch, nor marked done. (key packages: 71) Help make a first step towards resolution!
      • Affecting Jessie only: 55 (key packages: 27) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 25 bugs are in packages that are unblocked by the release team. (key packages: 8)
        • 30 bugs are in packages that are not unblocked. (key packages: 19)

How do we compare to the Squeeze and Wheezy release cycles?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99) 147 (112+35)
1 93 (60+33) 287 (171+116) 140 (104+36)
2 82 (46+36) 271 (162+109) 157 (124+33)
3 25 (15+10) 249 (165+84) 172 (128+44)
4 14 (8+6) 244 (176+68) 187 (132+55)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

23 January, 2015 05:59PM by Richard 'RichiH' Hartmann

Enrico Zini

mozilla-facepalm

Mozilla marketplace facepalm

This made me sad.

My view, which didn't seem to be considered in that discussion, is that people concerned about software freedom and security are likely to stay the hell away from such an app market and its feedback forms.

Also, that thread made me so sad about the state of that developer community that I seriously do not feel like investing energy into going through the hoops of getting an account in their bugtracker to point this out.

Sigh.

23 January, 2015 02:13PM

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Mini-Debconf Mumbai 2015

Last weekend I went to Mumbai to attend the Mini-Debconf held at IIT-Bombay. These are my impressions of the trip.

Arrival and Impressions of Mumbai

Getting there was quite an adventure in itself. Unlike during my ill-fated attempt to visit a Debian event in Kerala last year, when a bureaucratic snafu left me unable to get a visa, the organizers started the process much earlier at their end this time and with proper permissions. Yet in India the wheels only turn as fast as they want to turn, so despite their efforts it was literally only at the last minute that I actually managed to secure my visa. I should note however that the Indian government has done a lot to improve the process compared to the hell I remember from, say, a decade ago. It's fairly straightforward for tourist visas now and I trust they will get around to doing the same for conference visas in the fullness of time. I didn't want to commit to buying a plane ticket until I had the visa, so I became concerned that the only flights left would be either really expensive or on the type of airline that flies you over Syria or under the Indian Ocean. I lucked out and got a good price on a Swiss Air flight, not non-stop but you can't have everything.

So Thursday afternoon I set off for JFK. With only one small suitcase getting there by subway was no problem and I arrived and checked in with plenty of time. Even TSA passed me through with only a minimal amount of indignity. The first leg of my journey took me to Zurich in about eight hours. We were only in Zurich for an hour and then (by now Friday) it was another 9 hours to Mumbai. Friday was Safala Ekadashi but owing to the necessity of staying hydrated on a long flight I drank a lot of water and ate some fruit which I don't normally do on a fasting day. It was tolerable but not too pleasant; I definitely want to try and make travel plans to avoid such situations in the future.

Friday evening local time I got to Mumbai. Chhatrapati Shivaji airport has improved a lot since I saw it last and now has all the amenities an international traveller needs, including unrestricted free wifi (Zurich airport, are you taking notes?) But here my first ominous piece of bad luck began. No sign of my suitcase. Happily some asking around revealed that it had somehow gotten on an earlier Swiss Air flight instead of the one I was on and was actually waiting for me. I got outside and Debian Developer Praveen Arimbrathodiyil was waiting to pick me up.

Normally I don't like staying in Mumbai very much even though I have relatives there, but that's because we usually went during July-August—the monsoon season—when Mumbai reverts back to the swampy archipelago it was originally built on. This time the weather was nice, cold by local standards, but lovely and spring-like to someone from snowy New Jersey. There have been a lot of improvements to the road infrastructure and people are actually obeying the traffic laws. (Within reason of course. Whether or not a family of six can arrange themselves on one Bajaj scooter is no business of the cops.)

The Hotel Tuliip (yes, two i's; the manager didn't know why) Residency, where I was to stay, was not quite a five star establishment but adequate for my needs, with a bed, hot water shower, and air conditioning. And a TV which, to the bellhop's great confusion, I did not want turned on. (He asked about five times.) There was no Internet access per se but the manager offered to hook up a wireless router to a cable. Which on closer inspection turned out to have been severed at the base. He assured me it would be fixed tomorrow, so I didn't complain and decided to do something more productive than checking my email, like sleeping.

The next day I woke up in total darkness. Apparently there had been some kind of power problem during the night which tripped a fuse or something. A call to the front desk got them to fix that, and then the second piece of bad luck happened. I plugged my Thinkpad in and woke it up from hibernation, and a minute later there was a loud pop from the power adapter. Note that I have a travel international plug adapter with surge protector, so nothing bad ought to have happened, but on being turned on the laptop would display the message "critical low battery error" and immediately power off. I was unable to google what that meant without Internet access but I decided not to panic and continue getting ready. I would have plenty of opportunity to troubleshoot at the conference venue. Or so I thought...

I took an autorickshaw to IIT. There too there have been positive improvements. Being quite obviously a foreigner I was fully prepared to be taken along the "scenic route." But now there are fare zones and the rickshaws all have (tamperproof!) digital fare meters, so I was deposited at the main gate without fuss. After reading a board with a scary list of dos and don'ts I presented myself at security, only to be inexplicably waved through without a second glance. Later I found out they've abandoned all the security theatre but not got around to updating the signs yet. Mumbai is one of the biggest, most densely populated cities in the world, but the IIT campus is an oasis of tranquility on the shores of Lake Powai. It's a lot bigger than it looked on the map so I had to wander around a bit before I reached the conference venue, but I did make it in time for the official registration.

Registration

I was happy to meet several old friends (such as Kartik Mistry and Kumar Appiah, who along with Praveen and myself were the other DDs there), people who I've corresponded with but never met, and many new people. I'm told 200+ people registered altogether. Most seemed to be students from IIT and elsewhere in Mumbai but there were also some Debian enthusiasts from further afield and most hearteningly some "civilians" who wanted to know what this was all about.

With the help of a borrowed Thinkpad adapter I got my laptop running again. (Thankfully, despite the error message, the battery itself was unharmed.) However, my streak of bad luck was not yet over. It was that very weekend that IIT had a freak campus-wide network outage, something that had never happened before. And as the presentation for the talk I was to give had apparently been open when I hibernated my laptop the night before, the sudden forced shutdown had trashed the file. (ls showed it as 0 length. An fsck didn't help.) I possibly had a backup on my server, but with no Internet access I had no way to retrieve it. I still remained cool. The talk was scheduled for the second day so I could recover it at the hotel.

Keynotes

Professor Kannan Maudgalya of the FOSSEE (Free and Open Source Software for Education) Project which is part of the central government Ministry for Human Resource Development spoke about various activities of his project. Of particular interest to us are:

  • A scheme to get labs and college engineering/computer science departments off proprietary software by helping them identify relevant free software (writing it if necessary.) and helping them transition to it. Similarly getting curricula away from textbooks that use proprietary software by rewriting exercises to use free equivalents.
  • A series of videos for self-instruction kind of like Khan Academy but geared to the challenges of being used in places where there might not be a net connection or even a trained teacher.
  • The Vidyut tablet. A very low cost (~5000 Rupees) ARM-based netbook that runs Linux or Android software. You may have heard about earlier plans for a cheap tablet like this. Vidyut is the next generation correcting some flaws in previous attempts. Not only the software but the hardware is free too. It is currently running a stripped down version of Ubuntu but there was a request to port it to Debian and I'm happy to report several Debian users have accepted the challenge.
FOSSEE is well funded, backed by the government and has enthusiastic staff so we should be seeing a lot more from them in the future.

Veteran Free Software activist Venky Hariharan spoke about his experiences in lobbying the government on tech issues. He noted that there has been a sea change in attitudes towards Linux and Open Source in the bureaucracy of late. Several states have been aggressively mandating the use of it, as have several national ministries and agencies. We the community can provide a valuable service by helping them in the transition. They also need to be educated on how to work with the community (contributing changes back, not working behind closed doors, etc.)

Debian History and Debian Cycle

Shirish Agarwal spoke about the Debian philosophy and foundational documents such as the social contract and DFSG and how the release cycle works. Nothing new to an experienced user but informative to the newcomers in the audience and sparked some questions and discussion.

Keysigning

One of my main missions in attending was to help get as many isolated people as possible into the web of trust. Unfortunately the keysigning was not adequately publicized and few people were ready. I would have led them through the process of creating a new key there and then but with the lack of connectivity that idea had to be abandoned. I did manage to sign about 8-10 keys during other times.

Future Directions for Debian-IN BOF

I led this one. Lots of spirited discussion and I found feedback from new users in particular to be very helpful. Some takeaways are:

  • Some people said it is hard to find concise, easily digestible information about what Debian can do. (I.e. Can I surf the web? Can I play a certain game? etc.) Debian-IN's web presence in particular needs a lot of improvement. We should also consider other channels such as a facebook page. A volunteer stepped up to look into these issues.
  • Along these lines it was felt that we cannot just wait for people to come to us, we should do more outreach. I pointed out that one group that we need to reach out more to is the Debian Project at large. We need to do more publicity in debian-project, DWN, Planet etc. to let everyone know what's going on in India. I also felt that we have a strong base amongst CS/engineering students but should do more to attract other demographics.
  • Debian events have suffered from organizational problems. Partly this is because the people involved are not professional event planners. They are learning how to do it which is an ongoing process and execution is improving with each iteration so no worries there but problems also arise because Debian-IN is dependent on other entities for many things and those entities do not always have, shall we say, the same sense of urgency. Therefore we need legal standing of our own for accepting donations, inviting foreign guests etc. This doesn't necessarily have to be a separate organization. Affiliating with an existing group is an option providing they share our ideology. Swathanthra Malayalam Computing was one suggestion.
  • There is still not much Debian presence in the North and East of India. (Which includes large cities like Delhi and Kolkata.) Unfortunately until we can find volunteers in those areas to take the lead on organizing something there is not a lot we can do to rectify the situation.
  • We must have Debian-IN t-shirts.

Lil' Debi

Kumar Sukhani was a Debian GSoC student, and the project he demonstrated lets you install Debian on an Android phone. Why would you want to do this? Apart from the evergreen "Because I can", you can run server software such as sshd on your phone or even use it as an ARM development board. Unfortunately my phone uses Blackberry 10 OS, which can run Android apps (emulated under QNX) but wouldn't be able to use this. When I get a real Android phone I will try it out.

Debian on ARM

Siji Sunny gave this talk, which was geared more towards hardware types, which I am not, but one thing I learned was the difference between all the different ARM subarchitectures. I knew Siji first from a previous incarnation when he worked at CDAC with the late and much lamented Prof. R.K. Joshi. We had a long conversation about those days. Prof. Joshi/CDAC had developed an Indic rendering system called Indix which alas became the Betamax to Pango's VHS, but he was also very involved in other Indic computing issues such as working with the Unicode Consortium and the preservation of Sanskrit manuscripts, which is also an interest of mine. One good thing that came out of Indix was some rather nice fonts. I had thought they were still buried in the dungeons of CDAC but apparently they were freed at one point. That's one more thing for me to look into.

Evening/Next morning

My cousin met me and we had a leisurely dinner together. It was quite late by the time I got back to the hotel. FOSSEE had kindly lent me one of their tablets (which incidentally are powerful enough to run LibreOffice comfortably), so I thought I might be able to quickly redo my presentation before bedtime. Well, wouldn't you know it, the wifi was not fixed. I should have guessed, but all the progress I'd seen had made me giddily optimistic. There was an option of trying to find an Internet cafe in a commercial area 15-20 minutes walk away. If this had been Gujarat I would have tried it, but although I can more or less understand Hindi I can barely put together two sentences, and Marathi I don't know at all. So I gave up that idea. I redid the slides from memory as best I could and went to sleep.

In the morning I checked out and ferried myself and my suitcase via rickshaw back to the IIT campus. This time I got the driver to take me all the way in to the conference venue. Prof. Maudgalya kindly offered to let me keep the tablet to develop stuff on. I respectfully had to decline because although I love to collect bits of tech, the fact is it would have just gathered dust, and it ought to go to someone who can make a real contribution with it. I transferred my files to a USB key and borrowed a loaner laptop for my talk.

Debian Packaging Workshop

While waiting to do my talk I sat in on a workshop Praveen ran taking participants through the whole process of creating a Debian package (a ruby gem was the example.) He's done this before so it was a good presentation and well attended but the lack of connectivity did put a damper on things.

Ask Me Anything

It turned out the schedule had to be shuffled a bit, so my talk was moved later than the announced time. A few people had already shown up, so I took some random questions about Debian from them instead.

GNOME Shell Accessibility With Orca

Krishnakant Mane is remarkable. Although he is blind, he is a developer and a major contributor to Open Source projects. He talked about the Accessibility features of GNOME and compared them (favorably I might add) with proprietary screen readers. Not a subject that's directly useful to me but I found it interesting nonetheless.

Rust: The memory safe language

Manish Goregaokar talked about one of the new fad programming languages that have gotten a lot of buzz lately. This one is backed by Mozilla and it's interesting enough but I'll stick with C++ and Perl until one of the new ones "wins."

Building a Mail Server With Debian

Finally I got to give my talk and, yup, the video out on my borrowed laptop was incompatible with the projector. A slight delay to transfer everything to another laptop and I was able to begin. I talked about setting up BIND, postfix, and of course dovecot, along with spamassassin, clamav etc. It turned out I had more than enough material; I went at least 30 minutes over time and even then I had to rush at the end. People said they liked it so I'm happy.

The End

I gave the concluding remarks. Various people were thanked (including myself), mementos were given and pictures were taken. Despite a few mishaps I enjoyed myself and I am glad I attended. The level of enthusiasm was very high and lessons were learned, so the next Debian-IN event should be even better.

My departing flight wasn't due to leave until 1:20AM so I killed a few hours with my family before the flight. Once again I was stopping in Zurich, this time for most of a day. The last of my blunders was not taking my coat out of my suitcase; the temperature outside was 29F, so I had to spend that whole time enjoying the (not so) many charms of Zurich airport. At least the second flight took me to Newark instead of JFK, so I was able to get home a little earlier on Monday evening, exhausted but happy I made the trip.

23 January, 2015 06:47AM

hackergotchi for Michael Prokop

Michael Prokop

check-mk: monitor switches for GBit links

For one of our customers we are using the Open Monitoring Distribution which includes Check_MK as its monitoring system. We’re monitoring the switches (Cisco) via SNMP. The switches as well as all the servers support GBit connections, though there are some systems in the wild which are still operating at 100MBit (or, even worse, 10MBit). Recently there have been some performance issues related to network access. To make sure it’s not the fault of a server or a service, we decided to monitor the switch ports for their network speed. By default we assume all ports to be running at GBit speed. This can be configured either manually via:

cat etc/check_mk/conf.d/wato/rules.mk
[...]
checkgroup_parameters.setdefault('if', [])

checkgroup_parameters['if'] = [
  ( {'speed': 1000000000}, [], ['switch1', 'switch2', 'switch3', 'switch4'], ALL_SERVICES, {'comment': u'GBit links should be used as default on all switches'} ),
] + checkgroup_parameters['if']

or by visiting Check_MK’s admin web-interface at ‘WATO Configuration’ -> ‘Host & Service Parameters’ -> ‘Parameters for Inventorized Checks’ -> ‘Networking’ -> ‘Network interfaces and switch ports’ and creating a rule for the ‘Explicit hosts’ switch1, switch2, etc. and setting ‘Operating speed’ to ‘1 GBit/s’ there.

So far so straightforward, and this works fine. Thanks to this setup we could identify several systems which used 100MBit and 10MBit links. Definitely something to investigate on the corresponding systems and their auto-negotiation configuration. But to avoid flooding the monitoring system and its notifications, we want to explicitly ignore those systems in the monitoring setup until those issues have been resolved.

First step: identify the checks and their format by either invoking `cmk -D switch2` or looking at var/check_mk/autochecks/switch2.mk:

OMD[synpros]:~$ cat var/check_mk/autochecks/switch2.mk
[
  ("switch2", "cisco_cpu", None, cisco_cpu_default_levels),
  ("switch2", "cisco_fan", 'Switch#1, Fan#1', None),
  ("switch2", "cisco_mem", 'Driver text', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'I/O', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'Processor', cisco_mem_default_levels),
  ("switch2", "cisco_temp_perf", 'SW#1, Sensor#1, GREEN', None),
  ("switch2", "if64", '10101', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10102', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10103', {'state': ['1'], 'speed': 1000000000}),
  [...]
  ("switch2", "snmp_info", None, None),
  ("switch2", "snmp_uptime", None, {}),
]
OMD[synpros]:~$

Second step: translate this into the according format for usage in etc/check_mk/main.mk:

checks = [
  ( 'switch2', 'if64', '10105', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af,  10MBit
  ( 'switch2', 'if64', '10107', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:23:de:ad:be:af, 100MBit
  ( 'switch2', 'if64', '10139', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af, 100MBit
  [...]
]

Using this configuration we ignore the operating speed on ports 10105, 10107 and 10139 of switch2 in the if64 check. We kept the state setting untouched where sensible ('1' means that the expected operational status of the interface is 'up'). The errors setting specifies the error rates in percent for the warning (0.01%) and critical (0.1%) levels. For further details refer to the online documentation or invoke 'cmk -M if64'.

Final step: after modifying the checks’ configuration make sure to run `cmk -IIu switch2 ; cmk -R` to renew the inventory for switch2 and apply the changes. Do not forget to verify the running configuration by invoking ‘cmk -D switch2′:

Screenshot of 'cmk -D switch2' execution

23 January, 2015 12:04AM by mika

January 22, 2015

hackergotchi for Erich Schubert

Erich Schubert

Year 2014 in Review as Seen by a Trend Detection System

We ran our trend detection tool Signi-Trend (published at KDD 2014) on news articles collected for the year 2014. We removed the category of financial news, which is overrepresented in the data set. Below are the (described) results, from the top 50 trends (I will push the raw result to appspot if possible due to file limits).
I have highlighted the top 10 trends in bold, but otherwise ordered them chronologically.
Updated: due to an error in a regexp, I had filtered out too many stories. The new results use more articles.

January
2014-01-29: Obama's state of the union address
February
2014-02-07: Sochi Olympics gay rights protests
2014-02-08: Sochi Olympics first results
2014-02-19: Violence in Ukraine and Maidan in Kiev
2014-02-20: Wall street reaction to Facebook buying WhatsApp
2014-02-22: Yanukovich leaves Kiev
2014-02-28: Crimea crisis begins
March
2014-03-01: Crimea crisis escalates further
2014-03-02: NATO meeting on Crimea crisis
2014-03-04: Obama presents U.S. fiscal budget 2015 plan
2014-03-08: Malaysia Airlines MH-370 missing in South China Sea
2014-03-08: MH-370: many Chinese on board of missing airplane
2014-03-15: Crimean status referendum (upcoming)
2014-03-18: Crimea now considered part of Russia by Putin
2014-03-21: Russian stocks fall after U.S. sanctions.
April
2014-04-02: Chile quake and tsunami warning
2014-04-09: False positive? experience + views
2014-04-13: Pro-russian rebels in Ukraine's Sloviansk
2014-04-17: Russia-Ukraine crisis continues
2014-04-22: French deficit reduction plan pressure
2014-04-28: Soccer World Cup coverage: team lineups
May
2014-05-14: MERS reports in Florida, U.S.
2014-05-23: Russia feels sanctions impact
2014-05-25: EU elections
June
2014-06-06: World cup coverage
2014-06-13: Islamic state Camp Speicher massacre in Iraq
2014-06-14: Soccer world cup: Spain surprisingly destroyed by Netherlands
July
2014-07-05: Soccer world cup quarter finals
2014-07-17: Malaysian Airlines MH-17 shot down over Ukraine
2014-07-18: Russian blamed for 298 dead in airline downing
2014-07-19: Independent crash site investigation demanded
2014-07-20: Israel shelling Gaza causes 40+ casualties in a day
August
2014-08-07: Russia bans food imports from EU and U.S.
2014-08-08: Obama orders targeted air strikes in Iraq
2014-08-20: IS murders journalist James Foley, air strikes continue
2014-08-30: EU increases sanctions against Russia
September
2014-09-05: NATO summit with respect to IS and Ukraine conflict
2014-09-11: Scottish referendum upcoming - poll results are close
2014-09-23: U.N. on legality of U.S. air strikes in Syria against IS
2014-09-26: Star manager Bill Gross leaves Allianz/PIMCO for Janus
October
2014-10-22: Ottawa parliament shooting
2014-10-26: EU banking review
November
2014-11-05: U.S. Senate and governor elections
2014-11-12: Foreign exchange manipulation investigation results
2014-11-17: Japan recession
December
2014-12-11: CIA prisoner and U.S. torture centers revealed
2014-12-15: Sydney cafe hostage siege
2014-12-17: U.S. and Cuba relations improve unexpectedly
2014-12-18: Putin criticizes NATO, U.S., Kiev
2014-12-28: AirAsia flight QZ-8501 missing

As you can guess, we are really happy with this result - just like the result for 2013 it mentions (almost) all the key events.
There probably is one "false positive" there: 2014-04-09 has a lot of articles talking about "experience" and "views", but not all refer to the same topic (we did not do topic modeling yet).
There are also some events missing that we would have liked to appear; many of these just barely missed the top 50 but do appear in the top 100, such as the Sony cyberattack (#51) and the Ferguson riots on November 11 (#66).
You can also explore the results online in a snapshot.

22 January, 2015 07:00PM

hackergotchi for MJ Ray

MJ Ray

Outsourcing email to Google means SPF allows phishing?

I expect this is obvious to many people but bahumbug To Phish, or Not to Phish? just woke me up to the fact that if Google hosts your company email then its Sender Policy Framework might make other Google-sent emails look legitimate for your domain. When combined with the unsupportive support of the big free webmail hosts, is this another black mark against SPF?
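For concreteness, a domain that outsources its mail to Google typically publishes an SPF record of roughly this shape (an illustrative zone-file line, not any particular company's record):

example.com.   IN   TXT   "v=spf1 include:_spf.google.com ~all"

Since the include authorizes all of Google's outbound servers, mail sent through the same shared infrastructure by any other Google-hosted customer can look equally legitimate for SPF purposes, which is the loophole the linked post describes.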

22 January, 2015 03:57AM by mjr

January 21, 2015

Tomasz Buchert

Expired keys in Debian keyring

A new version of Stellarium was recently released (0.13.2), so I wanted to upload it to Debian unstable as I usually do. And so I did, but it was rejected without me even knowing, since I got no e-mail response from ftp-masters.

It turns out that my GPG key in the Debian keyring expired recently and so my upload was rightfully rejected. Not a big deal, actually, since you can easily move the expiration date (even after its expiration!); a session doing just that is sketched below. I did it already and the updated key has already propagated, but be aware that the Debian keyring does not synchronize with other keyservers! To update your key in Debian (if you are a Debian Developer or Maintainer) you must send your updated key to keyring.debian.org like this (you should replace my ID with your own):

$ gpg --keyserver keyring.debian.org --send-keys 24B17D29

Debian keyring is distributed as a standard DEB package and apparently it may take up to a month to have your updated key in Debian. It seems that I may be unable to upload packages for some time.
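For completeness, moving the expiration date itself is an interactive gpg session along these lines (shown with my key ID; inside the session, expire prompts for the new date and save writes the change, while key 1 followed by expire does the same for a subkey):

$ gpg --edit-key 24B17D29
gpg> expire
gpg> save

After that, send the updated key to keyring.debian.org as shown above.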

But the whole story got me thinking: am I the only one who forgot to update his key in the Debian keyring? To verify it I wrote the following snippet (works in Python 2 and 3!) which shows expired keys in the Debian keyring (well, in two of them). As a bonus, it also shows keys that have non-UTF8 characters in UIDs – see #738483 for more information.

#
# be sure to do "apt-get install python-gnupg"
#

import gnupg
import datetime

def check_keys(keyring, tab = ""):
    gpg = gnupg.GPG(keyring = keyring)
    gpg.decode_errors = 'replace' # see: https://bugs.debian.org/738483
    keys = gpg.list_keys()
    now = datetime.datetime.now()
    for key in keys:
        uids = key['uids']
        uid = uids[0]
        if key['expires'] != '':
            expire = datetime.datetime.fromtimestamp(int(key['expires']))
            diff = expire - now
            if diff.days < 0:
                print(u'{}EXPIRED: Key of {} expired {} days ago.'.format(tab, uid, -diff.days))
        mangled_uids = [ u for u in uids if u'\ufffd' in u ]
        if len(mangled_uids) > 0:
            print(u'{}MANGLED: Key of {} has some mangled uids: {}'.format(tab, uid, mangled_uids))

keyrings = [
    "/usr/share/keyrings/debian-keyring.gpg",
    "/usr/share/keyrings/debian-maintainers.gpg"
]

for keyring in keyrings:
    print(u"CHECKING {}".format(keyring))
    check_keys(keyring, tab = "    ")

I’m not going to show the output of this code, because it contains names and e-mail addresses which I really shouldn’t post. But you can run it yourself. You will see that there is a small group of people with expired keys (including me!). Interestingly, some keys expired a long time ago: there is one that expired more than 7 years ago!

The outcome of the story is: yes, you should have an expiration date on your key for safety reasons, but be careful - it can surprise you at the worst moment.

21 January, 2015 09:00PM

hackergotchi for Chris Lamb

Chris Lamb

Sprezzatura

Wolf Hall on Twitter et al:

He says, "Majesty, we were talking of Castiglione's book. You have found time to read it?"

"Indeed. He extrolls sprezzatura. The art of doing everything gracefully and well, without the appearance of effort. A quality princes should cultivate."

"Yes. But besides sprezzatura one must exhibit at all times a dignified public restraint..."

21 January, 2015 10:31AM

Enrico Zini

miniscreen

Playing with python, terminfo and command output

I am experimenting with showing progress on the terminal for a subcommand that is being run, showing what is happening without scrolling away the output of the main program, and I came up with this little toy. It shows the last X lines of a subcommand's output, then gets rid of everything after the command has ended.

Usability-wise, it feels like a tease to me: it looks like I'm being shown all sorts of information then they are taken away from me before I managed to make sense of them. However, I find it cute enough to share:

#!/usr/bin/env python3
#coding: utf-8
# Copyright 2015 Enrico Zini <enrico@enricozini.org>.  Licensed under the terms
# of the GNU General Public License, version 2 or any later version.

import argparse
import fcntl
import select
import curses
import contextlib
import subprocess
import os
import sys
import collections
import shlex
import shutil
import logging

def stream_output(proc):
    """
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    """
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # EOF on this pipe: pop all three parallel lists together so
                # their indices stay aligned, flushing any partial last line
                fds.pop(idx)
                leftover = bufs.pop(idx)
                buf_type = types.pop(idx)
                if len(leftover) != 0:
                    yield buf_type, leftover
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

@contextlib.contextmanager
def miniscreen(has_fancyterm, name, maxlines=3, silent=False):
    """
    Show the output of a process scrolling in a portion of the screen.

    has_fancyterm: true if the terminal supports fancy features; if false, just
    write lines to standard output

    name: name of the process being run, to use as a header

    maxlines: maximum height of the miniscreen

    silent: do nothing whatsoever, used to disable this without needing to
            change the code structure

    Usage:
        with miniscreen(True, "my process", 5) as print_line:
            for i in range(10):
                print_line(("stdout", "stderr")[i % 2], "Line #{}".format(i))
    """
    if not silent and has_fancyterm:
        # Discover all the terminal control sequences that we need
        output_normal = str(curses.tigetstr("sgr0"), "ascii")
        output_up = str(curses.tigetstr("cuu1"), "ascii")
        output_clreol = str(curses.tigetstr("el"), "ascii")
        cols, lines = shutil.get_terminal_size()
        output_width = cols

        fg_color = (curses.tigetstr("setaf") or
                    curses.tigetstr("setf") or "")
        sys.stdout.write(str(curses.tparm(fg_color, 6), "ascii"))

        output_lines = collections.deque(maxlen=maxlines)

        def print_lines():
            """
            Print the lines in our buffer, then move back to the beginning
            """
            sys.stdout.write("{} progress:".format(name))
            sys.stdout.write(output_clreol)
            for msg in output_lines:
                sys.stdout.write("\n")
                sys.stdout.write(msg)
                sys.stdout.write(output_clreol)
            sys.stdout.write(output_up * len(output_lines))
            sys.stdout.write("\r")

        try:
            print_lines()

            def _progress_line(type, line):
                """
                Print a new line to the miniscreen
                """
                # Add the new line to our output buffer
                msg = "{} {}".format("." if type == "stdout" else "!", line)
                if len(msg) > output_width - 4:
                    msg = msg[:output_width - 4] + "..."
                output_lines.append(msg)
                # Update the miniscreen
                print_lines()

            yield _progress_line

            # Clear the miniscreen by filling our ring buffer with empty lines
            # then printing them out
            for i in range(maxlines):
                output_lines.append("")
            print_lines()
        finally:
            sys.stdout.write(output_normal)
    elif not silent:
        def _progress_line(type, line):
            print("{}: {}".format(type, line))
        yield _progress_line
    else:
        def _progress_line(type, line):
            pass
        yield _progress_line

def run_command_fancy(name, cmd, env=None, logfd=None, fancy=True, debug=False):
    quoted_cmd = " ".join(shlex.quote(x) for x in cmd)
    log.info("%s running command %s", name, quoted_cmd)
    if logfd: print("runcmd:", quoted_cmd, file=logfd)

    # Run the script itself on an empty environment, so that what was
    # documented is exactly what was run
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)

    with miniscreen(fancy, name, silent=debug) as progress:
        stderr = []
        for type, val in stream_output(proc):
            if type == "stdout":
                val = val.decode("utf-8")
                if logfd: print("stdout:", val, file=logfd)
                log.debug("%s stdout: %s", name, val)
                progress(type, val)
            elif type == "stderr":
                val = val.decode("utf-8")
                if logfd: print("stderr:", val, file=logfd)
                stderr.append(val)
                log.debug("%s stderr: %s", name, val)
                progress(type, val)
            elif type == "result":
                if logfd: print("retval:", val, file=logfd)
                log.debug("%s retval: %d", name, val)
                retval = val

    if retval != 0:
        lastlines = min(len(stderr), 5)
        log.error("%s exited with code %s", name, retval)
        log.error("Last %d lines of standard error:", lastlines)
        for line in stderr[-lastlines:]:
            log.error("%s: %s", name, line)

    return retval


parser = argparse.ArgumentParser(description="run a command showing only a portion of its output")
parser.add_argument("--logfile", action="store", help="specify a file where the full execution log will be written")
parser.add_argument("--debug", action="store_true", help="debugging output on the terminal")
parser.add_argument("--verbose", action="store_true", help="verbose output on the terminal")
parser.add_argument("command", nargs="*", help="command to run")
args = parser.parse_args()

if args.debug:
    loglevel = logging.DEBUG
elif args.verbose:
    loglevel = logging.INFO
else:
    loglevel = logging.WARN
logging.basicConfig(level=loglevel, stream=sys.stderr)
log = logging.getLogger()

fancy = False
if not args.debug and sys.stdout.isatty():
    curses.setupterm()
    if curses.tigetnum("colors") > 0:
        fancy = True

if args.logfile:
    logfd = open("output.log", "wt")
else:
    logfd = None

# pass through the detected terminal capabilities and the debug flag
retval = run_command_fancy("miniscreen example", args.command, logfd=logfd, fancy=fancy, debug=args.debug)

sys.exit(retval)
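For example, saved as miniscreen and made executable, it can wrap a chatty command in a three-line scrolling window while keeping a full log (the command here is just an illustration; options intended for the wrapped command would need a -- separator so argparse leaves them alone):

./miniscreen --logfile full.log apt-get download firefox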

21 January, 2015 10:13AM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Moving to Jekyll

I’ve been meaning to move away from Movable Type for a while; they no longer provide the “Open Source” variant, I’ve had some issues with the commenting side of things (more the fault of spammers than Movable Type itself) and there are a few minor niggles that I wanted to resolve. Nothing has been particularly pressing me to move and I haven’t been blogging as much so while I’ve been keeping an eye open for a replacement I haven’t exerted a lot of energy into the process. I have a little bit of time at present so I asked around on IRC for suggestions. One was ikiwiki, which I use as part of helping maintain the SPI website (and think is fantastic for that), the other was Jekyll. Both are available as part of Debian Jessie.

Jekyll looked a bit fancier out of the box (I’m no web designer so pre-canned themes help me a lot), so I decided to spend some time investigating it a bit more. I’d found a Movable Type to ikiwiki converter which provided a starting point for exporting from the SQLite3 DB I was using for MT. Most of my posts are in markdown, the rest (mostly from my Blosxom days) are plain HTML, so there wasn’t any need to do any conversion on the actual content. A minor amount of poking convinced Jekyll to use the same URL format (permalink: /:year/:month/:title.html in the _config.yml did what I wanted) and I had to do a few bits of fix up for some images that had been uploaded into MT, but overall fairly simple stuff.
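For reference, that permalink setting is a single line in _config.yml, exactly as quoted above:

permalink: /:year/:month/:title.html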

Next I had to think about comments. My initial thought was to just ignore them for the moment; they weren’t really working on the MT install that well so it’s not a huge loss. I then decided I should at least see what the options were. Google+ has the ability to embed in your site, so I had a play with that. It worked well enough but I didn’t really want to force commenters into the Google ecosystem. Next up was Disqus, which I’ve seen used in various places. It seems to allow logins via various 3rd parties, can cope with threading and deals with the despamming. It was easy enough to integrate to play with, and while I was doing so I discovered that it could cope with importing comments. So I tweaked my conversion script to generate a WXR based file of the comments. This then imported easily into Disqus (and also I double checked that the export system worked).

I’m sure the use of a third party to handle comments will put some people off, but given the ability to export I’m confident if I really feel like dealing with despamming comments again at some point I can switch to something locally hosted. I do wish it didn’t require Javascript, but again it’s a trade off I’m willing to make at present.

Anyway. Thanks to Tollef for the pointer (and others who made various suggestions). Hopefully I haven’t broken (or produced a slew of “new” posts for) any of the feed readers pointed at my site (but you should update to use feed.xml rather than any of the others - I may remove them in the future once I see usage has died down).

(On the off chance it’s useful to someone else the conversion script I ended up with is available. There’s a built in Jekyll importer that may be a better move, but I liked ending up with a git repository containing a commit for each post.)

21 January, 2015 10:00AM

hackergotchi for Jo Shields

Jo Shields

mono-project.com Linux packages, January 2015 edition

The latest version of Mono has been released (actually, it happened a week ago, but it took me a while to get all sorts of exciting new features bug-checked and shipshape).

Stable packages

This release covers Mono 3.12, and MonoDevelop 5.7. These are built for all the same targets as last time, with a few caveats (MonoDevelop does not include F# or ASP.NET MVC 4 support). ARM packages will be added in a few weeks’ time, when I get the new ARM build farm working at Xamarin’s Boston office.

Ahead-of-time support

This probably seems silly since upstream Mono has included it for years, but Mono on Debian has never shipped with AOT’d mscorlib.dll or mcs.exe, for awkward package-management reasons. Mono 3.12 fixes this, and will AOT these assemblies – optimized for your computer – on installation. If you can suggest any other assemblies to add to the list, we now support a simple manifest structure so any assembly can be arbitrarily AOT’d on installation.

Goodbye Mozroots!

I am very pleased to announce that as of this release, Mono users on Linux no longer need to run “mozroots” to get SSL working. A new command, “cert-sync”, has been added to this release, which synchronizes the Mono SSL certificate store against your OS certificate store – and this tool has been integrated into the packaging system for all mono-project.com packages, so it is automatically used. Just make sure the ca-certificates-mono package is installed on Debian/Ubuntu (it’s always bundled on RPM-based) to take advantage! It should be installed on fresh installs by default. If you want to invoke the tool manually (e.g. you installed via make install, not packages) use

cert-sync /path/to/ca-bundle.crt

On Debian systems, that’s

cert-sync /etc/ssl/certs/ca-certificates.crt

and on Red Hat derivatives it’s

cert-sync /etc/pki/tls/certs/ca-bundle.crt

Your distribution might use a different path, if it’s not derived from one of those.

Windows installer back from the dead

Thanks to help from Alex Koeplinger, I’ve brought the Windows installer back from the dead. The last release on the website was for 3.2.3 (it’s actually not this version at all – it’s complicated…), so now the Windows installer has parity with the Linux and OSX versions. The Windows installer (should!) bundles everything the Mac version does – F#, PCL facades, IronWhatever, etc, along with Boehm and SGen builds of the Mono runtime done with Visual Studio 2013.

An EXPERIMENTAL OH MY GOD DON’T USE THIS IN PRODUCTION 64-bit installer is in the works, for when I have the time to try and make a 64-bit build of Gtk#.

21 January, 2015 01:26AM by directhex

Dimitri John Ledkov

Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available

I'm happy to announce that Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available for consumption.

These are 1.10.3 & 0.155 respectively.

This means that everyone should start porting their reports, tools, and scriptage to python3.
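As a quick smoke test, a minimal anonymous session now runs under Python 3 (a sketch; the consumer name is an arbitrary placeholder):

#!/usr/bin/python3
from launchpadlib.launchpad import Launchpad

# anonymous read-only access to the production Launchpad API
lp = Launchpad.login_anonymously('py3-smoke-test', 'production')
print(lp.distributions['ubuntu'].display_name)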

ubuntu-dev-tools has the library portion ported to python3, as I did not dare to switch individual scripts to python3 without thorough interactive testing. Please help out porting those and/or file bug reports against the python3 port. Feel free to subscribe me to the bug reports on launchpad.

For the time being, I believe some things will not be easy to port to python3 because of the elephant in the room - bzrlib. For some things like lp-shell, it should be easy to move away from bzrlib, as non-vcs things are used there. For other things the current suggestion is probably to fork out to the bzr binary or a python2 process. I ponder if a minimal usable python3-bzrlib wrapper around python2 bzrlib is possible to satisfy the needs of basic and common scripts.

On a side note, launchpadlib & lazr.restfulclient have out of the box proxy support enabled. This makes things like add-apt-repository work behind networks with such setup. I think a few people will be happy about that.

All of these goodies are available in Ubuntu 15.04 (Vivid Vervet) or Debian Experimental (and/or NEW queue).

21 January, 2015 12:06AM by Dimitri John Ledkov (noreply@blogger.com)

January 20, 2015

Jonathan Wiltshire

Never too late for bug-squashing

With over a hundred RC bugs still outstanding for Jessie, there’s never been a better time to host a bug-squashing party in your local area. Here’s how I do it.

  1. At home is fine, if you don’t mind guests. You don’t need to seek out a sponsor and borrow or hire office space. If there isn’t room for couch-surfers, the project can help towards travel and accommodation expenses. My address isn’t secret, but I still don’t announce it – it’s fine to share it only with the attendees once you know who they are.
  2. You need a good work area. There should be room for people to sit and work comfortably – a dining room table and chairs is ideal. It should be quiet and free from distractions. A local mirror is handy, but a good internet connection is essential.
  3. Hungry hackers eat lots of snacks. This past weekend saw five of us get through 15 litres of soft drinks, two loaves of bread, half a kilo of cheese, two litres of soup, 22 bags of crisps, 12 jam tarts, two pints of milk, two packs of chocolate cake bars, and a large bag of biscuits (and we went out for breakfast and supper). Make sure there is plenty available before your attendees arrive, along with a good supply of tea and coffee.
  4. Have a work plan. Pick a shortlist of RC bugs to suit attendees’ strengths, or work on a particular packaging group’s bugs, or have a theme, or something. Make sure there’s a common purpose and you don’t just end up being a bunch of people round a table.
  5. Be an exemplary host. As the host you’re allowed to squash fewer bugs and instead make sure your guests are comfortable, know where the bathroom is, aren’t going hungry, etc. It’s an acceptable trade-off. (The reverse is true: if you’re attending, be an exemplary guest – and don’t spend the party reading news sites.)

Now, go host a BSP of your own, and let’s release!


Never too late for bug-squashing is a post from: jwiltshire.org.uk | Flattr

20 January, 2015 10:20PM by Jon

Sven Hoexter

Heads up: possible changes in fonts-lyx

Today the super nice upstream developers of LyX reached out to me (and pelle@) as the former and still part-time lyx package maintainers to inform us of an ongoing discussion in http://www.lyx.org/trac/ticket/9229. The current approach to fixing this bug might result in a name change of all fonts shipped in fonts-lyx with the next LyX release.

Why is it relevant for people not using LyX?

For some historic reasons beyond my knowledge, the LyX project ships a bunch of math symbol fonts converted to ttf files. From a separate source package they moved to be part of the lyx source package and are currently delivered via the fonts-lyx package.

Over time a bunch of other packages picked this font package up as a dependency. Among them also rather popular packages like icedove, which results in a rather fancy popcon graph. Drawback as usual is that changes might have a visible impact in places where you do not expect them.

So if you've some clue about fonts, or depend on fonts-lyx in some way, you might want to follow that issue cited above and/or get in contact with the LyX developers.

If you've some spare time, feel also invited to contribute to the lyx packaging in Debian. It really deserves a lot more love than the little it gets today from the brave Nick Andrik, Per and myself.

20 January, 2015 08:02PM

hackergotchi for Daniel Pocock

Daniel Pocock

Quantifying the performance of the Microserver

In my earlier blog about choosing a storage controller, I mentioned that the Microserver's on-board AMD SB820M SATA controller doesn't quite let the SSDs perform at their best.

Just how bad is it?

I did run some tests with the fio benchmarking utility.
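The exact job file isn't reproduced here, but a random-write section of roughly this shape would produce output like the one below (a sketch: the 4k block size matches the roughly 4KB per IOP implied by the numbers, while the I/O engine and queue depth are my assumptions, not the exact parameters used):

[rand-write]
rw=randwrite
bs=4k
size=1024m
ioengine=libaio
iodepth=32
direct=1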

Let's have a look at those random writes; they simulate the workload of synchronous NFS write operations:

rand-write: (groupid=3, jobs=1): err= 0: pid=1979
  write: io=1024.0MB, bw=22621KB/s, iops=5655 , runt= 46355msec

Now compare it to the HP Z800 on my desk, it has the Crucial CT512MX100SSD1 on a built-in LSI SAS 1068E controller:

rand-write: (groupid=3, jobs=1): err= 0: pid=21103
  write: io=1024.0MB, bw=81002KB/s, iops=20250 , runt= 12945msec

and then there is the Thinkpad with OCZ-NOCTI mSATA SSD:

rand-write: (groupid=3, jobs=1): err= 0: pid=30185
  write: io=1024.0MB, bw=106088KB/s, iops=26522 , runt=  9884msec

That's right, the HP workstation is four times faster than the Microserver, but the Thinkpad whips both of them.

I don't know how much I can expect of the PCI bus in the Microserver but I suspect that any storage controller will help me get some gain here.

20 January, 2015 07:53PM by Daniel.Pocock

Sven Hoexter

python-ipcalc bumped from 0.3 to 1.1.3

I've helped a friend to get started with Debian packaging and he has now adopted python-ipcalc. Since I've no prior experience with packaging of Python modules and there were five years of upstream development in between, I've uploaded to experimental to give it some exposure.

So if you still use the python-ipcalc package, which is part of all current Debian releases and the upcoming jessie release, please check out the package from experimental. I think the only reverse dependency within Debian is sshfp, that one of course also requires some testing.
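If you want a quick way to exercise the package from experimental, something along these lines covers the basics (the method names are my reading of the upstream documentation; double-check them against 1.1.3, since the API may have moved between 0.3 and 1.1.3):

import ipcalc

net = ipcalc.Network('192.168.0.0/24')
print(net.size())                # number of addresses in the subnet
print('192.168.0.42' in net)     # membership test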

20 January, 2015 07:16PM

Raphael Geissert

Edit Debian, with iceweasel

Soon after publishing the chromium/chrome extension that allows you to edit Debian online, Moez Bouhlel sent a pull request to the extension's git repository: all the changes needed to make a firefox extension!

After another session of browser extensions discovery, I merged the commits and generated the xpi. So now you can go download the Debian online editing firefox extension and hack the world, the Debian world.

Install it and start contributing to Debian from your browser. There's no excuse now.

20 January, 2015 07:00AM by Raphael Geissert (noreply@blogger.com)

January 19, 2015

hackergotchi for Daniel Pocock

Daniel Pocock

jSMPP project update, 2.1.1 and 2.2.1 releases

The jSMPP project on Github stopped processing pull requests over a year ago and appeared to be needing some help.

I've recently started hosting it under https://github.com/opentelecoms-org/jsmpp and tried to merge some of the backlog of pull requests myself.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0. It introduces bug fixes only.
  • 2.2.1 introduces some new features and API changes and bigger bug fixes

The new versions are easily accessible for Maven users through the central repository service.
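For example, a dependency stanza of this shape should pull in the new release (the org.jsmpp coordinates reflect my understanding of the published artifacts; verify them on Maven Central):

<dependency>
    <groupId>org.jsmpp</groupId>
    <artifactId>jsmpp</artifactId>
    <version>2.2.1</version>
</dependency>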

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

19 January, 2015 09:29PM by Daniel.Pocock

Storage controllers for small Linux NFS networks

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the built-in controller in the Microserver. It is an AMD SB820M SATA controller. It is a bottleneck for the SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.
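To make the trade-off concrete, this behaviour corresponds to the standard sync export option (an illustrative /etc/exports line, not my actual configuration):

/srv/export  192.168.1.0/24(rw,sync,no_subtree_check)

With sync (the default in modern nfs-utils) the server must commit each write to stable storage before replying; async would reply immediately but risks data loss if the server crashes.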

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?

Observations

I looked at specs and documentation for various RAID controllers and identified some of the following challenges:

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.

19 January, 2015 01:59PM by Daniel.Pocock

January 18, 2015

Jonathan Wiltshire

Alcester BSP, day three

We have had a rather more successful weekend than I feared, as you can see from our log on the wiki page. Steve reproduced and wrote patches for several installer/bootloader bugs, and Neil and I spent significant time in a maze of twisty zope packages (we have managed to provide more diagnostics on the bug, even if we couldn’t resolve it). Ben and Adam have ploughed through a mixture of bugs and maintenance work.

I wrongly assumed we would only be able to touch a handful of bugs, since they are now mostly quite difficult, so it was rather pleasant to recap our progress this evening and see that it’s not all bad after all.


Alcester BSP, day three is a post from: jwiltshire.org.uk | Flattr

18 January, 2015 10:27PM by Jon

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2014/51-2015/03

I have to admit that I was a bit lazy about working on RC bugs in the last few weeks. Here's my not-so-stellar summary:

  • #729220 – pdl: "pdl: problems upgrading from wheezy due to triggers"
    investigate (unsuccessfully), later fixed by maintainer
  • #772868 – gxine: "gxine: Trigger cycle causes dpkg to fail processing"
    switch trigger from "interest" to "interest-noawait", upload to DELAYED/2 (see the sketch after this list)
  • #774584 – rtpproxy: "rtpproxy: Deamon does not start as init script points to wrong executable path"
    adjust path in init script, upload to DELAYED/2
  • #774791 – src:xine-ui: "xine-ui: Creates dpkg trigger cycle via libxine2-ffmpeg, libxine2-misc-plugins or libxine2-x"
    add trigger patch from Michael Gilbert, upload to DELAYED/2
  • #774862 – ciderwebmail: "ciderwebmail: unhandled symlink to directory conversion: /usr/share/ciderwebmail/root/static/images/mimeicons"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion (pkg-perl)
  • #774867 – lirc-x: "lirc-x: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion, upload to DELAYED/2 (likewise sketched after this list)
  • #775640 – src:libarchive-zip-perl: "libarchive-zip-perl: FTBFS in jessie: Tests failures"
    start to investigate (pkg-perl)
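
For the trigger-cycle fixes above, the change boils down to a single word in the package's triggers control file. A generic sketch, not the actual gxine/xine-ui diff, with a made-up path:

# debian/triggers
# 'interest' makes dpkg await trigger processing, which can deadlock
# when packages trigger each other; 'interest-noawait' breaks the cycle.
interest-noawait /usr/lib/example/plugins

The symlink_to_dir conversions are handled by dpkg-maintscript-helper, driven by a debian/<package>.maintscript file along these lines (package names and version are illustrative):

# debian/example.maintscript
symlink_to_dir /usr/share/doc/example example-data 1.0-1~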

18 January, 2015 09:41PM

Mark Brown

Heating the Internet of Things

Internet of Things seems to be trendy these days: people like the shiny apps for controlling things, and typically there are claims that the devices will perform better than their predecessors by offloading things to the cloud. But this makes some people worry about potential security issues, and it's not always clear that the internet usage is actually delivering benefits over something local. One of the more widely deployed applications is smart thermostats for central heating, which is something I've been playing with. I'm using Tado; there are also at least Nest and Hive, who do similar things, all relying on being connected to the internet for operation.

The main thing I've noticed has been that the temperature regulation in my flat is better: my previous thermostat allowed the temperature to vary by a couple of degrees around the target temperature in winter, which got noticeable, while with this one the temperature generally seems to vary by a fraction of a degree at most. That does use the internet connection to get the temperature outside, though I'm fairly sure that most of the improvement is just down to a better algorithm (the thermostat monitors how quickly the flat heats up and uses this to decide when to turn off, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down) and performance would still be substantially improved without it.

The other thing that these systems deliver, which does benefit much more from the internet connection, is that it's easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it's not needed – you can do it remotely, and you can turn the heating back on without being in the flat, so you don't need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don't need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention or visible difference on my part. This is particularly attractive for me given that I work from home – I can't easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would otherwise be on a lot of the time. Tado and Nest will to varying extents try to do this automatically; I don't know about Hive. The Tado one at least works very well; I can't speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

18 January, 2015 09:23PM by Mark Brown

hackergotchi for EvolvisForge blog

EvolvisForge blog

Debian/m68k hacking weekend cleanup

OK, time to clean up ↳ tarent so people can work again tomorrow.

Not much to clean though (the participants were nice and cleaned up after themselves ☺), so it’s mostly putting stuff back to where it belongs. Oh, and drinking more of the cool Belgian beer Geert (Linux upstream) brought ☻

We were productive, reporting and fixing kernel bugs, fixing hardware, swapping and partitioning discs, upgrading software, getting buildds (mostly Amiga) back to work, trying X11 (kdrive) on a bare metal Atari Falcon (and finding a window manager that works with it), etc. – I hope someone else writes a report; for now we have a photo and a screenshot (made with trusty xwd). Watch the debian-68k mailing list archives for things to come.

I think that, issues with electric cars aside, everyone liked the food places too ;-)

18 January, 2015 04:16PM by Thorsten Glaser

Andreas Metzler

Another new toy

Given that snow is still a bit sparse for snowboarding and the weather could be improved on, I have made myself a late Christmas present: a Torggler TS 120 Tourenrodel Spezial.

It is a rather sporty rodel (Torggler TS 120 Tourenrodel Spezial 2014/15, 9 kg weight, with fast (non-stainless) "racing rails" and a 22° angle on the runners) but not a competition model. I wish I had bought this years ago. It is a lot more comfortable than a classic sled ("Davoser Schlitten"), since one sits in the sled instead of on top of it, somewhat like in a hammock. Being able to steer without putting a foot into the snow has the nice side effect that the snow stays on the ground instead of ending up in my face. Obviously it is also faster, which is a huge improvement even for recreational riding, since it makes the difference between riding the sledge and pulling it on flattish stretches. Strongly recommended.

FWIW I ordered this via rodelfuehrer.de (they started with a guidebook of luge tracks, which translates to "Rodelführer"), where I would happily order again.

18 January, 2015 03:35PM by Andreas Metzler

hackergotchi for Chris Lamb

Chris Lamb

Adjusting a backing track with SoX

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems:

  • The recording was made at A=442, rather than the more standard A=440.
  • The tempi of the movements were not to my taste, being either too fast or too slow.

SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be given a dimensionless "cent" value (i.e. hundredths of a semitone) rather than the 442Hz and 440Hz reference frequencies.

First, we calculate the cent difference with:

1200 × log₂(442/440) ≈ 7.85 cents
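
(To double-check that figure, not part of the original calculation, any tool with a base-2 logarithm will do:)

$ python3 -c 'from math import log; print(round(1200 * log(442/440, 2), 2))'
7.85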

Next, we rip the material from the CD:

$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]

And finally we adjust the tempo and pitch:

$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00 # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95 # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01 # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03 # Too slow!

(I'm converting to MP3 at the same time, as it'll be more convenient on my phone.)

18 January, 2015 12:28PM

Ian Campbell

Using Grub 2 as a bootloader for Xen PV guests on Debian Jessie

I recently wrote a blog post for work on using Grub 2 as a Xen PV bootloader. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here, I'll just briefly cover the parts of interest to Debian users (if you want the full background, and the material on building grub from source etc., see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests. Then, in your guest configuration, write either (depending on whether you want a 32- or 64-bit PV guest):

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ... line; also omit ramdisk = ... and any command-line related settings such as root = ..., extra = ... or cmdline = ...), and your guests will boot using Grub 2, much like on native.
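
For example, a complete 64-bit guest configuration might then look like the following sketch; apart from the kernel path, every value here is made up:

# /etc/xen/example.cfg
name   = "example"
memory = 1024
vcpus  = 2
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
disk   = [ "phy:/dev/vg0/example,xvda,w" ]
vif    = [ "bridge=xenbr0" ]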

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host; these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian, or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times). It is therefore grub-xen that should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).

18 January, 2015 09:23AM

hackergotchi for Guido Günther

Guido Günther

whatmaps 0.0.9

I have released whatmaps 0.0.9, a tool to check which processes map shared objects of a certain package. It can integrate with apt to automatically restart services after a security upgrade.
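
Basic usage is to point it at one or more packages. The invocation below is sketched from the description above rather than from the manual, so check whatmaps --help for the authoritative syntax:

# whatmaps libssl1.0.0

This should list the processes currently mapping shared objects shipped by the given package.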

This release fixes the integration with recent systemd (as in Debian Jessie), makes logging more consistent and eases integration into downstream distributions. It's available in Debian Sid and Jessie and will show up in Wheezy-backports soon.

This blog is flattr enabled.

18 January, 2015 09:17AM

hackergotchi for Rogério Brito

Rogério Brito

Uploading SICP to Youtube

Intro

I am not alone in considering Harold Abelson and Gerald Jay Sussman's recorded lectures, based on their book "Structure and Interpretation of Computer Programs", a masterpiece.

There are many things to like about the content of the lectures, beginning with some pearls of wisdom about the craft of writing software (even though this is not really a "software engineering" book), the clarity with which the concepts are described, the Freedom-friendly attitude of the authors regarding the material that they produced, the breadth of the subjects covered, and much more.

The videos, their length, and splitting them

The course consists of 20 video files and they are all uploaded on Youtube already.

There is one thing, though: while the lectures are naturally divided into segments (the instructors took a break after every 30 minutes or so of lecturing), the videos corresponding to each lecture have all the segments concatenated.

To make them easier to watch (so that I can put a few of the lectures on a mobile device, or avoid fast-forwarding long videos from my NAS when I am watching them on my TV, among other reasons), I decided to sit down, take notes for each video of where the breaks were, and write a simple Python script to help split the videos into segments and then re-encode them.
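
The script itself is not included here; a minimal sketch of the approach, using ffmpeg and entirely made-up break times, could look like this:

#!/usr/bin/env python
# split.py -- cut each lecture at its break points and re-encode the
# segments with ffmpeg.  The timestamps below are illustrative only.
import subprocess

BREAKS = {
    "lecture-1a.avi": [("00:00:00", "00:29:30"), ("00:29:30", "00:57:40")],
}

for infile, segments in BREAKS.items():
    base = infile.rsplit(".", 1)[0]
    for i, (start, end) in enumerate(segments, 1):
        out = "{0}-part{1}.avi".format(base, i)
        # -ss/-to select the segment; not passing '-c copy' forces a
        # re-encode, which keeps the cut points frame-accurate
        subprocess.check_call(
            ["ffmpeg", "-i", infile, "-ss", start, "-to", end, out])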

I decided not to take the videos from Youtube to perform my splitting activities, but, instead, to operate on one of the "sources" that the authors once had in their homepage (videos encoded in DivX and audio in MP3). The videos are still available as a torrent file (with a magnet link for the hash 650704e4439d7857a33fe4e32bcfdc2cb1db34db), with some very good souls still seeding it (I can seed it too, if desired). Alas, I have not found a source for the higher quality MPEG1 videos, but I think that the videos are legible enough to avoid bothering with a larger download.

I soon found out that there are some beneficial side effects to splitting the videos, like not having to edit/equalize the entire audio of a video when only one segment is bad (which is understandable, as these lectures were recorded almost 30 years ago and technology was not as advanced as it is today).

So, since I already have the split videos lying around here, I figured that other people may want to download them too, as they may be more convenient to watch (say, during commutes or whatever/whenever/wherever best suits them).

Of course, uploading all the videos is going to take a while and I would only do it if people would really benefit from them. If you think so, let me know here (or if you know someone who would like the split version of the videos, spread the word).

18 January, 2015 01:52AM

January 17, 2015

Jonathan Wiltshire

Alcester BSP, day two

Neil has abandoned his reputation as an RM machine, and instead concentrated on making the delayed queue as long as he can. I’m reliably informed that it’s now at a 3-year high. Steve is delighted that his reining-in work is finally having an effect.


Alcester BSP, day two is a post from: jwiltshire.org.uk | Flattr

17 January, 2015 11:02PM by Jon

Tim Retout

CPAN PR Challenge - January - IO-Digest

I signed up to the CPAN Pull Request Challenge - apparently I'm entrant 170 of a few hundred.

My assigned dist for January was IO-Digest - this seems a fairly stable module. To get the ball rolling, I fixed the README, but this was somehow unsatisfying. :)

To follow up, I added Travis-CI support, with a view to validating the other open pull request - but that one looks likely to be a platform-specific problem.

Then I extended the Travis file to generate coverage reports, and separately realised the docs weren't quite complete, so fixed this and added a test.
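
For the curious, the coverage setup follows the usual pattern for Perl dists on Travis-CI. A minimal sketch (assuming a Makefile.PL-based dist and the commonly used Devel::Cover toolchain, not the exact file I submitted):

# .travis.yml
language: perl
perl:
  - "5.20"
before_install:
  - cpanm --notest Devel::Cover::Report::Coveralls
script:
  - perl Makefile.PL && make
  - cover -test -report coveralls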

Two of these have already been merged by the author, who was very responsive.

Part of me worries that Github is a centralized, proprietary platform that we now trust most of our software source code to. But activities such as this are surely a good thing - how much harder would it be to co-ordinate 300 volunteers to submit patches in a distributed fashion? I suppose you could do something similar with the list of Debian source packages and metadata about the upstream VCS, say...

17 January, 2015 10:01PM

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Updating a profile in Debian’s apparmor-profiles-extra package

I have gotten my first patch to the Pidgin AppArmor profile accepted upstream. One of my mentors thus suggested that I update the profile in the Debian package myself. This is fairly easy and simply requires knowing how to use Git.

If you want to get write access to the apparmor-profiles-extra package in Debian, you first need to request access to the Collaborative Maintenance Alioth project, collab-maint in short. This also requires setting up an account on Alioth.

Once all is set up, one can export the apparmor-profiles-extra Git repository.
If you simply want to submit a patch, it’s sufficient to clone this repository anonymously.
Otherwise, one should use the “--auth” parameter with “debcheckout”. The “debcheckout” command is part of the “devscripts” package:

debcheckout --auth apparmor-profiles-extra

Go into the apparmor-profiles-extra folder and create a new working branch:

git branch workingtitle
git checkout workingtitle

Get the latest version of the profiles from upstream. The profiles can then be edited in the “profiles” directory.

Test.

The debian/README.Debian file should be edited: add a note about the relevant changes one just imported from upstream.

Then, one could either push the branch to collab-maint:

git commit -a
git push origin workingtitle

or simply submit a patch to the Debian Bug Tracking System against the apparmor-profiles-extra package.
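
For the patch route, something along these lines works (a sketch using standard tooling; the patch filename is whatever git format-patch produces):

git format-patch origin/master
reportbug --attach=0001-workingtitle.patch apparmor-profiles-extra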

The Debian AppArmor packaging team mailing list will receive a notification of this commit. This way, commits can be peer reviewed and merged by the team.

17 January, 2015 03:00PM by u

hackergotchi for Guido Günther

Guido Günther

krb5-auth-dialog 3.15.4

To keep up with GNOME's schedule I've released krb5-auth-dialog 3.15.4. Besides updated translations, the changes in 3.15.1 and 3.15.4 include the replacement of deprecated GTK+ widgets, minor UI cleanups, bug fixes, and a header bar fix that makes us use header bar buttons only if the desktop environment has them enabled:

[Screenshots: krb5-auth-dialog with and without the header bar]

This makes krb5-auth-dialog better integrated into other desktops again, thanks to mclasen's awesome work.

This blog is flattr enabled.

17 January, 2015 09:42AM

Jonathan Wiltshire

Alcester BSP, day one

Perhaps I should say evening one, since we didn’t get going until nine or so. I have mostly been processing unblocks – 13 in all. We have a delayed upload and a downgrade in the pipeline, plus a tested diff for Django. Predictably, Neil had the one and only removal request so far.


Alcester BSP, day one is a post from: jwiltshire.org.uk | Flattr

17 January, 2015 12:25AM by Jon