January 26, 2015

NOKUBI Takatsugu

Weak SSH public keys in GitHub

A presentation titled "Attacking against 5 million SSH public keys – 偶然にも500万個のSSH公開鍵を手に入れた俺たちは" has been published; it was given as a lightning talk at the "Edomae security seminar" on Jan 24, 2015.

He grabbed SSH public keys via the GitHub API (https://github.com/${user}.key); the API is deprecated, but not yet shut down.

He found short (<= 512 bit) DSA/RSA keys, and was able to factor a 256-bit RSA key in 3 seconds.

He also reported that there are 208 weak SSH keys generated by Debian/Ubuntu (CVE-2008-0166). This had already been announced by GitHub.

On the other hand, those keys could not be factored with fastgcd. This means that almost all SSH keys on GitHub show no bias from the random number generator implementations used to create them, which is good news.
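
For context, fastgcd looks for pairs of RSA moduli that share a prime factor; any such pair can be factored immediately. A naive, quadratic sketch of the same idea in Python (with toy moduli, purely for illustration; the real tool uses a product tree to scale to millions of keys):

from math import gcd

# Toy RSA moduli, not real keys: 3233 = 53*61, 3599 = 59*61, 3127 = 53*59
moduli = [3233, 3599, 3127]

# Naive pairwise batch GCD: a GCD > 1 reveals a shared prime factor,
# which immediately factors both moduli.
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            print("moduli #%d and #%d share the prime %d" % (i, j, g))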

26 January, 2015 05:15AM by knok

January 25, 2015

Richard Hartmann

KDE battery monitor

Dear lazyweb,

using a ThinkPad X1 Carbon with Debian unstable and KDE 4.14.2, I have not had battery warnings for a few weeks, now.

The battery status can be read out via acpi -V as well as via the KDE widget. Hibernation via systemctl hibernate works as well.

What does not work is the warning when my battery is low, or automagic hibernation when shutting the lid or when the battery level is critical.

From what I gather, something in the communication between upower and KDE broke down, but I can't find what it is. I have also been told that Cinnamon is affected, so this seems to be a more general problem.

Sadly, neither I nor anyone else affected has been able to fix this.

So, dear lazyweb, please help.

In loosely related news, this old status is still valid. UMTS is stable-ish now but even though I saved the SIM's PIN, KDE always displays a "SIM PIN unlock request" prompt after booting or hibernating. Once I enter that PIN, systemd tells me that a system policy prevents the change and wants my user password. If anyone knows how to get rid of that, I would also appreciate any pointers.

25 January, 2015 09:11PM by Richard 'RichiH' Hartmann

Chris Lamb

Recent Redis hacking

I've done a bunch of hacking on the Redis key/value database server recently:

  • Lua-based maxmemory eviction scripts. (#2319)

    (This changeset was sponsored by an anonymous client.)

    Redis typically stores the entire data set in memory, using the operating system's virtual memory facilities if required. However, one can use Redis more like a cache or ring buffer by enabling a "maxmemory policy" where a RAM limit is set and then data is evicted when required based on a predefined algorithm.

    This change enables entirely custom control over exactly what data to remove from RAM when this maxmemory limit is reached. This is an advantage over the existing policies of, say, removing entire keys based on the existing TTL, Least Recently Used (LRU) or random eviction strategies as it permits bespoke behaviours based on application-specific requirements, crucially without maintaining a private fork of Redis.

    As an example behaviour of what is possible with this change, to remove the lowest ranked member of an arbitrary sorted set, you could load the following eviction policy script:

    local bestkey = nil
    local bestval = 0
    
    for s = 1, 5 do
       local key = redis.call("RANDOMKEY")
       local type_ = redis.call("TYPE", key)
    
       if type_.ok == "zset"
       then
           local tail = redis.call("ZRANGE", key, "0", "0", "WITHSCORES")
           local val = tonumber(tail[2])
           if not bestkey or val < bestval
           then
               bestkey = key
               bestval = val
           end
       end
    end
    
    if not bestkey
    then
        -- We couldn't find anything to remove, so return an error
        return false
    end
    
    redis.call("ZREMRANGEBYRANK", bestkey, "0", "0")
    return true
    
  • TCP_FASTOPEN support. (#2307)

    The aim of TCP_FASTOPEN is to eliminate one roundtrip from a TCP conversation by allowing data to be included as part of the SYN segment that initiates the connection. (More info.) A socket-level sketch of enabling this option follows at the end of this list.

  • Support infinitely repeating commands in redis-cli. (#2297)

  • Add --failfast option to testsuite runner. (#2290)

  • Add a -q (quiet) argument to redis-cli. (#2305)

  • Making some Redis Sentinel defaults a little saner. (#2292)
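
As a socket-level illustration of the TCP_FASTOPEN item above (a minimal Python sketch for Linux, not the Redis implementation itself, which is written in C; it assumes Python 3.6+ for the TCP_FASTOPEN constant and a kernel with server-side TFO enabled via net.ipv4.tcp_fastopen):

import socket

# Enable TCP Fast Open on a listening socket before listen().
# The value 16 is the maximum number of pending TFO requests to queue.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
sock.bind(("0.0.0.0", 6379))
sock.listen(128)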


I also made the following changes to the Debian packaging:

  • Add run-parts(8) directories to be executed at various points in the daemon's lifecycle. (e427f8)

    This is especially useful for loading Lua scripts as they are not persisted across restarts (see the sketch after this list).

  • Split out Redis Sentinel into its own package. (#775414, 39f642)

    This makes it possible to run Sentinel sanely on Debian systems without bespoke scripts, etc.

  • Ensure /etc/init.d/redis-server start idempotency with --oknodo (60b7dd)

    Idempotency in initscripts is especially important given the rise of configuration management systems.

  • Uploaded 3.0.0 RC2 to Debian experimental. (37ac55)

  • Re-enabled the testsuite. (7b9ed1)
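
As an illustration of the run-parts hooks mentioned above: since Lua scripts are not persisted by the server, a hook run after each start could simply re-register them. A minimal sketch using the redis-py client (the script path is hypothetical):

import redis

# Re-load a Lua script after a redis-server restart; scripts cached via
# SCRIPT LOAD live only in memory and are lost when the daemon stops.
r = redis.StrictRedis(host="localhost", port=6379)
with open("/etc/redis/eviction.lua") as f:
    sha = r.script_load(f.read())
print("script cached under SHA1", sha)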

25 January, 2015 08:52PM

Dirk Eddelbuettel

RcppArmadillo 0.4.600.4.0

Conrad put up a maintenance release 4.600.4 of Armadillo a few days ago. As in the past, we tested this with a number of pre-releases and test builds against the now over one hundred CRAN dependents of our RcppArmadillo package. The tests passed fine as usual, and results are as always in the rcpp-logs repository.

Changes are summarized below based on the NEWS.Rd file.

Changes in RcppArmadillo version 0.4.600.4.0 (2015-01-23)

  • Upgraded to Armadillo release Version 4.600.4 (still "Off The Reservation")

    • Speedups in the transpose operation

    • Small bug fixes

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 January, 2015 08:19PM

Jonathan Dowland

Frontier: First Encounters

Cobra mk. 3

Four years ago, whilst looking for something unrelated, I stumbled across Tom Morton's port of "Frontier: Elite II" for the Atari to i386/OpenGL. This took me right back to playing Frontier on my Amiga in the mid-nineties. I spent a bit of time replaying Frontier and its sequel, First Encounters, for which there exists an interesting family of community-written game engines based on a reverse-engineering of the original DOS release.

I made some scrappy notes about engines, patches etc. at the time, which are on my frontier page.

With the recent release of Elite: Dangerous, I thought I'd pick up where I left off in 2010 and see if I could get the Thargoid ship. I'm nowhere near yet, but I've spent some time trying to maximize income during the game's initial Soholian Fever period. My record in a JJFFE-derived engine (and winning the Wiccan Ware race during the same period) is currently £727,800. Can you do better?

25 January, 2015 01:18PM

Joey Hess

making propellor safer with GADTs and type families

Since July, I have been aware of an ugly problem with propellor. Certain propellor configurations could have a bug. I've tried to solve the problem at least a half-dozen times without success; it's eaten several weekends.

Today I finally managed to fix propellor so it's impossible to write code that has the bug, bending the Haskell type checker to my will with the power of GADTs and type-level functions.

the bug

Code with the bug looked innocuous enough. Something like this:

foo :: Property
foo = property "foo" $
    unlessM (liftIO $ doesFileExist "/etc/foo") $ do
        bar <- liftIO $ readFile "/etc/foo.template"
        ensureProperty $ setupFoo bar

The problem comes about because some properties in propellor have Info associated with them. This is used by propellor to introspect over the properties of a host, and do things like set up DNS, or decrypt private data used by the property.

At the same time, it's useful to let a Property internally decide to run some other Property. In the example above, that's the ensureProperty line, and the setupFoo Property is run only sometimes, and is passed data that is read from the filesystem.

This makes it very hard, indeed probably impossible for Propellor to look inside the monad, realize that setupFoo is being used, and add its Info to the host.

Probably, setupFoo doesn't have Info associated with it -- most properties do not. But it's hard to tell, when writing such a Property, whether it's safe to use ensureProperty. And worse, setupFoo could later be changed to have Info.

Now, in most languages, once this problem was noticed, the solution would probably be to make ensureProperty notice when it's called on a Property that has Info, and print a warning message. That's Good Enough in a sense.

But it also really stinks as a solution. It means that building propellor isn't good enough to know you have a working system; you have to let it run on each host, and watch out for warnings. Ugh, no!

the solution

This screams for GADTs. (Well, it did once I learned what GADTs are and what they can do.)

With GADTs, Property NoInfo and Property HasInfo can be separate data types. Most functions will work on either type (Property i) but ensureProperty can be limited to only accept a Property NoInfo.

data Property i where
    IProperty :: Desc -> ... -> Info -> Property HasInfo
    SProperty :: Desc -> ... -> Property NoInfo

data HasInfo
data NoInfo

ensureProperty :: Property NoInfo -> Propellor Result

Then the type checker can detect the bug, and refuse to compile it.

Yay!

Except ...

Property combinators

There are a lot of Property combinators in propellor. These combine two or more properties in various ways. The most basic one is requires, which only runs the first Property after the second one has successfully been met.

So, what's its type when used with the GADT Property?

requires :: Property i1 -> Property i2 -> Property ???

It seemed I needed some kind of type class, to vary the return type.

class Combine x y r where
    requires :: x -> y -> r

Now I was able to write 4 instances of Combines, for each combination of 2 Properties with HasInfo or NoInfo.

It type checked. But, type inference was busted. A simple expression like

foo `requires` bar

blew up:

   No instance for (Requires (Property HasInfo) (Property HasInfo) r0)
      arising from a use of `requires'
    The type variable `r0' is ambiguous
    Possible fix: add a type signature that fixes these type variable(s)
    Note: there is a potential instance available:
      instance Requires
                 (Property HasInfo) (Property HasInfo) (Property HasInfo)
        -- Defined at Propellor/Types.hs:167:10

To avoid that, it needed ":: Property HasInfo" appended -- I didn't want the user to need to write that.

I got stuck here for a long time, well over a month.

type level programming

Finally today I realized that I could fix this with a little type-level programming.

class Combine x y where
    requires :: x -> y -> CombinedType x y

Here CombinedType is a type-level function, that calculates the type that should be used for a combination of types x and y. This turns out to be really easy to do, once you get your head around type level functions.

type family CInfo x y
type instance CInfo HasInfo HasInfo = HasInfo
type instance CInfo HasInfo NoInfo = HasInfo
type instance CInfo NoInfo HasInfo = HasInfo
type instance CInfo NoInfo NoInfo = NoInfo
type family CombinedType x y
type instance CombinedType (Property x) (Property y) = Property (CInfo x y)

And, with that change, type inference worked again! \o/

(Bonus: I added some more instances of CombinedType for combining things like RevertableProperties, so propellor's property combinators got more powerful too.)

Then I just had to make a massive pass over all of Propellor, fixing the types of each Property to be Property NoInfo or Property HasInfo. I frequently picked the wrong one, but the type checker was able to detect and tell me when I did.

A few of the type signatures got slightly complicated, to provide the type checker with sufficient proof to do its thing...

before :: (IsProp x, Combines y x, IsProp (CombinedType y x)) => x -> y -> CombinedType y x
before x y = (y `requires` x) `describe` (propertyDesc x)

onChange
    :: (Combines (Property x) (Property y))
    => Property x
    -> Property y
    -> CombinedType (Property x) (Property y)
onChange = -- 6 lines of code omitted

fallback :: (Combines (Property p1) (Property p2)) => Property p1 -> Property p2 -> Property (CInfo p1 p2)
fallback = -- 4 lines of code omitted

.. This mostly happened in property combinators, which is an acceptable tradeoff, when you consider that the type checker is now being used to prove that propellor can't have this bug.

Mostly, things went just fine. The only other annoying thing was that some things use a [Property], and since a Haskell list can only contain a single type, while Property HasInfo and Property NoInfo are two different types, that needed to be dealt with. Happily, I was able to extend propellor's existing (&) and (!) operators to work in this situation, so a list can be constructed of properties of several different types:

propertyList "foos" $ props
    & foo
    & foobar
    ! oldfoo    

conclusion

The resulting 4000 lines of changes will be in the next release of propellor. Just as soon as I test that it always generates the same Info as before, and perhaps works when I run it. (eep)

These uses of GADTs and type families are not new; this is merely the first time I used them. It's another Haskell leveling up for me.

Anytime you can identify a class of bugs that can impact a complicated code base, and rework the code base to completely avoid that class of bugs, is a time to celebrate!

25 January, 2015 03:54AM

January 24, 2015

Daniel Pocock

Get your Github issues as an iCalendar feed

I've just whipped up a Python script that renders Github issue lists from your favourite projects as an iCalendar feed.

The project is called github-icalendar. It uses Python Flask to expose the iCalendar feed over HTTP.

It is really easy to get up and running. All the dependencies are available on a modern Linux distribution, for example:

$ sudo apt-get install python-yaml python-icalendar python-flask python-pygithub

Just create an API token in Github and put it into a configuration file with a list of your repositories like this:

api_token: 6b36b3d7579d06c9f8e88bc6fb33864e4765e5fac4a3c2fd1bc33aad
bind_address: ::0
bind_port: 5000
repositories:
- repository: your-user-name/your-project
- repository: your-user-name/another-project

Run it from the shell:

$ ./github_icalendar/main.py github-ics.cfg

and connect to it with your favourite iCalendar client.

Consolidating issue lists from Bugzilla, Github, Debian BTS and other sources

A single iCalendar client can usually support multiple sources and thereby consolidate lists of issues from multiple bug trackers.

This can be much more powerful than combining RSS bug feeds because iCalendar has built-in support for concepts such as priority and deadline. The client can use these to help you identify the most critical issues across all your projects, no matter which bug tracker they use.
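
As a sketch of how an issue could carry that information, the python-icalendar package (already listed among the dependencies above) can emit a VTODO with a priority and a deadline; the summary text and dates below are made up:

from datetime import datetime, timedelta
from icalendar import Calendar, Todo

# Illustrative only: map a bug-tracker issue onto an iCalendar VTODO.
cal = Calendar()
todo = Todo()
todo.add('summary', 'Fix crash on startup (#123)')
todo.add('priority', 1)                              # 1 = highest priority
todo.add('due', datetime.now() + timedelta(days=7))  # the deadline
cal.add_component(todo)
print(cal.to_ical().decode())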

Bugzilla bugtrackers already expose iCalendar feeds directly, just look for the iCalendar link at the bottom of any search results page. Here is an example URL from the Mozilla instance of Bugzilla.

The Ultimate Debian Database consolidates information from the Debian and Ubuntu universe and can already export it as an RSS feed; there is discussion about extending that to an iCalendar feed too.

Further possibilities

  • Prioritizing the issues in Github and mapping these priorities to iCalendar priorities
  • Creating tags in Github that allow issues to be ignored/excluded from the feed (e.g. excluding wishlist items)
  • Creating summary entries instead of listing all the issues, e.g. a single task entry with the title Fix 2 critical bugs for project foo

Screenshots

The screenshots below are based on the issue list of the Lumicall secure SIP phone for Android.

Screenshot - Mozilla Thunderbird/Lightning (Icedove/Iceowl-extension on Debian)

24 January, 2015 11:07PM by Daniel.Pocock

Dirk Eddelbuettel

Rcpp 0.11.4

A new release 0.11.4 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package will be uploaded in due course.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 323 packages on CRAN depend on Rcpp for making analyses go faster and further; BioConductor adds another 41 packages, and casual searches on GitHub suggest dozens more.

This release once again adds a large number of small bug fixes, polishes and enhancements. And like last time, these changes were made by a group of seven different contributors (counting code commits) plus three more providing concrete suggestions. This shows that Rcpp development and maintenance rests on a large number of (broad) shoulders.

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.11.4 (2015-01-20)

  • Changes in Rcpp API:

    • The ListOf<T> class gains the .attr and .names methods common to other Rcpp vectors.

    • The [dpq]nbinom_mu() scalar functions are now available via the R:: namespace when R 3.1.2 or newer is used.

    • Add an additional test for AIX before attempting to include execinfo.h.

    • Rcpp::stop now supports improved printf-like syntax using the small tinyformat header-only library (following a similar implementation in Rcpp11)

    • Pairlist objects are now protected via an additional Shield<> as suggested by Martin Morgan on the rcpp-devel list.

    • Sorting is now prohibited at compile time for objects of type List, RawVector and ExpressionVector.

    • Vectors now have a Vector::const_iterator that is 'const correct' thanks to fix by Romain following a bug report in rcpp-devel by Martyn Plummer.

    • The mean() sugar function now uses a more robust two-pass method, and new unit tests for mean() were added at the same time.

    • The mean() and var() functions now support all core vector types.

    • The setequal() sugar function has been corrected via suggestion by Qiang Kou following a bug report by Søren Højsgaard.

    • The macros major, minor, and makedev no longer leak in from the (Linux) system header sys/sysmacros.h.

    • The push_front() string function was corrected.

  • Changes in Rcpp Attributes:

    • Only look for plugins in the package's namespace (rather than entire search path).

    • Also scan header files for definitions of functions to be considered by Attributes.

    • Correct the regular expression for source files which are scanned.

  • Changes in Rcpp unit tests

    • Added a new binary test which will load a pre-built package to ensure that the Application Binary Interface (ABI) did not change; this test will (mostly or) only run at Travis where we have reasonable control over the platform running the test and can provide a binary.

    • New unit tests for sugar functions mean, setequal and var were added as noted above.

  • Changes in Rcpp Examples:

    • For the (old) examples ConvolveBenchmarks and OpenMP, the respective Makefile was renamed to GNUmakefile to please R CMD check as well as the CRAN Maintainers.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 03:44PM

RcppGSL 0.2.4

A new version of RcppGSL is now on CRAN. This package provides an interface from R to the GNU GSL using our Rcpp package.

This follows on the heels of the recent RcppGSL 0.2.3 release and extends the excellent point made by Qiang Kou in a contributed section of the vignette: we now not only allow turning the GSL error handler off (so that it does not abort() on error) but do so on package initialisation.

No other user-facing changes were made.

The NEWS file entries follows below:

Changes in version 0.2.4 (2015-01-24)

  • Two new helper functions to turn the default GSL error handler off (and to restore it) were added. The default handler is now turned off when the package is attached so that GSL will no longer abort an R session on error. Users will have to check the error code.

  • The RcppGSL-intro.Rnw vignette was expanded with a short section on the GSL error handler (thanks to Qiang Kou).

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 03:20PM

RcppAnnoy 0.0.5

A new version of RcppAnnoy is now on CRAN. RcppAnnoy wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify. RcppAnnoy uses Rcpp Modules to offer the exact same functionality as the Python module wrapped around Annoy.

This version contains a trivial one-character change requested by CRAN to cleanse the Makevars file of possible GNU Make-isms. Oh well. This release also overcomes an undefined behaviour sanitizer bug noticed by CRAN that took somewhat more effort to deal with. As mentioned recently in another blog post, it took some work to create a proper Docker container with the required compiler and subsequent R setup, but we have one now, and the aforementioned blog post has details on how we replicated the CRAN finding of an UBSAN issue. It also took Erik some extra efforts to set something up for his C++/Python side, but eventually an EC2 instance with Ubuntu 14.10 did the task as my Docker sales skills are seemingly not convincing enough. In any event, he very quickly added the right fix, and I synced RcppAnnoy with his Annoy code.

Courtesy of CRANberries, there is also a diffstat report for this release. More detailed information is on the RcppAnnoy page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 January, 2015 02:22PM

January 23, 2015

Chris Lamb

Slack integration for Django

I recently started using the Slack group chat tool in a few teams. Wishing to add some vanity notifications such as sales and user growth milestones from some Django-based projects, I put together an easy-to-use integration between the two called django-slack.

Whilst you can use any generic Python-based method of sending messages to Slack, using a Django-specific integration has some advantages:

  • It can use the Django templating system, rather than constructing messages "by hand" in views.py and models.py which violates abstraction layers and often requires unwieldy and ugly string manipulation routines that would be trivial inside a regular template.
  • It can easily be enabled and disabled in certain environments, preventing DRY violations by centralising logic to avoid sending messages in development, staging environments, etc.
  • It can use other Django idioms such as a pluggable backend system for greater control over exactly how messages are transmitted to the Slack API (eg. sent asynchronously using your queuing system, avoiding slowing down clients).

Here is an example of how to send a message from a Django view:

from django_slack import slack_message

@login_required
def view(request, item_id):
    item = get_object_or_404(Item, pk=item_id)

    slack_message('items/viewed.slack', {
        'item': item,
        'user': request.user,
    })

    return render(request, 'items/view.html', {
        'item': item,
    })

Where items/viewed.slack (in your templates directory) might contain:

{% extends django_slack %}

{% block text %}
{{ user.get_full_name }} just viewed {{ item.title }} ({{ item.content|urlize }}).
{% endblock %}

.slack files are regular Django templates: text is automatically escaped as appropriate, and you can use the regular template filters and tags such as urlize, loops, etc.

By default, django-slack posts to the #general channel, but it can be overridden on a per-message basis by specifying a channel block:

{% block channel %}
#mychannel
{% endblock %}

You can also set the icon, URL and emoji in a similar fashion. You can set global defaults for all of these attributes to avoid DRY violations within .slack templates as well.

For more information please see the project homepage or read the documentation. Patches and other contributions are welcome via the django-slack GitHub project.

23 January, 2015 10:46PM

Richard Hartmann

Release Critical Bug report for Week 04

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1117 (Including 191 bugs affecting key packages)
    • Affecting Jessie: 187 (key packages: 116) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 132 (key packages: 89) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 24 bugs are tagged 'patch'. (key packages: 15) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 4 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 104 bugs are neither tagged patch, nor marked done. (key packages: 71) Help make a first step towards resolution!
      • Affecting Jessie only: 55 (key packages: 27) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 25 bugs are in packages that are unblocked by the release team. (key packages: 8)
        • 30 bugs are in packages that are not unblocked. (key packages: 19)

How do we compare to the Squeeze and Wheezy release cycles?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99) 147 (112+35)
1 93 (60+33) 287 (171+116) 140 (104+36)
2 82 (46+36) 271 (162+109) 157 (124+33)
3 25 (15+10) 249 (165+84) 172 (128+44)
4 14 (8+6) 244 (176+68) 187 (132+55)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

23 January, 2015 05:59PM by Richard 'RichiH' Hartmann

Enrico Zini

mozilla-facepalm

Mozilla marketplace facepalm

This made me sad.

My view, which didn't seem to be considered in that discussion, is that people concerned about software freedom and security are likely to stay the hell away from such an app market and its feedback forms.

Also, that thread made me so sad about the state of that developer community that I seriously do not feel like investing energy into going through the hoops of getting an account in their bugtracker to point this out.

Sigh.

23 January, 2015 02:13PM

Jaldhar Vyas

Mini-Debconf Mumbai 2015

Last weekend I went to Mumbai to attend the Mini-Debconf held at IIT-Bombay. These are my impressions of the trip.

Arrival and Impressions of Mumbai

Getting there was a quite an adventure in itself. Unlike during my ill-fated attempt to visit a Debian event in Kerala last year when a bureaucratic snafu left me unable to get a visa, the organizers started the process much earlier at their end this time and with proper permissions. Yet in India, the wheels only turn as fast as they want to turn so despite their efforts, it was only literally at the last minute that I actually managed to secure my visa. I should note however that Indian government has done a lot to improve the process compared to the hell I remember from, say, a decade ago. It's fairly straightforward for tourist visas now and I trust they will get around to doing the same for conference visas in the fullness of time. I didn't want to commit to buying a plane ticket until I had the visa so I became concerned that the only flights left would be either really expensive or on the type of airline that flies you over Syria or under the Indian Ocean. I lucked out and got a good price on a Swiss Air flight, not non-stop but you can't have everything.

So Thursday afternoon I set off for JFK. With only one small suitcase getting there by subway was no problem and I arrived and checked in with plenty of time. Even TSA passed me through with only a minimal amount of indignity. The first leg of my journey took me to Zurich in about eight hours. We were only in Zurich for an hour and then (by now Friday) it was another 9 hours to Mumbai. Friday was Safala Ekadashi but owing to the necessity of staying hydrated on a long flight I drank a lot of water and ate some fruit which I don't normally do on a fasting day. It was tolerable but not too pleasant; I definitely want to try and make travel plans to avoid such situations in the future.

Friday evening local time I got to Mumbai. Chhattrapati Shivaji airport has improved a lot since I saw it last and now has all the amenities an international traveller needs, including unrestricted free wifi (Zurich airport, are you taking notes?). But here my first ominous piece of bad luck began. No sign of my suitcase. Happily some asking around revealed that it had somehow gotten on some earlier Swiss Air flight instead of the one I was on and was actually waiting for me. I got outside and Debian Developer Praveen Arimbrathodiyil was waiting to pick me up.

Normally I don't like staying in Mumbai very much even though I have relatives there, but that's because we usually went during July-August—the monsoon season—when Mumbai reverts back to the swampy archipelago it was originally built on. This time the weather was nice, cold by local standards, but lovely and spring-like to someone from snowy New Jersey. There have been a lot of improvements to the road infrastructure and people are actually obeying the traffic laws. (Within reason of course. Whether or not a family of six can arrange themselves on one Bajaj scooter is no business of the cops.)

The Hotel Tuliip (yes, two i's. Manager didn't know why.) Residency where I was to stay, while not quite a five star establishment, was adequate for my needs, with a bed, hot water shower, and air conditioning. And a TV which, to the bellhop's great confusion, I did not want turned on. (He asked about five times.) There was no Internet access per se, but the manager offered to hook up a wireless router to a cable, which on closer inspection turned out to have been severed at the base. He assured me it would be fixed tomorrow, so I didn't complain and decided to do something more productive than checking my email, like sleeping.

The next day I woke up in total darkness. Apparently there had been some kind of power problem during the night which tripped a fuse or something. A call to the front desk got them to fix that and then the second piece of bad luck happened. I plugged my Thinkpad in and woke it up from hibernation and a minute later there was a loud pop from the power adapter. Note I have a travel international plug adapter with surge protector so nothing bad ought to have happened but the laptop would on turning on display the message "critical low battery error" and immediately power off. I was unable to google what that meant without Internet access but I decided not to panic and continue getting ready. I would have plenty of opportunity to troubleshoot at the conference venue. Or so I thought...

I took an autorickshaw to IIT. There, too, there have been positive improvements. Being quite obviously a foreigner, I was fully prepared to be taken along the "scenic route." But now there are fare zones and the rickshaws all have (tamperproof!) digital fare meters, so I was deposited at the main gate without fuss. After reading a board with a scary list of dos and don'ts I presented myself at security, only to be inexplicably waved through without a second glance. Later I found out they've abandoned all the security theatre but not got around to updating the signs yet. Mumbai is one of the biggest, most densely populated cities in the world, but the IIT campus is an oasis of tranquility on the shores of Lake Powai. It's a lot bigger than it looked on the map, so I had to wander around a bit before I reached the conference venue, but I did make it in time for the official registration.

Registration

I was happy to meet several old friends (such as Kartik Mistry and Kumar Appiah, who along with Praveen and myself were the other DDs there), people who I've corresponded with but never met, and many new people. I'm told 200+ people registered altogether. Most seemed to be students from IIT and elsewhere in Mumbai, but there were also some Debian enthusiasts from further afield and, most hearteningly, some "civilians" who wanted to know what this was all about.

With the help of a borrowed Thinkpad adapter I got my laptop running again. (Thankfully, despite the error message, the battery itself was unharmed.) However, my streak of bad luck was not yet over. It was that very weekend that IIT had a freak campus-wide network outage, something that had never happened before. And as the presentation for the talk I was to give had apparently been open when I hibernated my laptop the night before, the sudden forced shutdown had trashed the file. (ls showed it as 0 length. An fsck didn't help.) I possibly had a backup on my server but with no Internet access I had no way to retrieve it. I still remained cool. The talk was scheduled for the second day so I could recover it at the hotel.

Keynotes

Professor Kannan Maudgalya of the FOSSEE (Free and Open Source Software for Education) Project which is part of the central government Ministry for Human Resource Development spoke about various activities of his project. Of particular interest to us are:

  • A scheme to get labs and college engineering/computer science departments off proprietary software by helping them identify relevant free software (writing it if necessary.) and helping them transition to it. Similarly getting curricula away from textbooks that use proprietary software by rewriting exercises to use free equivalents.
  • A series of videos for self-instruction kind of like Khan Academy but geared to the challenges of being used in places where there might not be a net connection or even a trained teacher.
  • The Vidyut tablet. A very low cost (~5000 Rupees) ARM-based netbook that runs Linux or Android software. You may have heard about earlier plans for a cheap tablet like this. Vidyut is the next generation correcting some flaws in previous attempts. Not only the software but the hardware is free too. It is currently running a stripped down version of Ubuntu but there was a request to port it to Debian and I'm happy to report several Debian users have accepted the challenge.
FOSSEE is well funded, backed by the government and has enthusiastic staff so we should be seeing a lot more from them in the future.

Veteran Free Software activist Venky Hariharan spoke about his experiences in lobbying the government on tech issues. He noted that there has been a sea change in attitudes towards Linux and Open Source in the bureaucracy of late. Several states have been aggressively mandating its use, as have several national ministries and agencies. We the community can provide a valuable service by helping them in the transition. They also need to be educated on how to work with the community (contributing changes back, not working behind closed doors, etc.)

Debian History and Debian Cycle

Shirish Agarwal spoke about the Debian philosophy and foundational documents such as the social contract and DFSG and how the release cycle works. Nothing new to an experienced user but informative to the newcomers in the audience and sparked some questions and discussion.

Keysigning

One of my main missions in attending was to help get as many isolated people as possible into the web of trust. Unfortunately the keysigning was not adequately publicized and few people were ready. I would have led them through the process of creating a new key there and then but with the lack of connectivity that idea had to be abandoned. I did manage to sign about 8-10 keys during other times.

Future Directions for Debian-IN BOF

I led this one. There was lots of spirited discussion, and I found feedback from new users in particular to be very helpful. Some takeaways are:

  • Some people said it is hard to find concise, easily digestible information about what Debian can do. (I.e. Can I surf the web? Can I play a certain game? etc.) Debian-IN's web presence in particular needs a lot of improvement. We should also consider other channels such as a facebook page. A volunteer stepped up to look into these issues.
  • Along these lines it was felt that we cannot just wait for people to come to us; we should do more outreach. I pointed out that one group that we need to reach out more to is the Debian Project at large. We need to do more publicity in debian-project, DWN, Planet etc. to let everyone know what's going on in India. I also felt that we have a strong base amongst CS/engineering students but should do more to attract other demographics.
  • Debian events have suffered from organizational problems. Partly this is because the people involved are not professional event planners. They are learning how to do it which is an ongoing process and execution is improving with each iteration so no worries there but problems also arise because Debian-IN is dependent on other entities for many things and those entities do not always have, shall we say, the same sense of urgency. Therefore we need legal standing of our own for accepting donations, inviting foreign guests etc. This doesn't necessarily have to be a separate organization. Affiliating with an existing group is an option providing they share our ideology. Swathanthra Malayalam Computing was one suggestion.
  • There is still not much Debian presence in the North and East of India. (Which includes large cities like Delhi and Kolkata.) Unfortunately until we can find volunteers in those areas to take the lead on organizing something there is not a lot we can do to rectify the situation.
  • We must have Debian-IN t-shirts.

Lil' Debi

Kumar Sukhani was a Debian GSoC student and his project which he demonstrated was to be able to install Debian on an Android phone. Why would you want to do this? Apart from the evergreen "Because I can", you can run server software such as sshd on your phone or even use it as an ARM development board. Unfortunately my phone uses Blackberry 10 OS which can run android apps (emulated under QNX) but wouldn't be able to use this. When I get a real Android phone I will try it out.

Debian on ARM

Siji Sunny gave this talk, which was geared more towards hardware types, which I am not, but one thing I learned was the difference between all the different ARM subarchitectures. I knew Siji first from a previous incarnation when he worked at CDAC with the late and much lamented Prof. R.K. Joshi. We had a long conversation about those days. Prof. Joshi/CDAC had developed an Indic rendering system called Indix which alas became the Betamax to Pango's VHS, but he was also very involved in other Indic computing issues such as working with the Unicode Consortium and the preservation of Sanskrit manuscripts, which is also an interest of mine. One good thing that came out of Indix was some rather nice fonts. I had thought they were still buried in the dungeons of CDAC but apparently they were freed at one point. That's one more thing for me to look into.

Evening/Next morning

My cousin met me and we had a leisurely dinner together. It was quite late by the time I got back to the hotel. FOSSEE had kindly lent me one of their tablets (which incidentally are powerful enough to run LibreOffice comfortably), so I thought I might be able to quickly redo my presentation before bedtime. Well, wouldn't you know it, the wifi was not fixed. I should have guessed as much, but all the progress I'd seen had made me giddily optimistic. There was the option of trying to find an Internet cafe in a commercial area a 15-20 minute walk away. If this had been Gujarat I would have tried it, but although I can more or less understand Hindi I can barely put together two sentences, and Marathi I don't know at all. So I gave up on that idea. I redid the slides from memory as best I could and went to sleep.

In the morning I checked out and ferried myself and my suitcase via rickshaw back to the IIT campus. This time I got the driver to take me all the way in to the conference venue. Prof. Maudgalya kindly offered to let me keep the tablet to develop stuff on. I respectfully had to decline because, although I love to collect bits of tech, the fact is it would have just gathered dust, and it ought to go to someone who can make a real contribution with it. I transferred my files to a USB key and borrowed a loaner laptop for my talk.

Debian Packaging Workshop

While waiting to do my talk I sat in on a workshop Praveen ran, taking participants through the whole process of creating a Debian package (a Ruby gem was the example). He's done this before, so it was a good presentation and well attended, but the lack of connectivity did put a damper on things.

Ask Me Anything

It turned out the schedule had to be shuffled a bit so my talk was moved later from the announced time. A few people had already showed up so I took some random questions about Debian from them instead.

GNOME Shell Accessibility With Orca

Krishnakant Mane is remarkable. Although he is blind, he is a developer and a major contributor to Open Source projects. He talked about the Accessibility features of GNOME and compared them (favorably I might add) with proprietary screen readers. Not a subject that's directly useful to me but I found it interesting nonetheless.

Rust: The memory safe language

Manish Goregaokar talked about one of the new fad programming languages that have gotten a lot of buzz lately. This one is backed by Mozilla and it's interesting enough but I'll stick with C++ and Perl until one of the new ones "wins."

Building a Mail Server With Debian

Finally I got to give my talk and, yup, the video out on my borrowed laptop was incompatible with the projector. A slight delay to transfer everything to another laptop and I was able to begin. I talked about setting up BIND, postfix, and of course dovecot, along with spamassassin, clamav etc. It turned out I had more than enough material and I went at least 30 minutes over time, and even then I had to rush at the end. People said they liked it so I'm happy.

The End

I gave the concluding remarks. Various people were thanked (including myself), mementos were given and pictures were taken. Despite a few mishaps I enjoyed myself and I am glad I attended. The level of enthusiasm was very high and lessons were learned, so the next Debian-IN event should be even better.

My departing flight wasn't due to leave until 1:20AM so I killed a few hours with my family before the flight. Once again I was stopping in Zurich, this time for most of a day. The last of my blunders was not taking my coat out of my suitcase; the temperature outside was 29F, so I had to spend that whole time enjoying the (not so) many charms of Zurich airport. At least the second flight took me to Newark instead of JFK, so I was able to get home a little earlier on Monday evening, exhausted but happy I made the trip.

23 January, 2015 06:47AM

Michael Prokop

check-mk: monitor switches for GBit links

For one of our customers we are using the Open Monitoring Distribution which includes Check_MK as monitoring system. We’re monitoring the switches (Cisco) via SNMP. The switches as well as all the servers support GBit connections, though there are some systems in the wild which are still operating at 100MBit (or even worse on 10MBit). Recently there have been some performance issues related to network access. To make sure it’s not the fault of a server or a service we decided to monitor the switch ports for their network speed. By default we assume all ports to be running at GBit speed. This can be configured either manually via:

cat etc/check_mk/conf.d/wato/rules.mk
[...]
checkgroup_parameters.setdefault('if', [])

checkgroup_parameters['if'] = [
  ( {'speed': 1000000000}, [], ['switch1', 'switch2', 'switch3', 'switch4'], ALL_SERVICES, {'comment': u'GBit links should be used as default on all switches'} ),
] + checkgroup_parameters['if']

or by visiting Check_MK’s admin web-interface at ‘WATO Configuration’ -> ‘Host & Service Parameters’ -> ‘Parameters for Inventorized Checks’ -> ‘Networking’ -> ‘Network interfaces and switch ports’ and creating a rule for the ‘Explicit hosts’ switch1, switch2, etc. and setting ‘Operating speed’ to ‘1 GBit/s’ there.

So far so straightforward, and this works fine. Thanks to this setup we could identify several systems which were using 100MBit and 10MBit links. That is definitely something to investigate on the affected systems and their auto-negotiation configuration. But to avoid flooding the monitoring system and its notifications, we want to explicitly ignore those systems in the monitoring setup until those issues have been resolved.

First step: identify the checks and their format by either invoking `cmk -D switch2` or looking at var/check_mk/autochecks/switch2.mk:

OMD[synpros]:~$ cat var/check_mk/autochecks/switch2.mk
[
  ("switch2", "cisco_cpu", None, cisco_cpu_default_levels),
  ("switch2", "cisco_fan", 'Switch#1, Fan#1', None),
  ("switch2", "cisco_mem", 'Driver text', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'I/O', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'Processor', cisco_mem_default_levels),
  ("switch2", "cisco_temp_perf", 'SW#1, Sensor#1, GREEN', None),
  ("switch2", "if64", '10101', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10102', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10103', {'state': ['1'], 'speed': 1000000000}),
  [...]
  ("switch2", "snmp_info", None, None),
  ("switch2", "snmp_uptime", None, {}),
]
OMD[synpros]:~$

Second step: translate this into the according format for usage in etc/check_mk/main.mk:

checks = [
  ( 'switch2', 'if64', '10105', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af,  10MBit
  ( 'switch2', 'if64', '10107', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:23:de:ad:be:af, 100MBit
  ( 'switch2', 'if64', '10139', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af, 100MBit
  [...]
]

Using this configuration we ignore the operating speed on ports 10105, 10107 and 10139 of switch2 using the if64 check. We kept the state setting untouched where sensible (‘1′ means that the expected operational status of the interface is to be ‘up’). The errors setting specifies the error rates in percent for warning (0.01%) and critical (0.1%). For further details refer to the online documentation or invoke ‘cmk -M if64′.

Final step: after modifying the checks’ configuration make sure to run `cmk -IIu switch2 ; cmk -R` to renew the inventory for switch2 and apply the changes. Do not forget to verify the running configuration by invoking ‘cmk -D switch2′:

Screenshot of 'cmk -D switch2' execution

23 January, 2015 12:04AM by mika

January 22, 2015

Erich Schubert

Year 2014 in Review as Seen by a Trend Detection System

We ran our trend detection tool Signi-Trend (published at KDD 2014) on news articles collected for the year 2014. We removed the category of financial news, which is overrepresented in the data set. Below are the (described) results, from the top 50 trends (I will push the raw result to appspot if possible due to file limits).
I have highlighted the top 10 trends in bold, but otherwise ordered them chronologically.

January
2014-01-29: Obama's State of the Union address
February
2014-02-05..23: Sochi Olympics (11x, including the four below)
2014-02-07: Gay rights protesters arrested at Sochi Olympics
2014-02-08: Sochi Olympics begins
2014-02-16: Injuries in Sochi Extreme Park
2014-02-17: Men's Snowboard cross finals called off because of fog
2014-02-19: Violence in Ukraine and Kiev
2014-02-22: Yanukovich leaves Kiev
2014-02-23: Sochi Olympics close
2014-02-28: Crimea crisis begins
March
2014-03-01..06: Crimea crisis escalates further (3x)
2014-03-08: Malaysia Airlines flight MH-370 missing in South China Sea (2x)
2014-03-18: Crimea now considered part of Russia by Putin
2014-03-28: U.N. condemns Crimea's secession
April
2014-04-17..18: Russia-Ukraine crisis continues (3x)
2014-04-20: South Korea ferry accident
May
2014-05-18: Cannes film festival
2014-05-25: EU elections
June
2014-06-13: Islamic state Camp Speicher massacre in Iraq
2014-06-16: U.S. talks to Iran about Iraq
July
2014-07-17..19: Malaysian Airlines MH-17 shot down over Ukraine (3x, 2x top 10)
2014-07-20: Israel shelling Gaza kills 40+ in a day
August
2014-08-07: Russia bans EU food imports
2014-08-20: Obama orders U.S. air strikes in Iraq against IS
2014-08-30: EU increases sanctions against Russia
September
2014-09-04: NATO summit at Celtic Manor
2014-09-23: Obama orders more U.S. air strikes against IS
October
2014-10-16: Ebola case in Dallas
2014-10-24: Ebola patient in New York is stable
November
2014-11-02: Elections: Romania, and U.S. rampup
2014-11-05: U.S. Senate elections
2014-11-25: Ferguson prosecution
December
2014-12-08: IOC Olympics sport additions
2014-12-11: CIA prisoner center in Thailand
2014-12-15: Sydney cafe hostage siege
2014-12-17: U.S. and Cuba relations improve unexpectedly
2014-12-19: North Korea blamed for Sony cyber attack
2014-12-28: AirAsia flight QZ-8501 missing

As you can guess, we are really happy with this result - just like the result for 2013, it mentions (almost) all the key events.
There is one "false positive" there: 2014-11-02 has a lot of articles talking about "president" and "elections", but not all refer to the same topic (we did not do topic modeling yet).
There are also some events missing that we would have liked to appear. For example the Chile/Peru earthquake. But I looked at the data: there were not many reports on this in the data source. Also, there is little about the Islamic State terror - but it has been going on throughout the year. Also Facebook bought Whatsapp on February 19 - which was a very visible trend on Twitter; but likely this was filtered out via the financials category in this data set.

22 January, 2015 07:00PM

MJ Ray

Outsourcing email to Google means SPF allows phishing?

I expect this is obvious to many people but bahumbug To Phish, or Not to Phish? just woke me up to the fact that if Google hosts your company email then its Sender Policy Framework might make other Google-sent emails look legitimate for your domain. When combined with the unsupportive support of the big free webmail hosts, is this another black mark against SPF?
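
For illustration, a domain whose mail is hosted by Google typically publishes an SPF record along these lines, which authorises Google's outbound servers for the whole domain:

example.com.  IN TXT  "v=spf1 include:_spf.google.com ~all"

So mail that leaves Google's own infrastructure claiming to be from that domain can pass an SPF check even when the domain's owners never sent it, which is the phishing concern above.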

22 January, 2015 03:57AM by mjr

January 21, 2015

Tomasz Buchert

Expired keys in Debian keyring

A new version of Stellarium was recently released (0.13.2), so I wanted to upload it to Debian unstable as I usually do. And so I did, but it was rejected without me even knowing, since I got no e-mail response from ftp-masters.

It turns out that my GPG key in the Debian keyring expired recently and so my upload was rightfully rejected. Not a big deal, actually, since you can easily move the expiration date (even after the key has expired!). I have already done it and the updated key has propagated, but be aware that the Debian keyring does not synchronize with other keyservers! To update your key in Debian (if you are a Debian Developer or Maintainer) you must send your updated key to keyring.debian.org like this (replace my key ID with your own):

$ gpg --keyserver keyring.debian.org --send-keys 24B17D29

The Debian keyring is distributed as a standard DEB package, and apparently it may take up to a month for your updated key to land in Debian. It seems that I may be unable to upload packages for some time.

But the whole story got me thinking: am I the only one who forgot to update his key in the Debian keyring? To verify this I wrote the following snippet (works in Python 2 and 3!) which shows expired keys in the Debian keyrings (well, two of them). As a bonus, it also shows keys that have non-UTF8 characters in their UIDs – see #738483 for more information.

#
# be sure to do "apt-get install python-gnupg"
#

import gnupg
import datetime

def check_keys(keyring, tab = ""):
    gpg = gnupg.GPG(keyring = keyring)
    gpg.decode_errors = 'replace' # see: https://bugs.debian.org/738483
    keys = gpg.list_keys()
    now = datetime.datetime.now()
    for key in keys:
        uids = key['uids']
        uid = uids[0]
        if key['expires'] != '':
            expire = datetime.datetime.fromtimestamp(int(key['expires']))
            diff = expire - now
            if diff.days < 0:
                print(u'{}EXPIRED: Key of {} expired {} days ago.'.format(tab, uid, -diff.days))
        mangled_uids = [ u for u in uids if u'\ufffd' in u ]
        if len(mangled_uids) > 0:
            print(u'{}MANGLED: Key of {} has some mangled uids: {}'.format(tab, uid, mangled_uids))

keyrings = [
    "/usr/share/keyrings/debian-keyring.gpg",
    "/usr/share/keyrings/debian-maintainers.gpg"
]

for keyring in keyrings:
    print(u"CHECKING {}".format(keyring))
    check_keys(keyring, tab = "    ")

I’m not going to show the output of this code, because it contains names and e-mail addresses which I really shouldn’t post. But you can run it yourself. You will see that there is a small group of people with expired keys (including me!). Interestingly, some keys expired a long time ago: there is one that expired more than 7 years ago!

The outcome of the story is: yes, you should have an expiration date on your key for safety reasons, but be careful - it can surprise you at the worst moment.

21 January, 2015 09:00PM

Chris Lamb

Sprezzatura

Wolf Hall on Twitter et al:

He says, "Majesty, we were talking of Castiglione's book. You have found time to read it?"

"Indeed. He extrolls sprezzatura. The art of doing everything gracefully and well, without the appearance of effort. A quality princes should cultivate."

"Yes. But besides sprezzatura one must exhibit at all times a dignified public restraint..."

21 January, 2015 10:31AM

Enrico Zini

miniscreen

Playing with python, terminfo and command output

I am experimenting with showing progress on the terminal for a subcommand that is being run, showing what is happening without scrolling away the output of the main program, and I came up with this little toy. It shows the last X lines of a subcommand's output, then gets rid of everything after the command has ended.

Usability-wise, it feels like a tease to me: it looks like I'm being shown all sorts of information, which is then taken away from me before I've managed to make sense of it. However, I find it cute enough to share:

#!/usr/bin/env python3
#coding: utf-8
# Copyright 2015 Enrico Zini <enrico@enricozini.org>.  Licensed under the terms
# of the GNU General Public License, version 2 or any later version.

import argparse
import fcntl
import select
import curses
import contextlib
import subprocess
import os
import sys
import collections
import shlex
import shutil
import logging

def stream_output(proc):
    """
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    """
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                fds.pop(idx)
                if len(bufs[idx]) != 0:
                    yield types[idx], bufs.pop(idx)
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

@contextlib.contextmanager
def miniscreen(has_fancyterm, name, maxlines=3, silent=False):
    """
    Show the output of a process scrolling in a portion of the screen.

    has_fancyterm: true if the terminal supports fancy features; if false, just
    write lines to standard output

    name: name of the process being run, to use as a header

    maxlines: maximum height of the miniscreen

    silent: do nothing whatsoever, used to disable this without needing to
            change the code structure

    Usage:
        with miniscreen(True, "my process", 5) as print_line:
            for i in range(10):
                print_line(("stdout", "stderr")[i % 2], "Line #{}".format(i))
    """
    if not silent and has_fancyterm:
        # Discover all the terminal control sequences that we need
        output_normal = str(curses.tigetstr("sgr0"), "ascii")
        output_up = str(curses.tigetstr("cuu1"), "ascii")
        output_clreol = str(curses.tigetstr("el"), "ascii")
        cols, lines = shutil.get_terminal_size()
        output_width = cols

        fg_color = (curses.tigetstr("setaf") or
                    curses.tigetstr("setf") or "")
        sys.stdout.write(str(curses.tparm(fg_color, 6), "ascii"))

        output_lines = collections.deque(maxlen=maxlines)

        def print_lines():
            """
            Print the lines in our buffer, then move back to the beginning
            """
            sys.stdout.write("{} progress:".format(name))
            sys.stdout.write(output_clreol)
            for msg in output_lines:
                sys.stdout.write("\n")
                sys.stdout.write(msg)
                sys.stdout.write(output_clreol)
            sys.stdout.write(output_up * len(output_lines))
            sys.stdout.write("\r")

        try:
            print_lines()

            def _progress_line(type, line):
                """
                Print a new line to the miniscreen
                """
                # Add the new line to our output buffer
                msg = "{} {}".format("." if type == "stdout" else "!", line)
                if len(msg) > output_width - 4:
                    msg = msg[:output_width - 4] + "..."
                output_lines.append(msg)
                # Update the miniscreen
                print_lines()

            yield _progress_line

            # Clear the miniscreen by filling our ring buffer with empty lines
            # then printing them out
            for i in range(maxlines):
                output_lines.append("")
            print_lines()
        finally:
            sys.stdout.write(output_normal)
    elif not silent:
        def _progress_line(type, line):
            print("{}: {}".format(type, line))
        yield _progress_line
    else:
        def _progress_line(type, line):
            pass
        yield _progress_line

def run_command_fancy(name, cmd, env=None, logfd=None, fancy=True, debug=False):
    quoted_cmd = " ".join(shlex.quote(x) for x in cmd)
    log.info("%s running command %s", name, quoted_cmd)
    if logfd: print("runcmd:", quoted_cmd, file=logfd)

    # Run the script itself on an empty environment, so that what was
    # documented is exactly what was run
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)

    with miniscreen(fancy, name, silent=debug) as progress:
        stderr = []
        for type, val in stream_output(proc):
            if type == "stdout":
                val = val.decode("utf-8")
                if logfd: print("stdout:", val, file=logfd)
                log.debug("%s stdout: %s", name, val)
                progress(type, val)
            elif type == "stderr":
                val = val.decode("utf-8")
                if logfd: print("stderr:", val, file=logfd)
                stderr.append(val)
                log.debug("%s stderr: %s", name, val)
                progress(type, val)
            elif type == "result":
                if logfd: print("retval:", val, file=logfd)
                log.debug("%s retval: %d", name, val)
                retval = val

    if retval != 0:
        lastlines = min(len(stderr), 5)
        log.error("%s exited with code %s", name, retval)
        log.error("Last %d lines of standard error:", lastlines)
        for line in stderr[-lastlines:]:
            log.error("%s: %s", name, line)

    return retval


parser = argparse.ArgumentParser(description="run a command showing only a portion of its output")
parser.add_argument("--logfile", action="store", help="specify a file where the full execution log will be written")
parser.add_argument("--debug", action="store_true", help="debugging output on the terminal")
parser.add_argument("--verbose", action="store_true", help="verbose output on the terminal")
parser.add_argument("command", nargs="*", help="command to run")
args = parser.parse_args()

if args.debug:
    loglevel = logging.DEBUG
elif args.verbose:
    loglevel = logging.INFO
else:
    loglevel = logging.WARN
logging.basicConfig(level=loglevel, stream=sys.stderr)
log = logging.getLogger()

fancy = False
if not args.debug and sys.stdout.isatty():
    curses.setupterm()
    if curses.tigetnum("colors") > 0:
        fancy = True

if args.logfile:
    # write the full execution log to the file named on the command line
    logfd = open(args.logfile, "wt")
else:
    logfd = None

retval = run_command_fancy("miniscreen example", args.command, logfd=logfd, fancy=fancy, debug=args.debug)

sys.exit(retval)

21 January, 2015 10:13AM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Moving to Jekyll

I’ve been meaning to move away from Movable Type for a while; they no longer provide the “Open Source” variant, I’ve had some issues with the commenting side of things (more the fault of spammers than Movable Type itself) and there are a few minor niggles that I wanted to resolve. Nothing has been particularly pressing me to move and I haven’t been blogging as much so while I’ve been keeping an eye open for a replacement I haven’t exerted a lot of energy into the process. I have a little bit of time at present so I asked around on IRC for suggestions. One was ikiwiki, which I use as part of helping maintain the SPI website (and think is fantastic for that), the other was Jekyll. Both are available as part of Debian Jessie.

Jekyll looked a bit fancier out of the box (I’m no web designer so pre-canned themes help me a lot), so I decided to spend some time investigating it a bit more. I’d found a Movable Type to ikiwiki converter which provided a starting point for exporting from the SQLite3 DB I was using for MT. Most of my posts are in markdown, the rest (mostly from my Blosxom days) are plain HTML, so there wasn’t any need to do any conversion on the actual content. A minor amount of poking convinced Jekyll to use the same URL format (permalink: /:year/:month/:title.html in the _config.yml did what I wanted) and I had to do a few bits of fix up for some images that had been uploaded into MT, but overall fairly simple stuff.

Next I had to think about comments. My initial thought was to just ignore them for the moment; they weren’t really working on the MT install that well so it’s not a huge loss. I then decided I should at least see what the options were. Google+ has the ability to embed in your site, so I had a play with that. It worked well enough but I didn’t really want to force commenters into the Google ecosystem. Next up was Disqus, which I’ve seen used in various places. It seems to allow logins via various 3rd parties, can cope with threading and deals with the despamming. It was easy enough to integrate to play with, and while I was doing so I discovered that it could cope with importing comments. So I tweaked my conversion script to generate a WXR based file of the comments. This then imported easily into Disqus (and also I double checked that the export system worked).

I’m sure the use of a third party to handle comments will put some people off, but given the ability to export I’m confident if I really feel like dealing with despamming comments again at some point I can switch to something locally hosted. I do wish it didn’t require Javascript, but again it’s a trade off I’m willing to make at present.

Anyway. Thanks to Tollef for the pointer (and others who made various suggestions). Hopefully I haven’t broken (or produced a slew of “new” posts for) any of the feed readers pointed at my site (but you should update to use feed.xml rather than any of the others - I may remove them in the future once I see usage has died down).

(On the off chance it’s useful to someone else the conversion script I ended up with is available. There’s a built in Jekyll importer that may be a better move, but I liked ending up with a git repository containing a commit for each post.)

21 January, 2015 10:00AM

hackergotchi for Jo Shields

Jo Shields

mono-project.com Linux packages, January 2015 edition

The latest version of Mono has been released (actually, it happened a week ago, but it took me a while to get all sorts of exciting new features bug-checked and shipshape).

Stable packages

This release covers Mono 3.12, and MonoDevelop 5.7. These are built for all the same targets as last time, with a few caveats (MonoDevelop does not include F# or ASP.NET MVC 4 support). ARM packages will be added in a few weeks’ time, when I get the new ARM build farm working at Xamarin’s Boston office.

Ahead-of-time support

This probably seems silly since upstream Mono has included it for years, but Mono on Debian has never shipped with AOT’d mscorlib.dll or mcs.exe, for awkward package-management reasons. Mono 3.12 fixes this, and will AOT these assemblies – optimized for your computer – on installation. If you can suggest any other assemblies to add to the list, we now support a simple manifest structure so any assembly can be arbitrarily AOT’d on installation.

Goodbye Mozroots!

I am very pleased to announce that as of this release, Mono users on Linux no longer need to run “mozroots” to get SSL working. A new command, “cert-sync”, has been added to this release, which synchronizes the Mono SSL certificate store against your OS certificate store – and this tool has been integrated into the packaging system for all mono-project.com packages, so it is automatically used. Just make sure the ca-certificates-mono package is installed on Debian/Ubuntu (it’s always bundled on RPM-based) to take advantage! It should be installed on fresh installs by default. If you want to invoke the tool manually (e.g. you installed via make install, not packages) use

cert-sync /path/to/ca-bundle.crt

On Debian systems, that’s

cert-sync /etc/ssl/certs/ca-certificates.crt

and on Red Hat derivatives it’s

cert-sync /etc/pki/tls/certs/ca-bundle.crt

Your distribution might use a different path, if it’s not derived from one of those.

Windows installer back from the dead

Thanks to help from Alex Koeplinger, I’ve brought the Windows installer back from the dead. The last release on the website was for 3.2.3 (it’s actually not this version at all – it’s complicated…), so now the Windows installer has parity with the Linux and OSX versions. The Windows installer (should!) bundles everything the Mac version does – F#, PCL facades, IronWhatever, etc, along with Boehm and SGen builds of the Mono runtime done with Visual Studio 2013.

An EXPERIMENTAL OH MY GOD DON’T USE THIS IN PRODUCTION 64-bit installer is in the works, when I have the time to try and make a 64-build of Gtk#.

21 January, 2015 01:26AM by directhex

Dimitri John Ledkov

Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available

I'm happy to announce that Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available for consumption.

These are 1.10.3 & 0.155 respectively.

This means that everyone should start porting their reports, tools, and scriptage to python3.

ubuntu-dev-tools has the library portion ported to python3, as I did not dare to switch individual scripts to python3 without thorough interactive testing. Please help out porting those and/or file bug reports against the python3 port. Feel free to subscribe me to the bug reports on launchpad.

For the time being, I believe some things will not be easy to port to python3 because of the elephant in the room - bzrlib. For some things like lp-shell, it should be easy to move away from bzrlib, as non-vcs things are used there. For other things the current suggestion is probably to fork out to the bzr binary or a python2 process. I wonder if a minimal usable python3-bzrlib wrapper around python2 bzrlib is possible, to satisfy the needs of basic and common scripts.

On a side note, launchpadlib & lazr.restfulclient have out of the box proxy support enabled. This makes things like add-apt-repository work behind networks with such setup. I think a few people will be happy about that.

All of these goodies are available in Ubuntu 15.04 (Vivid Vervet) or Debian Experimental (and/or NEW queue).

21 January, 2015 12:06AM by Dimitri John Ledkov (noreply@blogger.com)

January 20, 2015

Jonathan Wiltshire

Never too late for bug-squashing

With over a hundred RC bugs still outstanding for Jessie, there’s never been a better time to host a bug-squashing party in your local area. Here’s how I do it.

  1. At home is fine, if you don’t mind guests. You don’t need to seek out a sponsor and borrow or hire office space. If there isn’t room for couch-surfers, the project can help towards travel and accommodation expenses. My address isn’t secret, but I still don’t announce it – it’s fine to share it only with the attendees once you know who they are.
  2. You need a good work area. There should be room for people to sit and work comfortably – a dining room table and chairs is ideal. It should be quiet and free from distractions. A local mirror is handy, but a good internet connection is essential.
  3. Hungry hackers eat lots of snacks. This past weekend saw five of us get through 15 litres of soft drinks, two loaves of bread, half a kilo of cheese, two litres of soup, 22 bags of crisps, 12 jam tarts, two pints of milk, two packs of chocolate cake bars, and a large bag of biscuits (and we went out for breakfast and supper). Make sure there is plenty available before your attendees arrive, along with a good supply of tea and coffee.
  4. Have a work plan. Pick a shortlist of RC bugs to suit attendees’ strengths, or work on a particular packaging group’s bugs, or have a theme, or something. Make sure there’s a common purpose and you don’t just end up being a bunch of people round a table.
  5. Be an exemplary host. As the host you’re allowed to squash fewer bugs and instead make sure your guests are comfortable, know where the bathroom is, aren’t going hungry, etc. It’s an acceptable trade-off. (The reverse is true: if you’re attending, be an exemplary guest – and don’t spend the party reading news sites.)

Now, go host a BSP of your own, and let’s release!


Never too late for bug-squashing is a post from: jwiltshire.org.uk | Flattr

20 January, 2015 10:20PM by Jon

Sven Hoexter

Heads up: possible changes in fonts-lyx

Today the super nice upstream developers of LyX reached out to me (and pelle@) as the former and still part time lyx package maintainers to inform us of an ongoing discussion in http://www.lyx.org/trac/ticket/9229. The current approach to fix this bug might result in a name change of all fonts shipped in fonts-lyx with the next LyX release.

Why is it relevant for people not using LyX?

For some historic reasons beyond my knowledge the LyX project ships a bunch of math symbol fonts converted to ttf files. From a separate source package they moved to be part of the lyx source package and are currently delivered via the fonts-lyx package.

Over time a bunch of other packages picked this font package up as a dependency. Among them are also rather popular packages like icedove, which results in a rather fancy popcon graph. The drawback, as usual, is that changes might have a visible impact in places where you do not expect them.

So if you've some clue about fonts, or depend on fonts-lyx in some way, you might want to follow that issue cited above and/or get in contact with the LyX developers.

If you've some spare time, feel also invited to contribute to the lyx packaging in Debian. It really deserves a lot more love than the little it seldom gets today from the brave Nick Andrik, Per and myself.

20 January, 2015 08:02PM

hackergotchi for Daniel Pocock

Daniel Pocock

Quantifying the performance of the Microserver

In my earlier blog about choosing a storage controller, I mentioned that the Microserver's on-board AMD SB820M SATA controller doesn't quite let the SSDs perform at their best.

Just how bad is it?

I did run some tests with the fio benchmarking utility.
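
The exact fio job file isn't reproduced here; an invocation along these lines would produce a comparable 4k random-write workload (the parameter values below are illustrative, not the ones actually used):

$ fio --name=rand-write --rw=randwrite --bs=4k --size=1g \
      --ioengine=libaio --iodepth=32 --direct=1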

Let's have a look at those random writes; they simulate the workload of synchronous NFS write operations:

rand-write: (groupid=3, jobs=1): err= 0: pid=1979
  write: io=1024.0MB, bw=22621KB/s, iops=5655 , runt= 46355msec

Now compare it to the HP Z800 on my desk, it has the Crucial CT512MX100SSD1 on a built-in LSI SAS 1068E controller:

rand-write: (groupid=3, jobs=1): err= 0: pid=21103
  write: io=1024.0MB, bw=81002KB/s, iops=20250 , runt= 12945msec

and then there is the Thinkpad with OCZ-NOCTI mSATA SSD:

rand-write: (groupid=3, jobs=1): err= 0: pid=30185
  write: io=1024.0MB, bw=106088KB/s, iops=26522 , runt=  9884msec

That's right, the HP workstation is four times faster than the Microserver, but the Thinkpad whips both of them.

I don't know how much I can expect of the PCI bus in the Microserver but I suspect that any storage controller will help me get some gain here.

20 January, 2015 07:53PM by Daniel.Pocock

Sven Hoexter

python-ipcalc bumped from 0.3 to 1.1.3

I've helped a friend to get started with Debian packaging and he has now adopted python-ipcalc. Since I've no prior experience with packaging of Python modules and there were five years of upstream development in between, I've uploaded to experimental to give it some exposure.

So if you still use the python-ipcalc package, which is part of all current Debian releases and the upcoming jessie release, please check out the package from experimental. I think the only reverse dependency within Debian is sshfp; that one of course also requires some testing.
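
If you want a quick smoke test against the version from experimental, something along these lines exercises the basic API (the details are from memory, so treat this as a sketch rather than a reference):

import ipcalc

# a small network: its size, a membership test and iteration over all addresses
net = ipcalc.Network('192.168.0.0/28')
print(net.size())
print('192.168.0.5' in net)
for ip in net:
    print(ip)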

20 January, 2015 07:16PM

Raphael Geissert

Edit Debian, with iceweasel

Soon after publishing the chromium/chrome extension that allows you to edit Debian online, Moez Bouhlel sent a pull request to the extension's git repository: all the changes needed to make a firefox extension!

After another session of browser extensions discovery, I merged the commits and generated the xpi. So now you can go download the Debian online editing firefox extension and hack the world, the Debian world.

Install it and start contributing to Debian from your browser. There's no excuse now.

20 January, 2015 07:00AM by Raphael Geissert (noreply@blogger.com)

January 19, 2015

hackergotchi for Daniel Pocock

Daniel Pocock

jSMPP project update, 2.1.1 and 2.2.1 releases

The jSMPP project on Github stopped processing pull requests over a year ago and appeared to be needing some help.

I've recently started hosting it under https://github.com/opentelecoms-org/jsmpp and tried to merge some of the backlog of pull requests myself.

There have been new releases:

  • 2.1.1 works in any project already using 2.1.0. It introduces bug fixes only.
  • 2.2.1 introduces some new features, API changes and bigger bug fixes

The new versions are easily accessible for Maven users through the central repository service.

Apache Camel has already updated to use 2.1.1.

Thanks to all those people who have contributed to this project throughout its history.

19 January, 2015 09:29PM by Daniel.Pocock

Storage controllers for small Linux NFS networks

While contemplating the disk capacity upgrade for my Microserver at home, I've also been thinking about adding a proper storage controller.

Currently I just use the built-in controller in the Microserver. It is an AMD SB820M SATA controller. It is a bottleneck for the SSD IOPS.

On the disks, I prefer to use software RAID (such as md or BtrFs) and not become dependent on the metadata format of any specific RAID controller. The RAID controllers don't offer the checksumming feature that is available in BtrFs and ZFS.

The use case is NFS for a small number of workstations. NFS synchronous writes block the client while the server ensures data really goes onto the disk. This creates a performance bottleneck. It is actually slower than if clients are writing directly to their local disks through the local OS caches.

SSDs on an NFS server offer some benefit because they can complete write operations more quickly and the NFS server can then tell the client the operation is complete. The more performant solution (albeit with a slight risk of data corruption) is to use a storage controller with a non-volatile (battery-backed or flash-protected) write cache.

Many RAID controllers have non-volatile write caches. Some online discussions of BtrFs and ZFS have suggested staying away from full RAID controllers though, amongst other things, to avoid the complexities of RAID controllers adding their metadata to the drives.

This brings me to the first challenge though: are there suitable storage controllers that have a non-volatile write cache but without having other RAID features?

Or a second possibility: out of the various RAID controllers that are available, do any provide first-class JBOD support?

Observations

I looked at specs and documentation for various RAID controllers and identified some of the following challenges:

Next steps

Are there other options to look at, for example, alternatives to NFS?

If I just add in a non-RAID HBA to enable faster IO to the SSDs will this be enough to make a noticeable difference on the small number of NFS clients I'm using?

Or is it inevitable that I will have to go with one of the solutions that involves putting a vendor's volume metadata onto JBOD volumes? If I do go that way, which of the vendors' metadata formats are most likely to be recognized by free software utilities in the future if I ever connect the disk to a generic non-RAID HBA?

Thanks to all those people who provided comments about choosing drives for this type of NAS usage.

Related reading

19 January, 2015 01:59PM by Daniel.Pocock

January 18, 2015

Jonathan Wiltshire

Alcester BSP, day three

We have had a rather more successful weekend than I feared, as you can see from our log on the wiki page. Steve reproduced and wrote patches for several installer/bootloader bugs, and Neil and I spent significant time in a maze of twisty zope packages (we have managed to provide more diagnostics on the bug, even if we couldn't resolve it). Ben and Adam have ploughed through a mixture of bugs and maintenance work.

I wrongly assumed we would only be able to touch a handful of bugs, since they are now mostly quite difficult, so it was rather pleasant to recap our progress this evening and see that it’s not all bad after all.


Alcester BSP, day three is a post from: jwiltshire.org.uk | Flattr

18 January, 2015 10:27PM by Jon

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2014/51-2015/03

I have to admit that I was a bit lazy when it comes to working on RC bugs in the last weeks. here's my not-so-stellar summary:

  • #729220 – pdl: "pdl: problems upgrading from wheezy due to triggers"
    investigate (unsuccessfully), later fixed by maintainer
  • #772868 – gxine: "gxine: Trigger cycle causes dpkg to fail processing"
    switch trigger from "interest" to "interest-noawait", upload to DELAYED/2
  • #774584 – rtpproxy: "rtpproxy: Deamon does not start as init script points to wrong executable path"
    adjust path in init script, upload to DELAYED/2
  • #774791 – src:xine-ui: "xine-ui: Creates dpkg trigger cycle via libxine2-ffmpeg, libxine2-misc-plugins or libxine2-x"
    add trigger patch from Michael Gilbert, upload to DELAYED/2
  • #774862 – ciderwebmail: "ciderwebmail: unhandled symlink to directory conversion: /usr/share/ciderwebmail/root/static/images/mimeicons"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion (pkg-perl)
  • #774867 – lirc-x: "lirc-x: unhandled symlink to directory conversion: /usr/share/doc/PACKAGE"
    use dpkg-maintscript-helper to fix symlink_to_dir conversion, upload to DELAYED/2
  • #775640 – src:libarchive-zip-perl: "libarchive-zip-perl: FTBFS in jessie: Tests failures"
    start to investigate (pkg-perl)

18 January, 2015 09:41PM

Mark Brown

Heating the Internet of Things

Internet of Things seems to be trendy these days, people like the shiny apps for controlling things and typically there are claims that the devices will perform better than their predecessors by offloading things to the cloud – but this makes some people worry that there are potential security issues and it’s not always clear that internet usage is actually delivering benefits over something local. One of the more widely deployed applications is smart thermostats for central heating which is something I’ve been playing with. I’m using Tado, there’s also at least Nest and Hive who do similar things, all relying on being connected to the internet for operation.

The main thing I’ve noticed has been that the temperature regulation in my flat is better: my previous thermostat allowed the temperature to vary by a couple of degrees around the target temperature in winter, which got noticeable, while with this one the temperature generally seems to vary by a fraction of a degree at most. That does use the internet connection to get the temperature outside, though I’m fairly sure that most of this is just a better algorithm (the thermostat monitors how quickly the flat heats up when heating and uses this to decide when to turn off, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down) and performance would still be substantially improved without it.

The other thing that these systems deliver which does benefit much more from the internet connection is that it’s easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it’s not needed – you can do it remotely, and you can turn the heating back on without being in the flat so that you don’t need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don’t need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention or visible difference on my part. This is particularly attractive for me given that I work from home – I can’t easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would be on a lot of the time. Tado and Nest will to varying extents try to do this automatically; I don’t know about Hive. The Tado one at least works very well, I can’t speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

18 January, 2015 09:23PM by Mark Brown

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Running UBSAN tests via clang with Rocker

Every now and then we get reports from CRAN about our packages failing a test there. A challenging one concerns UBSAN, or Undefined Behaviour Sanitizer. For background on UBSAN, see this RedHat blog post for gcc and this one from LLVM about clang.

I had written briefly about this before in a blog post introducing the sanitizers package for tests, as well as the corresponding package page for sanitizers, which clearly predates our follow-up Rocker.org repo / project described in this initial announcement and when we became the official R container for Docker.

Rocker had support for SAN testing, but UBSAN was not working yet. So following a recent CRAN report against our RcppAnnoy package, I was unable to replicate the error and asked for help on r-devel in this thread.

Martyn Plummer and Jan van der Laan kindly sent their configurations in the same thread and off-list; Jeff Horner did so too following an initial tweet offering help. None of these worked for me, but further trials eventually led me to the (already mentioned above) RedHat blog post with its mention of -fno-sanitize-recover to actually have an error abort a test. This, coupled with the settings used by Martyn, was what worked for me: clang-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover.

This is now part of the updated Dockerfile of the R-devel-SAN-Clang repo behind the r-devel-ubsan-clang container. It contains these settings, as well as a new support script check.r for littler, which enables testing right out of the box.

Here is a complete example:

docker                              # run Docker (any recent version, I use 1.2.0)
  run                               # launch a container 
    --rm                            # remove Docker temporary objects when done
    -ti                             # use a terminal and interactive mode 
    -v $(pwd):/mnt                  # mount the current directory as /mnt in the container
    rocker/r-devel-ubsan-clang      # using the rocker/r-devel-ubsan-clang container
  check.r                           # launch the check.r command from littler (in the container)
    --setwd /mnt                    # with a setwd() to the /mnt directory
    --install-deps                  # installing all package dependencies before the test
    RcppAnnoy_0.0.5.tar.gz          # and test this tarball

I know. It is a mouthful. But it really is merely the standard practice of running Docker to launch a single command. And while I frequently make this the /bin/bash command (hence the -ti options I always use) to work and explore interactively, here we do one better thanks to the (pretty useful so far) check.r script I wrote over the last two days.

check.r does about the same as R CMD check. If you look inside check you will see a call to a (non-exported) function from the (R base-internal) tools package. We call the same function here. But to make things more interesting we also first install the package we test to really ensure we have all build-dependencies from CRAN met. (And we plan to extend check.r to support additional apt-get calls in case other libraries etc are needed.) We use the dependencies=TRUE option to have R smartly install Suggests: as well, but only one level deep (see help(install.packages) for details). With that prerequisite out of the way, the test can proceed as if we had done R CMD check (and an additional R CMD INSTALL as well). The result for this (known-bad) package:

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.tar.gz 
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’

trying URL 'http://cran.rstudio.com/src/contrib/Rcpp_0.11.3.tar.gz'
Content type 'application/x-gzip' length 2169583 bytes (2.1 MB)
opened URL
==================================================
downloaded 2.1 MB

trying URL 'http://cran.rstudio.com/src/contrib/BH_1.55.0-3.tar.gz'
Content type 'application/x-gzip' length 7860141 bytes (7.5 MB)
opened URL
==================================================
downloaded 7.5 MB

trying URL 'http://cran.rstudio.com/src/contrib/RUnit_0.4.28.tar.gz'
Content type 'application/x-gzip' length 322486 bytes (314 KB)
opened URL
==================================================
downloaded 314 KB

trying URL 'http://cran.rstudio.com/src/contrib/RcppAnnoy_0.0.4.tar.gz'
Content type 'application/x-gzip' length 25777 bytes (25 KB)
opened URL
==================================================
downloaded 25 KB

* installing *source* package ‘Rcpp’ ...
** package ‘Rcpp’ successfully unpacked and MD5 sums checked
** libs
clang++-3.5 -fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr,function -fno-sanitize-recover -I/usr/local/lib/R/include -DNDEBUG -I../inst/include/ -I/usr/local/include    -fpic  -pipe -Wall -pedantic -
g  -c Date.cpp -o Date.o

[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’
 ERROR
Running the tests in ‘tests/runUnitTests.R’ failed.
Last 13 lines of output:
  +     if (getErrors(tests)$nFail > 0) {
  +         stop("TEST FAILED!")
  +     }
  +     if (getErrors(tests)$nErr > 0) {
  +         stop("TEST HAD ERRORS!")
  +     }
  +     if (getErrors(tests)$nTestFunc < 1) {
  +         stop("NO TEST FUNCTIONS RUN!")
  +     }
  + }
  
  
  Executing test function test01getNNsByVector  ... ../inst/include/annoylib.h:532:40: runtime error: index 3 out of bounds for type 'int const[2]'
* checking PDF version of manual ... OK
* DONE

Status: 1 ERROR, 2 WARNINGs, 1 NOTE
See /tmp/RcppAnnoy/..Rcheck/00check.log for details.
root@a7687c014e55:/tmp/RcppAnnoy# 

The log shows that, thanks to check.r, we first download and then install the required packages Rcpp, BH, RUnit and RcppAnnoy itself (in the CRAN release). Rcpp is installed first; we then cut out the middle until we get to ... the failure we set out to confirm.

Now having a tool to confirm the error, we can work on improved code.

One such fix currently under inspection in a non-release version 0.0.5.1 then passes with the exact same invocation (but pointing at RcppAnnoy_0.0.5.1.tar.gz):

edd@max:~/git$ docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt --install-deps RcppAnnoy_0.0.5.1.tar.gz
also installing the dependencies ‘Rcpp’, ‘BH’, ‘RUnit’
[...]
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘runUnitTests.R’
 OK
* checking PDF version of manual ... OK
* DONE

Status: 1 WARNING
See /mnt/RcppAnnoy.Rcheck/00check.log for details.

edd@max:~/git$

This proceeds the same way, from the same pristine, clean container for testing. It first installs the four required packages, and then proceeds to test the new and improved tarball, which now passes the test that failed above with no issues. Good.

So we now have an "appliance" container anybody can download for free from the Docker Hub, and deploy as we did here in order to have a fully automated, one-command setup for testing for UBSAN errors.

UBSAN is a very powerful tool. We are only beginning to deploy it. There are many more useful configuration settings. I would love to hear from anyone who would like to work on building this out via the R-devel-SAN-Clang GitHub repo. Improvements to the littler scripts are similarly welcome (and I plan on releasing an updated littler package "soon").

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 January, 2015 08:12PM

hackergotchi for EvolvisForge blog

EvolvisForge blog

Debian/m68k hacking weekend cleanup

OK, time to clean up ↳ tarent so people can work again tomorrow.

Not much to clean though (the participants were nice and cleaned up after themselves ☺), so it’s mostly putting stuff back to where it belongs. Oh, and drinking more of the cool Belgian beer Geert (Linux upstream) brought ☻

We were productive, reporting and fixing kernel bugs, fixing hardware, swapping and partitioning discs, upgrading software, getting buildds (mostly Amiga) back to work, trying X11 (kdrive) on a bare metal Atari Falcon (and finding a window manager that works with it), etc. – I hope someone else writes a report; for now we have a photo and a screenshot (made with trusty xwd). Watch the debian-68k mailing list archives for things to come.

I think that, issues with electric cars aside, everyone liked the food places too ;-)

18 January, 2015 04:16PM by Thorsten Glaser

Andreas Metzler

Another new toy

Given that snow is still a little bit sparse for snowboarding and the weather could be improved on, I have made myself a late Christmas present: Torggler TS 120 Tourenrodel Spezial

It is a rather sporty rodel (Torggler TS 120 Tourenrodel Spezial 2014/15, 9kg weight, with fast (non-stainless) "racing rails" and a 22° angle of the runners) but not a competition model. I wish I had bought this years ago. It is a lot more comfortable than a classic sled ("Davoser Schlitten"), since one sits in the sled instead of on top of it, somewhat like in a hammock. Being able to steer without putting a foot into the snow has the nice side effect that the snow stays on the ground instead of ending up in my face. Obviously it is also faster, which is a huge improvement even for recreational riding, since it makes the difference between riding the sledge and pulling it on flattish stretches. Strongly recommended.

FWIW I ordered this via rodelfuehrer.de (they started with a guidebook of luge tracks, which translates to "Rodelführer"), where I would happily order again.

18 January, 2015 03:35PM by Andreas Metzler

hackergotchi for Chris Lamb

Chris Lamb

Adjusting a backing track with SoX

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems:

  • The recording was made at A=442, rather than the more standard A=440.
  • The tempi of the movements were not to my taste, either too fast or too slow.

SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be provided with a dimensionless "cent" unit—ie. 1/100th of a semitone—rather than the 442Hz and 440Hz reference frequencies.

First, we calculate the cent difference with:

1200 × log2(440/442) ≈ -7.85 cents
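
A quick way to reproduce the number, assuming a Python interpreter is handy:

$ python -c 'import math; print(1200 * math.log(440 / 442.0, 2))'

which prints roughly -7.85.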

Next, we rip the material from the CD:

$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]

And finally we adjust the tempo and pitch:

$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00 # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95 # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01 # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03 # Too slow!

(I'm converting to MP3 at the same time as it'll be more convenient on my phone.)

18 January, 2015 12:28PM

Ian Campbell

Using Grub 2 as a bootloader for Xen PV guests on Debian Jessie

I recently wrote a blog post on using grub 2 as a Xen PV bootloader for work. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want the full background and the stuff on building grub from source etc. then see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests then in your guest configuration, depending on whether you want a 32- or 64-bit PV guest write either:

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ... line; also omit ramdisk = ... and any command-line related stuff such as root = ..., extra = ... or cmdline = ...) and your guests will boot using Grub 2, much like on native hardware.
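
For illustration, a complete minimal 64-bit PV guest configuration might then look roughly like this (the name, memory, disk and vif values are made up and will need adjusting for your setup):

name   = "jessie-pv"
memory = 1024
vcpus  = 2
disk   = [ 'phy:/dev/vg0/jessie-pv,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"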

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host, these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times), so it is the grub-xen which should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).

18 January, 2015 09:23AM

hackergotchi for Guido Günther

Guido Günther

whatmaps 0.0.9

I have released whatmaps 0.0.9, a tool to check which processes map shared objects of a certain package. It can integrate into apt to automatically restart services after a security upgrade.

This release fixes the integration with recent systemd (as in Debian Jessie), makes logging more consistent and eases integration into downstream distributions. It's available in Debian Sid and Jessie and will show up in Wheezy-backports soon.

This blog is flattr enabled.

18 January, 2015 09:17AM

hackergotchi for Rogério Brito

Rogério Brito

Uploading SICP to Youtube

Intro

I am not alone in considering Harold Abelson and Gerald Jay Sussman's recorded lectures based on their book "Structure and Interpretation of Computer Programs" a masterpiece.

There are many things to like about the content of the lectures, beginning with some pearls of wisdom about the craft of writing software (even though this is not really a "software engineering" book), the clarity with which the concepts are described, the Freedom-friendly attitude of the authors regarding the material that they produced, the breadth of the subjects covered, and much more.

The videos, their length, and splitting them

The course consists of 20 video files and they are all uploaded on Youtube already.

There is one thing, though: while the lectures are naturally divided into segments (the instructors took a break after every 30 minutes or so of lecturing), the videos corresponding to each lecture have all the segments concatenated.

To watch them more comfortably, making it easier to put a few of the lectures on a mobile device and to avoid fast-forwarding long videos from my NAS when I am watching them on my TV (among other factors), I decided to sit down, take notes for each video of where the breaks were, and write a simple Python script to help split the videos into segments and then reencode them.
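
The script itself is not reproduced here, but the idea is simple enough that a small sketch conveys it; this assumes ffmpeg is installed, and the file names and timestamps below are purely illustrative:

import subprocess

# (start, duration) pairs in seconds, noted down while watching -- illustrative only
segments = [(0, 1800), (1800, 1750), (3550, 1700)]

for i, (start, duration) in enumerate(segments, 1):
    # cut one segment out of the full lecture and reencode it
    subprocess.check_call([
        "ffmpeg", "-ss", str(start), "-t", str(duration),
        "-i", "lecture-1a.avi",
        "-c:v", "libx264", "-c:a", "libmp3lame",
        "lecture-1a-part%d.mkv" % i,
    ])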

I decided not to take the videos from Youtube to perform my splitting activities, but, instead, to operate on one of the "sources" that the authors once had on their homepage (videos encoded in DivX and audio in MP3). The videos are still available as a torrent file (with a magnet link for the hash 650704e4439d7857a33fe4e32bcfdc2cb1db34db), with some very good souls still seeding it (I can seed it too, if desired). Alas, I have not found a source for the higher quality MPEG1 videos, but I think that the videos are legible enough to avoid bothering with a larger download.

I soon found out that there are some beneficial side-effects of splitting the videos, like not having to edit/equalize the entire audio of the videos when only a segment was bad (which is understandable, as these lectures were recorded almost 30 years ago and technology was not as advanced as things are today).

So, since I already have the split videos lying around here, I figured out that, perhaps, other people may want to download them, as they may be more convenient to watch (say, during commutes or whatever/whenever/wherever it best suits them).

Of course, uploading all the videos is going to take a while and I would only do it if people would really benefit from them. If you think so, let me know here (or if you know someone who would like the split version of the videos, spread the word).

18 January, 2015 01:52AM

January 17, 2015

Jonathan Wiltshire

Alcester BSP, day two

Neil has abandoned his reputation as an RM machine, and instead concentrated on making the delayed queue as long as he can. I’m reliably informed that it’s now at a 3-year high. Steve is delighted that his reining-in work is finally having an effect.


Alcester BSP, day two is a post from: jwiltshire.org.uk | Flattr

17 January, 2015 11:02PM by Jon

Tim Retout

CPAN PR Challenge - January - IO-Digest

I signed up to the CPAN Pull Request Challenge - apparently I'm entrant 170 of a few hundred.

My assigned dist for January was IO-Digest - this seems a fairly stable module. To get the ball rolling, I fixed the README, but this was somehow unsatisfying. :)

To follow-up, I added Travis-CI support, with a view to validating the other open pull request - but that one looks likely to be a platform-specific problem.

Then I extended the Travis file to generate coverage reports, and separately realised the docs weren't quite fully complete, so fixed this and added a test.

Two of these have already been merged by the author, who was very responsive.

Part of me worries that Github is a centralized, proprietary platform that we now trust most of our software source code to. But activities such as this are surely a good thing - how much harder would it be to co-ordinate 300 volunteers to submit patches in a distributed fashion? I suppose you could do something similar with the list of Debian source packages and metadata about the upstream VCS, say...

17 January, 2015 10:01PM

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Updating a profile in Debian’s apparmor-profiles-extra package

I have gotten my first patch to the Pidgin AppArmor profile accepted upstream. One of my mentors thus suggested that I patch the updated profile into the Debian package myself. This is fairly easy and simply requires that one knows how to use Git.

If you want to get write access to the apparmor-profiles-extra package in Debian, you first need to request access to the Collaborative Maintenance Alioth project, collab-maint in short. This also requires setting up an account on Alioth.

Once all is set up, one can export the apparmor-profiles-extra Git repository.
If you simply want to submit a patch, it’s sufficient to clone this repository anonymously.
Otherwise, one should use the "--auth" parameter with "debcheckout". The "debcheckout" command is part of the "devscripts" package:

debcheckout --auth apparmor-profiles-extra

Go into the apparmor-profiles-extra folder and create a new working branch:

git branch workingtitle
git checkout workingtitle

Get the latest version of profiles from upstream. In “profiles”, one can edit the profiles.

Test.

The debian/README.Debian file should be edited to note the relevant changes one has just imported from upstream.

Then, one could either push the branch to collab-maint:

git commit -a
git push origin workingtitle

or simply submit a patch to the Debian Bug Tracking System against the apparmor-profiles-extra package.

The Debian AppArmor packaging team mailing list will receive a notification of this commit. This way, commits can be peer reviewed and merged by the team.

17 January, 2015 03:00PM by u

hackergotchi for Guido Günther

Guido Günther

krb5-auth-dialog 3.15.4

To keep up with GNOME's schedule I've released krb5-auth-dialog 3.15.4. The changes in 3.15.1 and 3.15.4 include, besides updated translations, the replacement of deprecated GTK+ widgets, minor UI cleanups and bug fixes, a header bar fix that makes us use header bar buttons only if the desktop environment has them enabled:

krb5-auth-dialog with header bar krb5-auth-dialog without header bar

This makes krb5-auth-dialog better integrated into other desktops again, thanks to mclasen's awesome work.

This blog is flattr enabled.

17 January, 2015 09:42AM


Jonathan Wiltshire

Alcester BSP, day one

Perhaps I should say evening one, since we didn’t get going until nine or so. I have mostly been processing unblocks – 13 in all. We have a delayed upload and a downgrade in the pipeline, plus a tested diff for Django. Predictably, Neil had the one and only removal request so far.


Alcester BSP, day one is a post from: jwiltshire.org.uk | Flattr

17 January, 2015 12:25AM by Jon

January 16, 2015

hackergotchi for Erich Schubert

Erich Schubert

Year 2014 in Review as Seen by a Trend Detection System

We ran our trend detection tool Signi-Trend (published at KDD 2014) on news articles collected for the year 2014. We removed the category of financial news, which is overrepresented in the data set. Below are the (described) results, from the top 50 trends (I will push the raw result to appspot if possible due to file limits). The top 10 trends are highlighted in bold.
January
2014-01-29: Obama's State of the Union address
February
2014-02-05..23: Sochi Olympics (11x, including the four below)
2014-02-07: Gay rights protesters arrested at Sochi Olympics
2014-02-08: Sochi Olympics begins
2014-02-16: Injuries in Sochi Extreme Park
2014-02-17: Men's Snowboard cross finals called off because of fog
2014-02-19: Violence in Ukraine and Kiev
2014-02-22: Yanukovich leaves Kiev
2014-02-23: Sochi Olympics close
2014-02-28: Crimea crisis begins
March
2014-03-01..06: Crimea crisis escalates further (3x)
2014-03-08: Malaysia Airlines machine missing in South China Sea (2x)
2014-03-18: Crimea now considered part of Russia by Putin
2014-03-28: U.N. condemns Crimea's secession
April
2014-04-17..18: Russia-Ukraine crisis continues (3x)
2014-04-20: South Korea ferry accident
May
2014-05-18: Cannes film festival
2014-05-25: EU elections
June
2014-06-13: Islamic state fighting in Iraq
2014-06-16: U.S. talks to Iran about Iraq
July
2014-07-17..19: Malaysian airline shot down over Ukraine (3x)
2014-07-20: Israel shelling Gaza kills 40+ in a day
August
2014-08-07: Russia bans EU food imports
2014-08-20: Obama orders U.S. air strikes in Iraq against IS
2014-08-30: EU increases sanctions against Russia
September
2014-09-04: NATO summit
2014-09-23: Obama orders more U.S. air strikes against IS
October
2014-10-16: Ebola case in Dallas
2014-10-24: Ebola patient in New York is stable
November
2014-11-02: Elections: Romania, and U.S. rampup
2014-11-05: U.S. Senate elections
2014-11-25: Ferguson prosecution
December
2014-12-08: IOC Olympics sport additions
2014-12-11: CIA prisoner center in Thailand
2014-12-15: Sydney cafe hostage siege
2014-12-17: U.S. and Cuba relations improve unexpectedly
2014-12-19: North Korea blamed for Sony cyber attack
2014-12-28: AirAsia flight 8501 missing

16 January, 2015 05:22PM

Richard Hartmann

Release Critical Bug report for Week 03

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1100 (Including 178 bugs affecting key packages)
    • Affecting Jessie: 172 (key packages: 104) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 128 (key packages: 80) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 19 bugs are tagged 'patch'. (key packages: 10) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 8 bugs are marked as done, but still affect unstable. (key packages: 5) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 101 bugs are neither tagged patch, nor marked done. (key packages: 65) Help make a first step towards resolution!
      • Affecting Jessie only: 44 (key packages: 24) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 18 bugs are in packages that are unblocked by the release team. (key packages: 7)
        • 26 bugs are in packages that are not unblocked. (key packages: 17)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99) 147 (112+35)
1 93 (60+33) 287 (171+116) 140 (104+36)
2 82 (46+36) 271 (162+109) 157 (124+33)
3 25 (15+10) 249 (165+84) 172 (128+44)
4 14 (8+6) 244 (176+68)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

16 January, 2015 04:21PM by Richard 'RichiH' Hartmann

hackergotchi for EvolvisForge blog

EvolvisForge blog

Debian/m68k hacking weekend commencing soonish

As I said, I did not let certain events that begin with “lea” and end with “ing” prevent me from organising a Debian/m68k hack weekend. Well, that weekend is now.

I’m too unorganised, and I spent too much time over the last few evenings organising things, so I have already built up a sleep deficit ☹ and the feedback was slow. (But so are the computers.) And someone I’d have loved to come was hurt and can’t come.

On the plus side, several people I’ve long wanted to meet IRL are coming, either already today or tomorrow. I hope we all will have a lot of fun.

Legal disclaimer: “Debian/m68k” is a port of Debian™ to m68k. It used to be official, but now isn’t. It belongs to debian-ports.org, which may run on DSA hardware, but is not acknowledged by Debian at large, unfortunately. Debian is a registered trademark owned by Software in the Public Interest, Inc.

16 January, 2015 02:26PM by Thorsten Glaser

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Polygons

I stumbled upon this site thanks to Helga: Parable of the Polygons. On the site you can interactively find out how harmless choices can make a harmful world. I found it quite eye-opening. And what most caught my attention, but isn't part of the site, is that only unhappy polygons are willing to move. Those who are just OK with their neighbourhood but not really happy about it aren't willing to move. Which made me try it out in my own way: trying to create the most diverse environment possible by temporarily making as many polygons as possible unhappy, to find out whether it's possible to make as many polygons as possible happy in the long run.

... which is actually part of the way I see my own life. I have always sort of tried to confront people into thinking. I mean, it's not that common to see a by-the-looks male person wearing a skirt. And since I moved out in July into a small intermediate flat, and thus a new neighbourhood, I found the confidence (in part also thanks to the confidence built up at these fine feminist conferences) to walk my hometown in a skirt. Only on a few occasions, when meeting up with friends, mostly in the evening or at night, but it was always a nice experience. To be honest, I only felt uncomfortable once, when there was a probably right-wing skinhead at the subway station. There were too many other people around, so I tried to avoid eye contact, but it didn't feel good.

Diversity is something that society needs, in all aspects, and also within the Debian project. I strongly believe that there can't be much innovation or forward movement if everyone thinks in the same direction. That only means potential alternative paths won't even get considered, and might get lost. That's one of the core parts of what makes the Free Software community lively and useful. People try different approaches, and in the end there will be adopters of whatever they believe is the better project. Projects pop up every now and then, others starve from loss of interest, users not picking them up, or developers spending their time on other stuff, and that's absolutely fine too. There is always something to be learned, even from those situations.

Speaking of diversity, there is a protest going on later today because the boss of a cafe here in Vienna considered it a good idea to kick out a lesbian couple because they kissed each other in greeting, told them there is no place for their "otherness" in her traditional Viennese cafe, and suggested they rather take it to a brothel. Yesterday she apologised for the tone she had used, saying that, as the CEO of that cafe, she should have been more relaxed. Which literally means she only apologised for the tone she used in her role, but not at all for the message she conveyed. So meh, I hope there will be many people at the protest. Yes, there is an anti-discrimination law around, but it only covers the workplace, not service areas. Welcome to Austria.
On the upside, a court struck down the ban on adoption by same-sex couples just the other day. Hopefully there is still hope for this country. :)


16 January, 2015 01:44PM by Rhonda

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Locate

cassarossa:~> time locate asdkfjhasekjrxhw
locate asdkfjhasekjrxhw  19,49s user 0,46s system 82% cpu 24,071 total

It's 2015. locate still works by a linear scan through a flat file.
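For contrast, even a naive in-memory index answers a miss almost instantly. A toy sketch of the difference (nothing to do with mlocate's actual on-disk format, and only handling exact basenames rather than real substring queries):

    from collections import defaultdict

    paths = ["/usr/bin/locate", "/usr/share/doc/mlocate/README", "/etc/updatedb.conf"]

    # Linear scan, as locate effectively does: every query touches every path,
    # even (especially) when nothing matches.
    misses = [p for p in paths if "asdkfjhasekjrxhw" in p]

    # Toy index by basename: building it costs one pass, but a miss is then a
    # single dictionary lookup.
    index = defaultdict(list)
    for p in paths:
        index[p.rsplit("/", 1)[-1]].append(p)

    hits = index.get("asdkfjhasekjrxhw", [])
    print(misses, hits)   # [] []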

16 January, 2015 11:54AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s fifth report about Debian Long Term Support

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 46 work hours were split equally among 4 paid contributors (note that Thorsten and Raphaël actually spent more hours, because they took over some of the hours that Holger did not use in previous months). Their reports are available:

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 48 hours per month). We still have a couple of new sponsors in the pipeline, but with the new year they have not completed the process yet. Hopefully next month will see a noticeable increase.

As usual, we are looking for more sponsors to reach our minimal goal of funding the equivalent of a half-time position. If you are struggling to spend a leftover budget in the last quarter, now is a good time to see whether you want to include Debian LTS support in your 2015 budget!

In terms of security updates waiting to be handled, the situation looks similar to last month: the dla-needed.txt file lists 30 packages awaiting an update (3 more than last month), and the list of open vulnerabilities in Squeeze shows about 56 affected packages in total. We are not managing to clear the backlog, but it is not getting significantly worse either.

Thanks to our sponsors


16 January, 2015 09:41AM by Raphaël Hertzog

January 15, 2015

hackergotchi for Junichi Uekawa

Junichi Uekawa

Opposite to strong typing.

Opposite to strong typing. Maybe "weak typing" is too discriminatory; we should call it "typing-challenged". Like: SQLite is a typing-challenged language.
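To make the joke concrete: SQLite columns have type affinity rather than strict types, so a column declared INTEGER will happily store text. A tiny illustration (mine, not from the post):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (n INTEGER)")
    conn.execute("INSERT INTO t VALUES (42)")            # an integer, as declared
    conn.execute("INSERT INTO t VALUES ('forty-two')")   # non-numeric text is stored as-is
    for row in conn.execute("SELECT n, typeof(n) FROM t"):
        print(row)   # (42, 'integer') then ('forty-two', 'text')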

15 January, 2015 09:18PM by Junichi Uekawa

hackergotchi for Daniel Pocock

Daniel Pocock

Disk expansion

A persistent problem that I encounter with hard disks is the capacity limit. If only hard disks could expand like the Tardis.

My current setup at home involves a HP Microserver. It has four drive bays carrying two SSDs (for home directories) and two Western Digital RE4 2TB drives for bulk data storage (photos, source tarballs and other things that don't change often). Each pair of drives is mirrored. I chose the RE4 because I use RAID1 and they offer good performance and error recovery control which is useful in any RAID scenario.

When I put in the 2TB drives, I created a 1TB partition on each for Linux md RAID1 and another 1TB partition on each for BtrFs.

Later I added the SSDs and I chose BtrFs again as it had been working well for me.

Where to from here?

Since getting a 36 megapixel DSLR that produces 100MB raw images and 20MB JPEGs I've been filling up that 2TB faster than I could have ever imagined.

I've also noticed that vendors are offering much bigger NAS and archive disks so I'm tempted to upgrade.

First I looked at the Seagate Archive 8TB drives, 2TB bigger than the nearest competition. Discussion on Reddit suggests they don't have Error Recovery Control / TLER, however, which leaves me feeling they are not the right solution for me.

Then I had a look at WD Red. Slightly less performant than the RE4 drives I run now, but with the possibility of 6TB per drive and a little cheaper. Apparently they have TLER though, just like the RE4 and other enterprise drives.
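For what it's worth, whether a particular drive actually supports configurable error recovery can be checked from the OS with smartmontools. A rough sketch (the device path is only an example, and the drive has to expose the SCT ERC feature):

    import subprocess

    def scterc_report(device):
        """Print a drive's SCT Error Recovery Control (TLER) settings.

        'smartctl -l scterc' reports the current read/write recovery time
        limits, or states that the feature is unsupported.
        """
        try:
            out = subprocess.check_output(["smartctl", "-l", "scterc", device],
                                          stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError as err:
            out = err.output   # smartctl uses non-zero exit codes for some conditions
        print(out.decode())

    scterc_report("/dev/sda")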

Will 6 or 8TB create new problems?

This all leaves me scratching my head and wondering about a couple of things though:

  • Will I run into trouble with the firmware in my HP Microserver if I try to use such a big disk?
  • Should I run the whole thing with BtrFs and how well will it work at this scale?
  • Should I avoid the WD Red and stick with RE4 or similar drives from Seagate or elsewhere?

If anybody can share any feedback it would be really welcome.

15 January, 2015 08:29PM by Daniel.Pocock

Mark Brown

Kernel build times for automated builders

Over the past year or so various people have been automating kernel builds with the aim of both setting the standard that things should build reliably and using the resulting builds for automated testing. This has been having good results; it’s especially nice to compare the results for older stable kernel builds with current ones and notice how much happier everything is.

One of the challenges with doing this is that for good coverage you really need to include allmodconfig or allyesconfig builds to ensure coverage of as much kernel code as possible but that’s fairly resource intensive given the size of the kernel, especially when you want to cover several architectures. It’s also fairly important to get prompt results, development trees are changing all the time and the longer the gap between a problem appearing and it being identified the more likely the report is to be redundant.

Since I was looking at my own setup, and I know of several people who’ve done similar benchmarking, I thought I’d publish some ballpark numbers for from-scratch allmodconfig builds on a single architecture:

i7-4770 with SSD      20 minutes
linode 2048           1.25 hours
EC2 m3.medium         1.5 hours
EC2 c3.large          2 hours
Cubietruck with SSD   20 hours

All with the number of tasks spawned by make set to the number of execution threads the system has and no speedups from anything like ccache. I may keep this updated in future with further results.

Obviously there are trade-offs beyond the time, especially for someone like me doing this at home with their own resources: my desktop is substantially faster than anything else I’ve tried, but I’m also using it interactively for my work, it’s not easily accessible when I'm not at home, and the fans spin up during builds, while EC2 starts to cost noticeable money as you add more builds.

15 January, 2015 06:07PM by Mark Brown

hackergotchi for Michal Čihař

Michal Čihař

Weblate UI polishing

After releasing Weblate 2.0 with the Bootstrap-based UI, there were still a lot of things to improve. Weblate 2.1 brought more consistency in the use of button colors and icons. Weblate 2.2 will bring improvements to other graphical elements.

One thing which had been in our issue tracker for quite a long time is providing our own renderer for the SVG status badge. So far Weblate has offered either a PNG badge or an external SVG rendered by shields.io. Relying on an external service was not good in the long term, and it also caused requests to a third-party server on many pages, which could be considered bad privacy-wise.

Since this week, Weblate can render the SVG badge on its own, and it also matches the current style used by other services (e.g. Travis CI):

Translation status
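Such a badge is just a small piece of generated SVG markup. Purely as an illustration (this is not Weblate's actual code, and the sizing heuristics are made up), a minimal flat-style badge renderer could look like this:

    BADGE = """<svg xmlns="http://www.w3.org/2000/svg" width="{total}" height="20">
      <rect width="{left}" height="20" fill="#555"/>
      <rect x="{left}" width="{right}" height="20" fill="{colour}"/>
      <g fill="#fff" font-family="DejaVu Sans,sans-serif" font-size="11" text-anchor="middle">
        <text x="{lx}" y="14">{label}</text>
        <text x="{vx}" y="14">{value}</text>
      </g>
    </svg>"""

    def render_badge(label, value, colour="#4c1"):
        # Crude text-width estimate; a real renderer would measure the font.
        left, right = 6 * len(label) + 10, 6 * len(value) + 10
        return BADGE.format(total=left + right, left=left, right=right, colour=colour,
                            lx=left / 2.0, vx=left + right / 2.0,
                            label=label, value=value)

    print(render_badge("translated", "84%"))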

One last thing which really did not fit into the new UI was the activity charts. In the past they were rendered as PNG on the server side, but for upcoming releases we have switched to the Chartist JavaScript library and render them as SVG on the client side. This way we can nicely style them to fit the page, they scale properly, and they also reduce server load. You can see them in action on the Hosted Weblate server:

Weblate activity chart


15 January, 2015 05:00PM by Michal Čihař (michal@cihar.com)

hackergotchi for Lunar

Lunar

80%

Unfortunately I could not go on stage at the 31st Chaos Communication Congress to present reproducible builds in Debian alongside Mike Perry from the Tor Project and Seth Schoen from the Electronic Frontier Foundation. I've tried to make up for it, though… and we have made amazing progress.

Wiki reorganization

What was a massive and frightening wiki page now looks really more welcoming:

Screenshot of ReproducibleBuilds on Debian wiki

Depending on what one is looking for, it should be much easier to find. There's now a high-level status overview given on the landing page, maintainers can learn how to make their packages reproducible, enthusiasts can more easily find what can help the project, and we have even started writing some history.

.buildinfo for all packages

New Year's Eve saw me hacking Perl to write dpkg-genbuildinfo. Similar to dpkg-genchanges, it is run by dpkg-buildpackage to produce .buildinfo control files. This is where the build environment and the hashes of the source and binary packages are recorded. This script, integrated with dpkg, replaces the previous interim debhelper solution written by Niko Tyni.

We used to fix mtimes in control.tar and data.tar using a specific addition to debhelper named dh_fixmtimes. To better support the ALWAYS_EXCLUDE environment variable, and for pragmatic reasons, we moved the process into dh_builddeb.

Both changes were quickly pushed to our continuous integration platform. Before, only packages using dh would create a .buildinfo and thus eventually be considered reproducible. With these modifications, many more packages had their chance… and this shows:

Growing amount of packages considered reproducible

Yes, with our experimental toolchain we are now at more than eighty percent! That's more than 17200 source packages!

srebuild

Another big item on the todo-list was crossed over by Johannes Schauer. srebuild is a wrapper around sbuild:

Given a .buildinfo file, it first finds a timestamp of Debian Sid from snapshot.debian.org which contains the requested packages in their exact versions. It then runs sbuild with the right architecture as given by the .buildinfo file and the right base system to upgrade from, as given by the version of the base-files package version in the .buildinfo file. Using two hooks it will install the right package versions and verify that the installed packages are in the right version before the build starts.
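The key enabler here is snapshot.debian.org's dated archive: given the timestamp, an APT source can point at the archive exactly as it was at build time. A rough sketch of that part of the idea (my illustration, not srebuild's code; the timestamp shown is made up):

    def snapshot_sources_line(timestamp, suite="sid"):
        """Build an APT 'deb' line for snapshot.debian.org at a point in time.

        The timestamp uses snapshot.debian.org's format, e.g. 20150115T220259Z.
        A chroot populated from this source sees the package versions that
        were current back then.
        """
        archive = "http://snapshot.debian.org/archive/debian/{}/".format(timestamp)
        return "deb {} {} main".format(archive, suite)

    print(snapshot_sources_line("20150115T220259Z"))
    # deb http://snapshot.debian.org/archive/debian/20150115T220259Z/ sid main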

Understanding problems

Over 1700 packages have now been reviewed to understand why build results could not be reproduced on our experimental platform. The variations between the two builds are currently limited to time and file ordering, but this has still uncovered many problems. There are still toolchain fixes to be made (more than 180 packages for the PHP registry) which can make many packages reproducible at once, but others, like C pre-processor macros, will require many individual changes.

debbindiff, the main tool used to understand differences, has gained support for .udeb, TrueType and OpenType fonts, and PNG and PDF files. It's less likely to crash on problems with encoding or external tools. But most importantly for large packages, it has been made a lot faster, thanks to Reiner Herrmann and Helmut Grohne. Helmut has also been able to spot cross-compilation issues by using debbindiff!

Targeting our efforts

It gives warm fuzzy feelings to hit the 80% mark, but it would be a bit irrelevant if it did not concern packages that matter. Thankfully, Holger worked on producing statistics for more specific package sets. Mattia Rizzolo has also done great work to improve the scripts generating the various pages visible on reproducible.debian.net.

All essential and build-essential packages, except gcc and bash, are considered reproducible or have patches ready. After some lengthy builds, I also managed to come up with a patch to make linux build reproducibly.

Miscellaneous

After my initial attempt to modify r-base to remove a timestamp in R packages, Dirk Eddelbuettel discussed the issue with upstream and came up with a better patch. The latter has already been merged upstream!

Dirk's solution is to allow timestamps to be set using an external environment variable. This is also how I modified FontForge to make it possible to reproduce fonts.
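The pattern is straightforward: before stamping the current time into generated files, the tool checks an externally supplied environment variable. A minimal sketch of the idea (the variable name is hypothetical, not the one used by R or FontForge):

    import os
    import time

    def build_timestamp():
        """Timestamp to embed in generated files.

        If the (hypothetical) BUILD_TIMESTAMP variable is set, honour it so
        that two builds of the same source embed the same time; otherwise
        fall back to the wall clock, as before.
        """
        value = os.environ.get("BUILD_TIMESTAMP")
        if value is not None:
            return int(value)      # seconds since the epoch, fixed by the caller
        return int(time.time())    # non-reproducible default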

Identifiers generated by xsltproc have also been an issue. After reviewing my initial patch, Andrew Ayer came up with a much nicer solution. Its potential performance implications need to be evaluated before submission, though.

Chris West has been working on packages built with Maven amongst other things.

PDF generated by GhostScript, another painful source of troubles, is being worked on by Peter De Wachter.

Holger got X.509 certificates signed by the CA cartel for jenkins.debian.net and reproducible.debian.net. No more scary security messages now. Let's hope next year we will be able to get certificates through Let's Encrypt!

Let's make a difference together

As you can imagine with all that has happened in the past weeks, the #debian-reproducible IRC channel has been a cool place to hang out. It's very energizing to get together and share contributions, exchange tips and discuss the hardest points. Mandatory quote:

* h01ger is very happy to see again and again how this is a nice
         learning circle...! i've learned a whole lot here too... in
         just 3 months... and its going on...!

Reproducible builds are not going to change anything for most of our users. They simply don't care how they get software onto their computers. But they do care about getting the right software without having to worry about it. That's our responsibility as developers. Enabling users to trust their software is important, and a major contribution that we, as Debian, can make to the wider free software movement. Once Jessie is released, we should make a collective effort to make reproducible builds a highlight of our next release.

15 January, 2015 04:36PM

Noah Meyerhans

Spamassassin updates

If you're running Spamassassin on Debian or Ubuntu, have you enabled automatic rule updates? If not, why not? If possible, you should enable this feature. It should be as simple as setting "CRON=1" in /etc/default/spamassassin. If you choose not to enable this feature, I'd really like to hear why. In particular, I'm thinking about changing the default behavior of the Spamassassin packages such that automatic rule updates are enabled, and I'd like to know if (and why) anybody opposes this.

Spamassassin hasn't been providing rules as part of the upstream package for some time. In Debian, we include a snapshot of the ruleset from an essentially arbitrary point in time in our packages. We do this so Spamassassin will work "out of the box" on Debian systems. People who install Spamassassin from source must download rules using Spamassassin's updates channel. The typical way to use this channel is to have cron or something similar periodically check for rule changes. This allows the anti-spam community to quickly adapt to changes in spammer tactics, and it allows you to actually benefit from their work by taking advantage of their newer, presumably more accurate, rules. It also allows for quick reaction to issues such as the ones described in bugs 738872 and 774768.

If we do change the default, there are a couple of possible approaches we could take. The simplest would be to simply change the default value of the CRON variable in /etc/default/spamassassin. Perhaps a cleaner approach would be to provide a "spamassassin-autoupdates" package that would simply provide the cron job and a simple wrapper program to perform the updates. The Spamassassin package would then specify a Recommends relationship with this package, thus providing the default enabled behavior while still providing a clear and simple mechanism to disable it.

15 January, 2015 03:44PM