May 24, 2015

Russ Allbery

Catch-up haul

As always, even though I've not been posting much, I'm still buying books. This is a catch-up post listing a variety of random purchases.

Katherine Addison — The Goblin Emperor (sff)
Milton Davis — From Here to Timbuktu (sff)
Mark Forster — How to Make Your Dreams Come True (non-fiction)
Angela Highland — Valor of the Healer (sff)
Marko Kloos — Terms of Enlistment (sff)
Angela Korra'ti — Faerie Blood (sff)
Cixin Liu — The Three-Body Problem (sff)
Emily St. John Mandel — Station Eleven (sff)
Sydney Padua — The Thrilling Adventures of Lovelace and Babbage (graphic novel)
Melissa Scott & Jo Graham — The Order of the Air Omnibus (sff)
Andy Weir — The Martian (sff)

Huh, for some reason I thought I'd bought more than that.

I picked up the rest of the Hugo nominees that aren't part of a slate, and as it happens have already read all the non-slate nominees at the time of this writing (although I'm horribly behind on reviews). I also picked up the first book of Marko Kloos's series, since he did the right thing and withdrew from the Hugos once it became clear what nonsense was going on this year.

The rest is a pretty random variety of on-line recommendations, books by people who made sense on the Internet, and books by authors I like.

24 May, 2015 11:44PM

Norbert Preining

TeX Live 2015 DVD preparation starts

As the last step in the preparation of the TeX Live 2015 release we have now completely frozen updates to the repository and built the (hopefully) final image. The following weeks will see more testing and preparation of the gold master for DVD production.

The last weeks were full of frantic rebuilds of binaries, in particular due to bugs in several engines that kept turning up at the last minute. Even after the closing time we found a serious problem with Windows installations in administrator mode, where unprivileged users didn’t have read access to the format dumps. This was due to the use of File::Temp on Windows, which sets overly restrictive ACLs.

Unless really serious bugs show up, further changes will have to wait till after the release. Let’s hope for some peaceful two weeks.

So, it is time to prepare for release parties 😉 Enjoy!

24 May, 2015 10:50PM by Norbert Preining

Wouter Verhelst

Fixing CVE-2015-0847 in Debian

Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, of which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for the various supported versions of nbd-server in Debian. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:

  • For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailing list. Uploads go to ftp-master with squeeze-lts as target distribution.
  • For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailing list. Uploads go to ftp-master with $dist-backports as target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. Not sure whether that's strictly required, but I didn't think it would do harm; even so, that did mean the procedure for backports was even more involved.
  • For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use dpkg-buildpackage's -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive (a sketch of this flow follows this list).
  • For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.
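
As a concrete illustration of the security-team flow, here is a rough sketch; the package name, versions, and dput target below are placeholders rather than the exact nbd upload, and the dput target name may differ depending on your dput configuration:

# build a source-only upload; -sa forces the orig.tar.gz into the .changes
dpkg-buildpackage -S -sa
# generate the debdiff to send to the security team when asking permission
debdiff nbd_1.0-1.dsc nbd_1.0-1+deb8u1.dsc > nbd.debdiff
# once acked, upload to security-master rather than ftp-master
dput security-master nbd_1.0-1+deb8u1_source.changes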

While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.

As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate acknowledgement from another team is required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.

24 May, 2015 07:18PM

Russ Allbery

git-pbuilder 1.34

Long time without a blog post. My time got eaten by work and travel and work-related travel. Hopefully more content soon.

This is just a quick note about the release of version 1.34 of the git-pbuilder script (which at some point really should just be rewritten in Python and incorporated entirely into the git-buildpackage package). Guido Günther added support for creating chroots for LTS distributions.
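
For those who haven't used it, a minimal sketch of creating such a chroot, assuming the DIST environment variable convention git-pbuilder uses to select the target distribution:

# create, and later refresh, a base chroot for an LTS distribution
DIST=squeeze-lts git-pbuilder create
DIST=squeeze-lts git-pbuilder update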

You can get the latest version from my scripts page.

24 May, 2015 12:59AM

May 23, 2015

Eddy Petrișor

Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1

About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few BSDNow episodes and becoming a regular listener, but it was a really interesting experience (it was also somewhat frustrating, mostly due to lacking documentation or proprietary code).

Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about installing FreeBSD, no information about OpenBSD, and some very detailed information about NetBSD on the Linksys NSLU2.

I was very impressed by the NetBSD build.sh script, which can be used to cross-compile the entire NetBSD system - the NetBSD kernel and the base system - even when run on a Linux host; to do that, it also builds the appropriate toolchain. Having some experience with cross-compilation for GNU/Linux embedded systems, I can honestly say this is immensely impressive. Well done, NetBSD!

After a few failed attempts to properly follow the instructions and lots of hours of (re)building, I had the kernel and the sets (the NetBSD system is split into several parts grouped by functionality; these are called sets), so I was in a position to set things up for net booting - kernel loading via TFTP and the rootfs on NFS.

But it wouldn't have been challenging if the instructions were followed to the letter, so the first thing I changed was the DHCP setup: I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2, since that seemed like a waste of resources when I already had dnsmasq running.

After some effort and struggling with missing documentation, I managed to use dnsmasq both to pass DHCP boot parameters to the slug and as the TFTP server - I later documented this on my blog for future reference.
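
For illustration, the relevant dnsmasq.conf directives look roughly like this (the MAC address, IP addresses and paths are placeholders, not my actual values):

# serve TFTP from dnsmasq as well
enable-tftp
tftp-root=/srv/tftp
# pin a fixed address for the slug
dhcp-host=00:11:22:33:44:55,192.168.0.10
# boot file name, (empty) server name, TFTP server address
dhcp-boot=netbsd-nfs.bin,,192.168.0.2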

Setting up NFS wasn't a problem, but when trying to boot I found that I had misread some of the NSLU2-related information on the NetBSD wiki at least 3 or 4 times. To be able to debug what was happening, I concluded the slug should have a serial console attached to it, which helped a lot.

Still the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.

Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list I found out that I might have a better chance with specific older versions. In practice what really worked was the code from the netbsd_6_1 branch.

Discussions on the port-arm mailing list, some digging into the (recently found) PRs (problem reports), and a successful execution of the trunk kernel (at the time, version 7.99.4) together with the 6.1.5 userspace led me to the conclusion that the NetBSD userspace for armbe was broken in the trunk branch.

And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaves by default.

By default, the RedBoot bootloader flashed into the NSLU2 waits 2 seconds for manual interaction (it waits for a ^C) on the serial console or on the telnet RedBoot prompt; then, if no such event happens, it copies the Linux image it has in flash, starting at address 0x50060000, into RAM at address 0x01d00000 (after stripping the Sercomm header) and executes the copied code from RAM.

Of course, this is not a very handy way to try to boot things from TFTP, so my first idea to overcome this limitation was to use a second stage bootloader which would do the loading via TFTP of the NetBSD kernel, then execute it from RAM. Flashing this second stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except power on would be necessary when a new kernel+userspace pair is ready to be tested.

Another advantage was that I would not risk bricking the NSLU2 since I would not be changing RedBoot, the original bootloader.

I knew Apex was used as the second stage bootloader in Debian, so I started configuring my own version of the APEX bootloader to make it work for the netbsd-nfs.bin image to be loaded via TFTP.

My first disappointment was that Apex did not support receiving the boot parameters via DHCP, only via RARP (it was clearly less tested with BOOTP or DHCP), and TFTP was documented in the code as being problematic. That meant I would have to hard-code the boot configuration or configure RARP, but that wasn't too bad.

Later I found out that I had wasted time on that avenue, because the network driver in Apex was some Intel code (the NPE Access Library) which can't be freely distributed, but could be downloaded from Intel's site back in 2008-2009. The bad news was that current versions did not work at all with the old patchwork that had been done in Apex to let the driver made for Linux compile in a world of its own so it could be incorporated into Apex.

I was stuck, and the only options I saw were:
  1. Fight with the available Intel code and make it compile in Apex
  2. Incorporate the NPE driver from NetBSD into a rump kernel which will be included in Apex, since I knew the NetBSD driver only needed a very easily obtainable binary blob, instead of the entire driver as was in Apex before
  3. Hack together an Apex version that simulates the typing of the necessary commands to load the netbsd-nfs.bin image inside RedBoot, or in other words, call from Apex the RedBoot functions necessary to load from TFTP and execute NetBSD.
Option 1 did not look that appealing after looking into the horrible Intel build system and its endless dependencies on a specific Linux kernel version.

Option 2 was more appealing, but since I didn't know NetBSD and had only once tried to build and run a NetBSD rump kernel, it seemed like a project doable only by an experienced NetBSD developer, or at least an experienced NetBSD user, which I was not.

So I was left with option 3, which meant I had to do some reverse engineering of the code, because, although RedBoot is GPL, Linksys did not publish the source from which the running RedBoot was built.


(continues here)

23 May, 2015 03:21PM by eddyp (noreply@blogger.com)

Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 2 - RedBoot reverse engineering and APEX hacking

(continuation of Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1; meanwhile, my article was mentioned briefly in BSDNow Episode 89 - Exclusive Disjunction around minute 36:25)

Choosing to call RedBoot from a hacked Apex


As I was saying in my previous post, in order to be able to automate the booting of the NetBSD image via TFTP, I opted for using a 2nd stage bootloader (planning to flash it in the NSLU2 instead of a Linux kernel), and since Debian was already using Apex, I chose Apex, too.

The first problem I found was that the networking support in Apex relied on an old version of the Intel NPE library which I couldn't find on Intel's site. The new version was incompatible with (i.e. not building against) the old build wrapper in Apex, so I was faced with 3 options:
  1. Fight with the available Intel code and try to force it to compile in Apex
  2. Incorporate the NPE driver from NetBSD into a rump kernel to be included in Apex instead of the original Intel code, since the NetBSD driver only needed an easily compilable binary blob
  3. Hack together an Apex version that simulates typing the necessary RedBoot commands to load the NetBSD image via TFTP and execute it.
After taking a look at the NPE driver build system, I concluded there were very few options less attractive than option 1, among them hammering nails through my forehead as an improvement measure against the severe brain damage I would likely suffer after dealing with the NPE "build system".

Option 2 looked like the best option available, given the situation, but my NetBSD-fu was too close to zero to even dream of undertaking such a task. In my opinion, this still remains the technically superior solution to the problem, since it is a very portable and flexible way to ensure networking works in spite of the proprietary NPE code.

But, in practice, the best option I could implement at the time was option 3. I initially planned to have Apex pre-fill my desired commands into the RedBoot buffer that stores the keystrokes typed by the user:

load -r -b 0x200000 -h 192.168.0.2 netbsd-nfs.bin
g
Since this was the first time ever that I was going to do non-trivial reverse engineering in order to find the addresses and signatures of interesting functions in the RedBoot code, it wasn't bad at all that I had a version of the RedBoot source code.

When stuck with reverse engineering, apply JTAG


The bad thing was that the code Linksys published as the source of the RedBoot running inside the NSLU2 was, in fact, different code, with some significant changes around the pieces I was most interested in. That, in spite of the GPL terms.

But I thought that I could manage. After all, how hard could it be to identify the 2-3 functions I was interested in and 1 buffer? Even if I only had the disassembled code from the slug, it shouldn't be that hard.

I struggled with this for about 2-3 weeks, on the few occasions I had during that time, but the excitement of learning something new kept me going. Until I got stuck somewhere between the misalignment of the published RedBoot code and the disassembled code, the state of the system at the time of dumping the contents from RAM (for purposes of disassembly), the assembly code generated by GCC for some specific C code I didn't have at all, and the particularities of ARM assembly.

What was most likely to unblock me was to actually see the code in action, so I decided that attaching a JTAG dongle to the slug and doing a session of in-circuit debugging was in order.

Luckily, the pinout of the JTAG interface was already identified in the NSLU2 Linux project, so I only had to solder some wires to the specified places and a 2x20 header to be able to connect through JTAG to the board.


JTAG connections on Kinder (the NSLU2 targeting NetBSD)

After this was done, I immediately tried to see whether, using a JTAG debugger, I could break the execution of the code on the system. The answer was, sadly, no.

The chip was identified, but breaking the execution was not happening. I tried this with OpenOCD and with another, proprietary debugger application I had access to, and the result was the same: breaking was not happening.
$ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg
Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)
Licensed under GNU GPL v2
For bug reports, read
    http://openocd.sourceforge.net/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'jtag'
adapter speed: 300 kHz
Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints
0
Info : clock speed 300 kHz
Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009,
part: 0x9277, ver: 0x2)
[..]

$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target was in unknown state when halt was requested
in procedure 'halt'
> poll
background polling: on
TAP: ixp42x.cpu (enabled)
target state: unknown
Looking into the documentation, I found a bit of information on the XScale processors[X] which suggested that XScale processors might need the (otherwise optional) SRST signal on the JTAG interface in order to single step the chip.

This confused me a lot since I was sure other people had already used JTAG on the NSLU2.

The options I saw at the time were:
  1. my NSLU2 did not have a fully working JTAG interface (either due to the missing SRST signal on the interface, or maybe due to a JTAG lock on later-generation NSLU2s, as my second slug was)
  2. nobody had ever single stepped the slug using OpenOCD or any other JTAG debugger; they only reflashed, and I was on totally new ground
I even contacted Rod Whitby, the project leader of the NSLU2 project to try to confirm single stepping was done before. Rod told me he never did that and he only reflashed the device.

This confused me even further because, from what I had encountered on other platforms, in order to flash some device, the code responsible for programming the flash is loaded into the RAM of the target microcontroller, a RAM buffer with the data to be flashed is preloaded via JTAG, and then that code is executed on the target; the operation is repeated for all flash blocks to be reprogrammed.

I was aware it was possible to program a flash chip situated on the board, outside the chip, by only playing with the chip's pads, strictly via JTAG, but I was still hoping single stepping the execution of the code in RedBoot was possible.

Guided by that hope, and by the possibility that the newer versions of the device were locked, I decided to add a JTAG interface to my older NSLU2, too. This time I would also add the TRST and SRST signals to the JTAG interface, just in case that made single stepping work.

This mod involved even more extensive changes than the ones done on the other NSLU2, but I was so frustrated by being stuck that I didn't mind poking a few holes through the case, nor the prospect of a connector permanently sticking out of this NSLU2, which was doing some small yet useful work in my home LAN.

It turns out NOBODY single stepped the NSLU2

 

After biting the bullet and soldering a JTAG interface with the TRST and SRST signals also connected, as the pinout page from the NSLU2 Linux wiki suggested, I was disappointed to observe that I was not able to single step the older NSLU2 either, in spite of the presence of the extra signals.

I even tinkered with the reset configurations of OpenOCD, but had no success. After obtaining the same result with the proprietary debugger, and after digging through a presentation made by Rod back in the heyday of the project and through the conversations on the NSLU2 Linux Yahoo mailing list, I finally concluded:
Actually nobody single stepped the NSLU2, no matter the version of the NSLU2 or connections available on the JTAG interface!
So I was back to square one: I had to either struggle with the disassembly, reevaluate my initial options, find another option, or drop the idea entirely. At that point I was already committed to the project, so dropping the idea entirely didn't seem like the reasonable thing to do.

Since I felt I was really close to finishing on the route I had chosen a while ago, was not significantly more knowledgeable about the NetBSD code, and looking at the NPE code made me feel like washing my hands, the only option that seemed reasonable was to go on.

Digging a lot more through the internet, I was finally able to find another version of the RedBoot source which was modified for Intel ixp42x systems. A few checks here and there revealed this newly found code was actually almost identical to the code I had disassembled from the slug I was aiming to run NetBSD on. This was a huge step forward.

Long story short, a couple of days later I had a hacked Apex that could go through the RedBoot data structures, search for available commands in RedBoot and successfully call any of the built-in RedBoot commands!

Loading this modified Apex by hand into RAM via TFTP, then jumping into it to see whether things worked as expected, revealed a few small issues, which I corrected right away.

Flashing a modified RedBoot?! But why? Wasn't Apex supposed to avoid exactly that risky operation?


Since the tests when executing from RAM were successful, my custom second stage Apex bootloader for NetBSD net booting was ready to be flashed into the NSLU2.

I added two more targets to the Makefile on the dedicated netbsd branch of my Apex repository to generate images ready for flashing into the NSLU2 flash (RedBoot needs to find a Sercomm header in flash, otherwise it will crash); the exact commands to be executed in RedBoot are also printed out after generation. This way, if the commands are copy-pasted, there is no risk of bricking the NSLU2 by mistake.

After some flashing and reflashing of the apex_nslu2.flash image into the NSLU2 flash, some manual testing, tweaking and modification of the default built-in Apex commands, and checking that the command sequence 'move', 'go 0x01d00000' would jump into Apex (which, in turn, would call RedBoot to transfer the netbsd-nfs.bin image from TFTP to RAM and then execute it successfully), it was high time to check that NetBSD would boot automatically after the NSLU2 was powered on.

It didn't. Contrary to my previous tests, no call made from Apex to the RedBoot code would return back to Apex, not even the execution of a basic command such as the 'version' command.

It turns out the default commands hardcoded into RedBoot were 'boot; exec 0x01d00000', but I had tested 'boot; go 0x01d00000', which is not the same thing.

While 'go' does a plain jump to the specified address, the 'exec' command also does some preparations to allow a jump into the Linux kernel, and those preparations break some environment the RedBoot commands expect. I don't know exactly what breaks, and didn't have the mood or motivation to find out.

So the easiest solution was to change RedBoot's built-in command and turn that 'exec' into a 'go'. But that meant that this time I was actually risking bricking the NSLU2, unless I was able to reflash it via JTAG.


(to be continued - next, changing RedBoot and bisecting through the NetBSD history)

[X] The Linksys NSLU2 has an XScale IXP420 processor, which is compatible at the ASM level with the ARMv5TEJ instruction set

23 May, 2015 03:15PM by eddyp (noreply@blogger.com)

HOWTO: No SSH logins SFTP only chrooted server configuration with OpenSSH

If you are in a situation where you want to set up an SFTP server in a more secure way, don't want to expose anything from the server via SFTP, and do not want to enable SSH login on the account allowed to SFTP, you might find the information below useful.

What do we want to achieve:
  • SFTP server
  • only a specified account is allowed to connect to SFTP
  • nothing outside the SFTP directory is exposed
  • no SSH login is allowed
  • any extra security measures are welcome
To obtain all of the above, we will create a dedicated account which will be chroot-ed; its home will be stored on a removable/not-always-mounted drive (accessing SFTP will not work when the drive is not mounted).

Mount the removable drive which will hold the SFTP area (you might need to add some entry in fstab). 
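
A hypothetical fstab entry could look like the following; the UUID and filesystem type are placeholders, and 'noauto' keeps the drive unmounted until you mount it explicitly:

UUID=0123-4567-89ab-cdef /media/Store ext4 defaults,noauto 0 2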

Create the account to be used for SFTP access (on a Debian system this will do the trick):
# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp

This creates the account sftp, with login disabled (its shell is /usr/sbin/nologin), and creates the home directory for this user.

Unfortunately the default ownership of this user's home directory is incompatible with SFTP chroot-ing (which is what prevents access to other files on the server). In such a case a message like the one below will be generated:
$ sftp -v sftp@localhost
[..]
sftp@localhost's password:
debug1: Authentication succeeded (password).
Authenticated to localhost ([::1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer
Also /var/log/auth.log will contain something like this:
fatal: bad ownership or modes for chroot directory "/media/Store/sftp"

The default permissions are visible using the 'namei -l' command on the sftp home directory:
# namei -l /media/Store/sftp
f: /media/Store/sftp
drwxr-xr-x root root    /
drwxr-xr-x root root    media
drwxr-xr-x root root    Store
drwxr-xr-x sftp nogroup sftp
We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:
# chown root:root /media/Store/sftp
# mkdir /media/Store/sftp/upload
# chown sftp /media/Store/sftp/upload

We isolate the sftp users from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:
# addgroup sftpusers
# adduser sftp sftpusers
Set a password for the sftp user so password authentication works:
# passwd sftp
Putting all the pieces together, we restrict access to the sftp user only, allow it password authentication for SFTP but not SSH login, and disallow tunneling, forwarding and empty passwords.

Here are the changes done in /etc/ssh/sshd_config:
PermitEmptyPasswords no
PasswordAuthentication yes
AllowUsers sftp
Subsystem sftp internal-sftp
Match Group sftpusers
        ChrootDirectory %h
        ForceCommand internal-sftp
        X11Forwarding no
        AllowTcpForwarding no
        PermitTunnel no
Reload the sshd configuration (I'm using systemd):
# systemctl reload ssh.service
Check that the sftp user can't log in via SSH:
$ ssh sftp@localhost
sftp@localhost's password:
This service allows sftp connections only.
Connection to localhost closed.
But SFTP is working and is restricted to the SFTP area:
$ sftp sftp@localhost
sftp@localhost's password:
Connected to localhost.
sftp> ls
upload 
sftp> pwd
Remote working directory: /
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /netbsd-nfs.bin
remote open("/netbsd-nfs.bin"): Permission denied
sftp> cd upload
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin
netbsd-nfs.bin                                                              100% 3111KB   3.0MB/s   00:00
Now your system is ready to accept SFTP connections, things can be uploaded into the upload directory, and whenever the external drive is unmounted, SFTP will NOT work.

Note: Since we added 'AllowUsers sftp', you can verify that no other local user can log in via SSH. If you don't want to restrict access to the sftp user only, you can whitelist other users by adding them to the AllowUsers directive, or drop it entirely so all local users can SSH into the system.

23 May, 2015 02:44PM by eddyp (noreply@blogger.com)

DebConf team

Second Call for Proposals and Approved Talks for DebConf15 (Posted by DebConf Content Team)

DebConf15 will be held in Heidelberg, Germany from the 15th to the 22nd of August, 2015. The clock is ticking and our annual conference is approaching. There are less than three months to go, and the Call for Proposals period closes in only a few weeks.

This year, we are encouraging people to submit “half-length” 20-minute events, to allow attendees to have a broader view of the many things that go on in the project in the limited amount of time that we have.

To make sure that your proposal is part of the official DebConf schedule you should submit it before June 15th.

If you have already sent your proposal, please log in to summit and make sure to improve your description and title. This will help us fit the talks into tracks, and devise a cohesive schedule.

For more details on how to submit a proposal see: http://debconf15.debconf.org/proposals.xhtml.

Approved Talks

We have processed the proposals submitted up to now, and we are proud to announce the first batch of approved talks. Some of them:

  • This APT has Super Cow Powers (David Kalnischkies)
  • AppStream, Limba, XdgApp: Past, present and future (Matthias Klumpp)
  • Onwards to Stretch (and other items from the Release Team) (Niels Thykier for the Release Team)
  • GnuPG in Debian report (Daniel Kahn Gillmor)
  • Stretching out for trustworthy reproducible builds - creating bit by bit identical binaries (Holger Levsen & Lunar)
  • Debian sysadmin (and infrastructure) from an outsider/newcomer perspective (Donald Norwood)
  • The Debian Long Term Support Team: Past, Present and Future (Raphaël Hertzog & Holger Levsen)

If you have already submitted your event and haven’t heard from us yet, don’t panic! We will contact you shortly.

We would really like to hear about new ideas, teams and projects related to Debian, so do not hesitate to submit yours.

See you in Heidelberg,
DebConf Team

23 May, 2015 02:36PM by DebConf Organizers

Francois Marier

Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests

apt-get install memtest86+ smartmontools e2fsprogs

Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.

To check memory, I boot into memtest86+ from the grub menu and let it run overnight.

Then I check the hard drives using:

smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration

apt-get install etckeeper git sudo vim

To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:

  • turn off daily auto-commits
  • turn off auto-commits before package installs

To get more control over the various packages I install, I change the default debconf level to medium:

dpkg-reconfigure debconf

Since I use vim for all of my configuration file editing, I make it the default editor:

update-alternatives --config editor

ssh

apt-get install openssh-server mosh fail2ban

Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.

I also ensure that the locale I use is available on the server by adding it to the list of generated locales:

dpkg-reconfigure locales

Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

UsePrivilegeSeparation sandbox

AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no

AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser

or the following for wheezy servers:

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256

On those servers where I need duplicity/paramiko to work, I also add the following:

KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1

Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.

I also create a new group and add the users that need ssh access to it:

addgroup sshuser
adduser francois sshuser

and add a timeout for root sessions by putting this in /root/.bash_profile:

TMOUT=600

Security checks

apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper
apt-get remove john john-data rpcbind tripwire

Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log

while ensuring that the apache logfiles are readable by logcheck:

chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*

and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:

create 644 root adm

I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):

INTRO=0
FQDN=0

Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:

Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'

General hardening

apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra

While the harden packages are configuration-free, AppArmor must be manually enabled:

perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub

Entropy and timekeeping

apt-get install haveged rng-tools ntp

To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.
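
That last step is a one-liner (tpm_rng assumes the machine has a TPM whose RNG you want to feed the entropy pool):

# load the TPM RNG driver at every boot
echo tpm_rng >> /etc/modules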

Preventing mistakes

apt-get install molly-guard safe-rm sl

The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates

apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest

These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.

In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities

apt-get install renameutils atool iotop sysstat lsof mtr-tiny

Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration

apt-get install apache2-mpm-event

While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.

I enable these in /etc/apache2/conf.d/security:

<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off

and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.

I also create a new /etc/apache2/conf.d/servername which contains:

ServerName machine_hostname

Mail

apt-get install postfix

Configuring mail properly is tricky but the following has worked for me.

In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.

Change the following in /etc/postfix/main.cf:

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3

Set the following aliases in /etc/aliases:

  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails

before running newaliases to update the aliases database.

Create a new cronjob (/etc/cron.hourly/checkmail):

#!/bin/sh
ls /var/mail

to ensure that email doesn't accumulate unmonitored on this box.

Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.

Network tuning

To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:

net.core.default_qdisc=fq_codel

23 May, 2015 09:00AM

May 22, 2015

Michal Čihař

Weblate 2.3

Weblate 2.3 has been released today. It comes with better features for project owners, better file format support and more configuration options for users.

Full list of changes for 2.3:

  • Dropped support for Django 1.6 and South migrations.
  • Support for adding new translations when using Java Property files.
  • Allow to accept suggestion without editing.
  • Improved support for Google OAuth2.
  • Added support for Microsoft .resx files.
  • Tuned default robots.txt to disallow big crawling of translations.
  • Simplified workflow for accepting suggestions.
  • Added project owners who always receive important notifications.
  • Allow to disable editing of monolingual template.
  • More detailed repository status view.
  • Direct link for editing template when changing translation.
  • Allow to add more permissions to project owners.
  • Allow to show secondary language in zen mode.
  • Support for hiding source string in favor of secondary language.

You can find more information about Weblate on http://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user.

Weblate is also being used at https://hosted.weblate.org/ as the official translation service for phpMyAdmin, Gammu, Weblate itself and other projects.

If you are a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

PS: The roadmap for the next release is being prepared; you can influence it by expressing support for individual issues, either with comments or by providing bounties for them.

22 May, 2015 08:00AM by Michal Čihař (michal@cihar.com)

May 21, 2015

Dirk Eddelbuettel

BH release 1.58.0-1

A new release of BH is now on CRAN. BH provides a large part of the Boost C++ libraries as a set of template headers for use by R and Rcpp.

This release both upgrades the version of Boost to the current release, and adds a new library: Boost MultiPrecision.

A brief summary of changes from the NEWS file is below.

Changes in version 1.58.0-1 (2015-05-21)

  • Upgraded to Boost 1.58 installed directly from upstream source

  • Added Boost MultiPrecision as requested in GH ticket #12 based on rcpp-devel request by Jordi Molins Coronado

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 May, 2015 10:50PM

Yves-Alexis Perez

Followup on Debian grsec kernels for Jessie

So, following the previous post, I've indeed updated the way I'm making my grsec kernels.

I wanted to upgrade my server to Jessie, and didn't want to keep the 3.2 kernel indefinitely, so I had to update to at least 3.14, and find something to make my life (and maybe some others') easier.

In the end, as planned, I've switched to the make deb-pkg way, using some scripts here and there to simplify stuff.

The scripts and configs can be found in my debian-grsec-config repository. The repository layout is pretty much self-explanatory:

The bin/ folder contains two scripts:

  • get-grsec.sh, which will pick the latest grsec patch (for each branch) and apply it to the correct Linux branch. This script should be run from a git clone of the linux-stable git repository;
  • kconfig.py, which is taken from the src:linux Debian package and can be used to merge multiple Kconfig files.

The configs/ folder contains the various configuration bits:

  • config-* files are the Debian configuration files, taken from the linux-image binary packages (for amd64 and i386);
  • grsec* are the grsecurity specifics bits (obviously);
  • hardening* contain non-grsec stuff still useful for hardened kernels, for example KASLR (cargo-culting notwithstanding) or strong SSP (available since I'm building the kernels on a sid box, YMMV).

I'm currently building amd64 kernels for Jessie (i386 kernels will follow soon), using config-3.14 + hardening + grsec. I'm hosting them on my apt repository. You're obviously free to use them, but considering how easy it is to rebuild a kernel, you might want to use a personal configuration (instead of mine) and rebuild the kernel yourself, so you don't have to trust my binary packages.

Here's a very quick howto (adapt it to your needs):

mkdir linux-grsec && cd linux-grsec
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git clone git://anonscm.debian.org/users/corsac/grsec/debian-grsec-config.git
mkdir build
cd linux-stable
../debian-grsec-config/bin/get-grsec.sh stable2 # for 3.14 branch
../debian-grsec-config/bin/kconfig.py ../build/.config ../debian-grsec-config/configs/config-3.14-2-amd64 ../debian-grsec-config/configs/hardening ../debian-grsec-config/configs/grsec
make KBUILD_OUTPUT=../build -j4 oldconfig
make KBUILD_OUTPUT=../build -j4 deb-pkg

Then you can use the generated Debian binary packages. If you use the Debian config, it'll need a lot of disk space for compilation and generate a huge linux-image debug package, so you might want to unset CONFIG_DEBUG_INFO locally if you're not interested. Right now only the deb files are generated, but I've submitted a patch to have a .changes file which can then be used to manipulate them more easily (for example for uploading them to a local Debian repository).
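
For the CONFIG_DEBUG_INFO part, a quick sketch using the kernel's own scripts/config helper (run from the linux-stable checkout; the paths match the howto above):

./scripts/config --file ../build/.config --disable DEBUG_INFO
make KBUILD_OUTPUT=../build -j4 oldconfig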

Note that, obviously, this is not targeted for inclusion to the official Debian archive. This is still not possible for various reasons explained here and there, and I still don't have a solution for that.

I hope this (either the scripts and config or the generated binary packages) can be useful. Don't hesitate to drop me a mail if needed.

21 May, 2015 08:36PM by Yves-Alexis (corsac@debian.org)

Jonathan McDowell

I should really learn systemd

As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing, whenever that was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.

Today I discovered systemctl is-system-running. I’m not sure why I’d use it, but when I ran it, it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.

# systemctl --state=failed
  UNIT                         LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load which complained about various printer modules not being able to be loaded. Turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because there’d been some filesystems that hadn’t mounted properly on boot - my fault rather than systemd). Fixed that up and then systemctl is-system-running started returning a nice clean running.
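
In case someone hits the same thing, here's a minimal sketch of the cleanup, assuming the drop-in lists the usual parallel-port printer modules (lp, ppdev and parport_pc; check your file first):

# comment the modules out; systemd-modules-load ignores lines starting with '#'
sed -i -e 's/^\(lp\|ppdev\|parport_pc\)$/#\1/' /etc/modules-load.d/cups-filters.conf
systemctl restart systemd-modules-load.service
systemctl is-system-running   # should report 'running' again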

Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me clean up my system, and sets me in better stead for when something important fails.

21 May, 2015 05:20PM

Michal Čihař

Translating Sphinx documentation

A few days ago I started writing the Odorik module for working with the API of a Czech mobile network operator. As usual, the code comes with documentation written in English. Given that the vast majority of its users are Czech, it seemed useful to have the documentation in Czech as well.

The documentation itself is written in Sphinx and built using Read the Docs. Using those to translate the documentation is quite easy.

The first step is to add the necessary configuration to the Sphinx project, as described in their Internationalization Quick Guide. It's a matter of a few configuration directives and invoking sphinx-intl; the result can look like this commit.
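
For reference, a minimal sketch of those steps (the source and build paths, and the 'cs' language code, are assumptions for this example):

# in conf.py: locale_dirs = ['locale/'] and gettext_compact = False
sphinx-build -b gettext . _build/gettext    # extract translatable messages into .pot files
sphinx-intl update -p _build/gettext -l cs  # create/update the Czech .po catalogs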

Once the code in the repository is ready, you can start building the translated documentation on Read the Docs. There is a nice guide for that as well. All you need to do is create another project, set its language, and link it from the master project as a translation.

The last step is to find some translators to actually translate the document. For me the obvious choice was Weblate, so the translation is now on Hosted Weblate. The mass import of several po files can be done with the import_project management command.

And thanks to all of this, you can now read the Czech documentation for the Python Odorik module.

21 May, 2015 04:00PM by Michal Čihař (michal@cihar.com)

Dirk Eddelbuettel

RInside 0.2.13

A new release 0.2.13 of RInside is now on CRAN. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release works around a bug in R 3.2.0, which has since been addressed in R 3.2.0-patched. The NEWS extract below has more details.

Changes in RInside version 0.2.13 (2015-05-20)

  • Added workaround for a bug in R 3.2.0: by including the file RInterface.h only once, we do not get linker errors due to multiple definitions of R_running_as_main_program (which is now addressed in R-patched as well).

  • Small improvements to the Travis CI script.

CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 May, 2015 01:41AM

May 20, 2015

Hideki Yamane

What is the most valuable challenge for Debian in this Stretch cycle?



  • restructure the website in a 21st-century style and drop deprecated info
  • automate test integration into the infrastructure: piuparts & ci
  • more tests for packages: adopt autopkgtest
  • a "go/no-go vote" for migration to the next release candidate (e.g. Bodhi in Fedora)
  • more comfortable collaborative development (see GitHub's pull-request style, not mail-centric)
  • other
Your opinion?

20 May, 2015 02:48PM by Hideki Yamane (noreply@blogger.com)

Martin-Éric Racine

xf86-video-geode 2.11.17

This morning, I pushed out version 2.11.17 of the Geode X.Org driver. This is the driver used by the OLPC XO-1 and by a plethora of low-power desktops, micro notebooks and thin clients. This is a minor release. It merges conditional support for the OpenBSD MSR device (Marc Ballmer, Matthieu Herrb), fixes a condition that prevents compiling on some embedded platforms (Brian A. Lloyd) and upgrades the code for X server 1.17 compatibility (Maarten Lankhorst).


Pending issues:

  • toggle COM2 into DDC probing mode during driver initialization
  • reset the DAC chip when exiting X and returning to vcons
  • fix a rendering corner case with Libre Office

20 May, 2015 09:46AM by Martin-Éric (noreply@blogger.com)

Enrico Zini

love-thy-neighbor

Love thy neighbor as thyself

‘Love thy neighbor as thyself’, words which astoundingly occur already in the Old Testament.

One can love one’s neighbor less than one loves oneself; one is then the egoist, the racketeer, the capitalist, the bourgeois. And although one may accumulate money and power one does not of necessity have a joyful heart, and the best and most attractive pleasures of the soul are blocked.

Or one can love one’s neighbor more than oneself—then one is a poor devil, full of inferiority complexes, with a longing to love everything and still full of hate and torment towards oneself, living in a hell of which one lays the fire every day anew.

But the equilibrium of love, the capacity to love without being indebted to anyone, is the love of oneself which is not taken away from any other, this love of one’s neighbor which does no harm to the self.

(From Herman Hesse, "My Belief")

I always have a hard time finding this quote on the Internet. Let's fix that.

20 May, 2015 09:35AM

Rhonda D'Vine

Berge

I wrote well over one year ago about Earthlings. It really did have some impact on my life. Nowadays I try to avoid animal products where possible, especially for my food. And in the context of vegan information that I tracked I stumbled upon a great band from Germany: Berge. They recently started a deal with their record label which says that if they receive one million clicks within the next two weeks on their song 10.000 Tränen their record label is going to donate 10.000,- euros to a German animal rights organization. Reason enough for me to share this band with you! :)
(For those who are puzzled by the original upload date of the video: Don't let yourself get confused, the call for it is from this Monday.)

  • 10.000 Tränen: This is the song that needs the views. It's a nice tune with great lyrics to think about. Even though it's in German, it has English subtitles. :)
  • Schauen was passiert: In the light of 10.000 Tränen it was hard for me to select other songs, but this one sounds nice. "Let's see what happens". :)
  • Meer aus Farben: I love colors. And I hate the fact that most conference shirts are black only. Or that it seems to be impossible to find colorful clothes and shoes for tall women.

Like always, enjoy!

20 May, 2015 09:21AM by Rhonda

Norbert Preining

Shishiodoshi or Us and They – On the perceived exclusivity of Japanese

The other day I received from my Japanese teacher an interesting article by Yamazaki Masakazu 山崎正和 comparing garden styles, and in particular the attitude towards and presentation of water in Japanese and European gardens (page 1, page 2). The author’s list of achievements is long: various professorships, dramatist, literary critic, recognition as a Person of Cultural Merit, just to name a few. I was looking forward to an interesting and high quality article!

The article itself introduces the reader to 鹿おどし Shishiodoshi, one of the standard ingredients of a Japanese garden: It is a device where water drips into a bamboo tube that is also a seesaw. At some point the water in the bamboo tube makes the seesaw switch over and the water pours out, after which the seesaw returns to the original position and a new cycle begins.

Shishiodoshi

The author describes his feelings and thoughts about the shishiodoshi, in particular connecting human life (stress and relief, cycles), the flow of time, and some other concepts with it. Up to here it is a wonderful article providing interesting insights into the way the author thinks. Unfortunately, the author then tries to underline his ideas by comparing the Japanese shishiodoshi with European-style water fountains, describing the former as having all the favorable properties and being full of deep meaning, while the latter is qualified as beautiful and nice, but devoid of any deeper meaning.

I won’t go into details about how the whole comparison is a bad one anyway: he is comparing against Baroque-style fountains, a very limited period, ignores the fact that water fountains are not genuinely European (isn’t there one in the Kenrokuen, one of the three most famous gardens in Japan!?), and does not consider any other water installation that might be available. What really spoils the in-principle very nice article is the tone:

The general tone of the article can then be summarized as: “The shishiodoshi is rich in meaning, connects to the life of humans, instigates philosophical reflections, represents nature, the flow of time, etc. The water fountain is beautiful and gorgeous, but that is all.”

I don’t think that this separation, or this negative undertone, was created on purpose by the author. A person of his stature is supposedly above this level of primitive comparison. I believe that it is nothing else but a consequence of upbringing and the general attitude that permeates the whole society with this feeling of separateness.

Us and They

The article repeatedly offers sentences like “Japanese people and Western people have different tastes...” (日本人は西洋人と違った独特の好みを持っていたのである). About 10 times in this short article expressions like “Japanese” and “Westerner” appear, leaving the reader with the bitter taste of an author who considers, first, the Japanese a single people (what about the Ainu, Ryukyu, etc.?), and second, the Japanese exclusive in the sense that they are set apart from the rest of the world in their way of thinking, living, being.

What puzzles me is that this is not a singular opinion, but a very general strain in the Japanese media: TV, radio, newspapers, books. Everyone considers “Japan” and the “Japanese” as something fundamentally and profoundly different from everything else in the world.

There is “We – the Japanese” (and that doesn’t mean nationality by passport, but blood line!), and there are “They – the Rest” or, as the way of writing and description on many occasions suggests, “They – the Barbarians”.

A short anecdote will underline this: one favorite TV talk show / pseudo-documentary format is about Japanese living abroad. That day a lady married in Paris was interviewed. What followed was a guided tour through Paris showing: dirt in the gutter, street cleaning cars, waste disposal places. Yes, that was all. Just about the “dirt”. Of course, at length the (unfortunately only apparent) cleanliness of Japanese cities and neighborhoods is mentioned and shown, to remind everyone how wonderful Japan is and how dirty the Barbarians are. I don’t want to say that I consider Japan dirtier than most other countries – just the visible part is clean; the moment you step a bit aside and around the corner, there is the worst trash just thrown away without consideration. Anyway.

To return to the topic of “Us and They”: I consider us all humans, first and foremost; nationality, birthplace, and all that are just chance. I do NOT reject cultural differences, they are there, of course. But cultural differences are one thing; separating oneself and one's perceived people from the rest of the world is another.

Conclusion

I repeat, I don't think that the author had any ill intentions, but it would have been nicer if the article didn't draw such a stark distinction. He could have written about the shishiodoshi and water fountains without using the “Us – They” categorization. He could have compared other water installations, or discussed the long tradition of small ponds in European gardens, just to name a few possibilities. But the author chose to highlight differences instead of commonalities.

It is this “Us against Them” feeling that often makes life in Japan difficult for a foreigner. The Japanese are not special; Austrians, too, are not special, nor are Americans, Russians, Tibetans, or any other nationality. No nationality is special, we are all humans. Maybe at some point this will arrive in Japanese society and thinking, too.


20 May, 2015 01:01AM by Norbert Preining

May 19, 2015

hackergotchi for Gunnar Wolf

Gunnar Wolf

Feeling somewhat special

Today I feel more special than I have ever felt.

Or... Well, or something like that.

Thing is, there is no clear adjective for this. But I successfully finished my Specialization degree! Yes, believe it or not, today I can formally say I am a Specialist in Informatic Security and Information Technologies (Especialista en Seguridad Informática y Tecnologías de la Información), as awarded by the Higher School of Electric and Mechanic Engineering (Escuela Superior de Ingeniería Mecánica y Eléctrica) of the National Polytechnical Institute (Instituto Politécnico Nacional).

In Mexico and most Latin American countries, degrees are usually incorporated into your name as if they were a nobiliary title. Thus, when graduating from Engineering studies (undergraduate university level), I became "Ingeniero Gunnar Wolf". People graduating from further postgraduate programs get to introduce themselves as "Maestro Foobar Baz" or "Doctor Quux Noox". And yes, a Specialization is a small postgraduate program (I often say, the smallest possible postgraduate). And as a Specialist... What can I brag about? Can I say I am Specially Gunnar Wolf? Or Special Gunnar Wolf? Nope. The honorific title for a Specialization is a pointer to null, and when cast into a char* it might corrupt your honor-recognizing function. So I'm still Ingeniero Gunnar Wolf, for information security reasons.

So that's the reason I am now enrolled in the Masters program. I hope to write an addendum to this message soonish (where soonish ≥ 18 months) saying I'm finally a Maestro.

As a sidenote, many people asked me: why did I take on the Specialization, a degree too small for most kinds of real work recognition? Because it's been around twenty years since I last attended a long-term scholarly program as a student, and my dish is quite full with activities and responsibilities. I decided to take a short program, designed for 12 months (I graduated in 16, minus two months that the university was on strike... Quite good, I'd say ;-) ), to see how I fared, and only later jump on the full version.

Because, yes, to advance my career at the university, I finally recognized and understood that I do need postgraduate studies.

Oh, and what kind of work did I do for this? Besides the classes I took, I wrote a thesis on a model for evaluating covert channels for establishing secure communications.

19 May, 2015 11:36PM by gwolf

hackergotchi for Lars Wirzenius

Lars Wirzenius

Software development estimation

Acceptable estimations for software development:

  • Almost certainly doable in less than a day.
  • Probably doable in less than a day, almost certainly not going to take more than three days.
  • Probably doable in less than a week, but who knows?
  • Certainly going to take longer than a week, and nobody can say how long, but if you press me, the estimate is between two weeks and four months.

Reality prevents better accuracy.

19 May, 2015 07:50PM

Thorsten Alteholz

alpine and UTF-8 and Debian lists

This is a note for my future self: when writing an email that contains only US-ASCII characters, alpine creates a message with:

Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII

and everything is fine.

If there are UTF-8 characters in the text, alpine creates something like:

Content-Type: MULTIPART/MIXED; BOUNDARY="705298698-1667814148-1432049085=:28313"

and the only available part contains:

Content-Type: TEXT/PLAIN; format=flowed; charset=UTF-8
Content-Transfer-Encoding: 8BIT

Google tells me that the reason for this is:

Alpine uses a single part MULTIPART/MIXED to apply a protection wrapper around QUOTED-PRINTABLE and BASE64 content to prevent it from being corrupted by various mail delivery systems that append little (typically advertising) things at the end of the message.

Ok, this behavior might come from bad experiences and it seems to work most of the time. Unfortunately, if one sends a signed email to a Debian list that checks whether the signature is valid (like, for example, debian-lts-announce), such an email will be rejected with:

Failed to understand the email or find a signature: UDFormatError:
Cannot handle multipart messages not of type multipart/signed

*sigh*
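If you want to see what alpine actually produced before a list bounces it, you can inspect the top-level Content-Type of a saved copy of the message. A quick sketch (message.eml is a stand-in for wherever your Fcc or postponed copy ends up):

# the first Content-Type header in the file is the top-level one;
# MULTIPART/MIXED here means the list software will reject the mail
grep -i -m1 '^Content-Type:' message.eml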

19 May, 2015 03:47PM by alteholz

Simon Josefsson

Scrypt in IETF

Colin Percival and I have worked on an internet-draft on scrypt for some time. I realize now that the -00 draft was published over two years ago, turning this effort today somewhat into archeology rather than rocket science. Still, having a published RFC that is easy to refer to from other Internet protocols will hopefully help to establish the point that PBKDF2 alone no longer provides state-of-the-art protection for password hashing.

I have written about password hashing before, where I give a quick introduction to the basic concepts in the context of the well-known PBKDF2 algorithm. The novelty in scrypt is that it is designed to combat brute force and hardware accelerated attacks on hashed password databases. Briefly, scrypt expands the password and salt (using PBKDF2 as a component) and then uses that to create a large array (typically tens or hundreds of megabytes) with the Salsa20 core hash function, which it then de-references in a sequential-write, pseudorandom-read pattern. There are three parameters to the scrypt function: a CPU/memory cost parameter N (varies, typical values are 16384 or 1048576), a blocksize parameter r (typically 8), and a parallelization parameter p (typically a low number like 1 or 16). The process is described in the draft, and there are further discussions in Colin’s original scrypt paper.
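To get a feeling for the memory cost: using the well-known approximation that scrypt's large array occupies about 128 · r · N bytes (my back-of-the-envelope figure, not a number from the draft), N=16384 with r=8 works out to 128 × 8 × 16384 bytes = 16 MiB, and N=1048576 with r=8 to about 1 GiB. The parallelization parameter p then scales the total amount of work on top of that.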

The document has been stable for some time, and we are now asking for it to be published. Thus now is a good time to provide us with feedback on the document. The live document on gitlab is available if you want to send us a patch.

19 May, 2015 12:55PM by simon

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Lenovo Yoga 2 13 with Debian

I recently acquired a Lenovo Yoga 2 13. While the Yoga 3 was available at the time, I decided to go for the Yoga 2 13. The Yoga 3 comes with the newer Broadwell Core M family, which, in my opinion, doesn't really bring any astounding benefits.

The Yoga 2 13 comes in multiple variants worldwide. In fact, these hardware variations have different effects when run under Linux.

My variant of the Yoga 2 13 is:

CPU: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz

RAM: 8 GiB - Occupying 2 slots

Memory Controller Information
        Supported Interleave: One-way Interleave
        Current Interleave: One-way Interleave
        Maximum Memory Module Size: 8192 MB
        Maximum Total Memory Size: 16384 MB
       
Handle 0x0006, DMI type 6, 12 bytes
Handle 0x0007, DMI type 6, 12 bytes

The usual PCI devices:

rrs@learner:~$ lspci
00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 0b)
00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 0b)
00:14.0 USB controller: Intel Corporation 8 Series USB xHCI HC (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series HECI #0 (rev 04)
00:1b.0 Audio device: Intel Corporation 8 Series HD Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series PCI Express Root Port 4 (rev e4)
00:1d.0 USB controller: Intel Corporation 8 Series USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation 8 Series LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series SMBus Controller (rev 04)
01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter
17:37 ♒♒♒   ☺    

And the storage devices

Device Model:     WDC WD5000M22K-24Z1LT0-SSHD-16GB

Device Model:     KINGSTON SM2280S3120G


Storage

The drive runs into serious performance problems when the SSHD's NCQ (mis)feature is in use on Linux <= 4.0.

[28974.232550] ata2.00: configured for UDMA/133
[28974.232565] ahci 0000:00:1f.2: port does not support device sleep
[28983.680955] ata1.00: exception Emask 0x10 SAct 0x7fffffff SErr 0x400100 action 0x6 frozen
[28983.681000] ata1.00: irq_stat 0x08000000, interface fatal error
[28983.681027] ata1: SError: { UnrecovData Handshk }
[28983.681052] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.681082] ata1.00: cmd 61/40:00:b8:84:88/05:00:0a:00:00/40 tag 0 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.681152] ata1.00: status: { DRDY }
[28983.681171] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.681202] ata1.00: cmd 61/40:08:f8:89:88/05:00:0a:00:00/40 tag 1 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.681271] ata1.00: status: { DRDY }
[28983.681289] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.681316] ata1.00: cmd 61/40:10:38:8f:88/05:00:0a:00:00/40 tag 2 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.681387] ata1.00: status: { DRDY }
[28983.681407] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.681435] ata1.00: cmd 61/40:18:78:94:88/05:00:0a:00:00/40 tag 3 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697642] ata1.00: status: { DRDY }
[28983.697643] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697646] ata1.00: cmd 61/40:c8:38:65:88/05:00:0a:00:00/40 tag 25 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697647] ata1.00: status: { DRDY }
[28983.697648] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697651] ata1.00: cmd 61/40:d0:78:6a:88/05:00:0a:00:00/40 tag 26 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697651] ata1.00: status: { DRDY }
[28983.697652] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697656] ata1.00: cmd 61/40:d8:b8:6f:88/05:00:0a:00:00/40 tag 27 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697657] ata1.00: status: { DRDY }
[28983.697658] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697661] ata1.00: cmd 61/40:e0:f8:74:88/05:00:0a:00:00/40 tag 28 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697662] ata1.00: status: { DRDY }
[28983.697663] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697666] ata1.00: cmd 61/40:e8:38:7a:88/05:00:0a:00:00/40 tag 29 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697667] ata1.00: status: { DRDY }
[28983.697668] ata1.00: failed command: WRITE FPDMA QUEUED
[28983.697672] ata1.00: cmd 61/40:f0:78:7f:88/05:00:0a:00:00/40 tag 30 ncq 688128 out
                        res 40/00:3c:78:a9:88/00:00:0a:00:00/40 Emask 0x10 (ATA bus error)
[28983.697672] ata1.00: status: { DRDY }
[28983.697676] ata1: hard resetting link
[28984.017356] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[28984.022612] ata1.00: configured for UDMA/133
[28984.022740] ata1: EH complete
[28991.611732] Suspending console(s) (use no_console_suspend to debug)
[28992.183822] sd 1:0:0:0: [sdb] Synchronizing SCSI cache
[28992.186569] sd 1:0:0:0: [sdb] Stopping disk
[28992.186604] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[28992.189594] sd 0:0:0:0: [sda] Stopping disk
[28992.967426] PM: suspend of devices complete after 1351.349 msecs
[28992.999461] PM: late suspend of devices complete after 31.990 msecs
[28993.000058] ehci-pci 0000:00:1d.0: System wakeup enabled by ACPI
[28993.000306] xhci_hcd 0000:00:14.0: System wakeup enabled by ACPI
[28993.016463] PM: noirq suspend of devices complete after 16.978 msecs
[28993.017024] ACPI: Preparing to enter system sleep state S3
[28993.017349] PM: Saving platform NVS memory
[28993.017357] Disabling non-boot CPUs ...
[28993.017389] intel_pstate CPU 1 exiting
[28993.018727] kvm: disabling virtualization on CPU1
[28993.019320] smpboot: CPU 1 is now offline
[28993.019646] intel_pstate CPU 2 exiting

In the interim, to work around this problem, we can force the device to run in a degraded mode. I'm not sure if it is really a degraded mode, or whether the device was falsely advertised as 6 Gbps capable. Time will tell, but for now, force it to run at 3 Gbps; so far, I haven't run into the above mentioned problems. To force the 3 Gbps link speed, apply the following kernel parameter.

rrs@learner:~$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.0.4+ root=/dev/mapper/sdb_crypt ro cgroup_enable=memory swapaccount=1 rootflags=data=writeback libata.force=1:3 quiet
16:42 ♒♒♒   ☺    

And then verify it... As you can see below, I've forced it only for ata1, because I want my SSD drive (on ata2) to keep running at full speed. I've since done enough I/O of the kind that earlier resulted in the kernel spitting out SATA errors; with this workaround, the kernel does not spit any error messages.

[    1.273365] libata version 3.00 loaded.
[    1.287290] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 4 ports 6 Gbps 0x3 impl SATA mode
[    1.288238] ata1: FORCE: PHY spd limit set to 3.0Gbps
[    1.288240] ata1: SATA max UDMA/133 abar m2048@0xb051b000 port 0xb051b100 irq 41
[    1.288242] ata2: SATA max UDMA/133 abar m2048@0xb051b000 port 0xb051b180 irq 41
[    1.288244] ata3: DUMMY
[    1.288245] ata4: DUMMY
[    1.606971] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[    1.607906] ata1.00: ATA-9: WDC WD5000M22K-24Z1LT0-SSHD-16GB, 02.01A03, max UDMA/133
[    1.607910] ata1.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[    1.608856] ata1.00: configured for UDMA/133
[    1.609106] scsi 0:0:0:0: Direct-Access     ATA      WDC WD5000M22K-2 1A03 PQ: 0 ANSI: 5
[    1.927167] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    1.928980] ata2.00: ATA-8: KINGSTON SM2280S3120G, S8FM06.A, max UDMA/133
[    1.928983] ata2.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    1.929616] ata2.00: configured for UDMA/133

And the throughput you get out of the WD SATA SSHD drive, with its link capped at 3.0 Gbps, is:

rrs@learner:/media/SSHD/tmp$ while true; do dd if=/dev/zero of=foo.img bs=1M count=20000; sync; rm -rf foo.img; sync; done
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 202.014 s, 104 MB/s
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 206.111 s, 102 MB/s

Hannes Reinecke has submitted patches with NCQ enhancements for Linux 4.1, which I hope will resolve these problems. Another option is to disable NCQ for the drive, or else blacklist the make/model in drivers/ata/libata-core.c.
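If you want to test the NCQ theory on your own machine without rebuilding anything, the queue depth can also be lowered at runtime through sysfs. A sketch (sda is the SSHD in my setup, adjust to yours):

# a queue depth of 1 effectively disables NCQ for this device
echo 1 | sudo tee /sys/block/sda/device/queue_depth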

By the time I finished this blog entry draft, I had run enough tests to conclude that this does not look like an NCQ problem, because in the degraded mode too, the drive runs with NCQ enabled (check above).

rrs@learner:~$ sudo fstrim -vv /media/SSHD
/media/SSHD: 268.2 GiB (287930949632 bytes) trimmed
16:58 ♒♒♒   ☺    

rrs@learner:~$ sudo fstrim -vv /
[sudo] password for rrs:
/: 64 GiB (68650749952 bytes) trimmed
16:56 ♒♒♒   ☺    

Another interesting feature of this drive is its support for TRIM / DISCARD: the drive's FTL accepts the TRIM command, as the fstrim runs above show. Of course, you need to ensure that you have discard enabled in all the layers; in my case, SATA + device mapper (crypt and LVM) + file system (ext4).
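As a rough sketch of what enabling discard in all the layers means (device names, UUIDs and the volume layout below are illustrative placeholders, not my exact setup):

# /etc/crypttab: pass discards through dm-crypt
sdb_crypt UUID=<luks-partition-uuid> none luks,discard

# /etc/lvm/lvm.conf (devices section): let LVM issue discards
# when logical volumes are removed or reduced
issue_discards = 1

# /etc/fstab: either mount with the discard option for online TRIM,
# or leave it out and run fstrim periodically as shown above
/dev/mapper/vg-root  /  ext4  defaults,discard  0  1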

Display

The overall display of this device is amazing. It is large enough to give you a vibrant look, and at 1920x1080 resolution things look good. Display support was available out of the box.

There were some suspend / resume hangs that occurred with kernels < 4.x. The issue was root-caused and fixed for Linux 4.0.

You may still notice the following kernel messages, though they have not been problematic for me so far.

[28977.518114] PM: thaw of devices complete after 3607.979 msecs
[28977.590389] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.590582] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.591095] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.591185] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.591368] acpi device:30: Cannot transition to power state D3cold for parent in (unknown)
[28977.591911] pci_bus 0000:01: Allocating resources
[28977.591933] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.592093] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[28977.592401] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment

You may need to disable the Intel Management Engine Interface (mei.ko), in case you run into suspend/resume problems.

rrs@learner:/media/SSHD/tmp$ cat /etc/modprobe.d/intel-mei-blacklist.conf
blacklist mei
blacklist mei-me
17:01 ♒♒♒   ☺    

You may also run into the following kernel oops during suspend/resume. Below, you see two iterations of sleep, because the machine first hibernates and then suspends (s2both).

[  180.470206] Syncing filesystems ... done.
[  180.473337] Freezing user space processes ... (elapsed 0.001 seconds) done.
[  180.475210] PM: Marking nosave pages: [mem 0x00000000-0x00000fff]
[  180.475213] PM: Marking nosave pages: [mem 0x0006f000-0x0006ffff]
[  180.475215] PM: Marking nosave pages: [mem 0x00088000-0x000fffff]
[  180.475220] PM: Marking nosave pages: [mem 0x97360000-0x97b5ffff]
[  180.475274] PM: Marking nosave pages: [mem 0x9c36f000-0x9cffefff]
[  180.475356] PM: Marking nosave pages: [mem 0x9d000000-0xffffffff]
[  180.476877] PM: Basic memory bitmaps created
[  180.477003] PM: Preallocating image memory... done (allocated 380227 pages)
[  180.851800] PM: Allocated 1520908 kbytes in 0.37 seconds (4110.56 MB/s)
[  180.851802] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[  180.853355] Suspending console(s) (use no_console_suspend to debug)
[  180.853520] wlan0: deauthenticating from c4:6e:1f:d0:67:26 by local choice (Reason: 3=DEAUTH_LEAVING)
[  180.864159] cfg80211: Calling CRDA to update world regulatory domain
[  181.172222] PM: freeze of devices complete after 319.294 msecs
[  181.196080] ------------[ cut here ]------------
[  181.196124] WARNING: CPU: 3 PID: 3707 at drivers/gpu/drm/i915/intel_display.c:7904 hsw_enable_pc8+0x659/0x7c0 [i915]()
[  181.196125] SPLL enabled
[  181.196159] Modules linked in: rfcomm ctr ccm bnep pci_stub vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) bridge stp llc xt_conntrack iptable_filter ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_CHECKSUM xt_tcpudp iptable_mangle ip_tables x_tables nls_utf8 nls_cp437 vfat fat rtsx_usb_ms memstick snd_hda_codec_hdmi joydev mousedev hid_sensor_rotation hid_sensor_incl_3d hid_sensor_als hid_sensor_accel_3d hid_sensor_magn_3d hid_sensor_gyro_3d hid_sensor_trigger industrialio_triggered_buffer kfifo_buf industrialio hid_sensor_iio_common iTCO_wdt iTCO_vendor_support hid_multitouch x86_pkg_temp_thermal intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm btusb hid_sensor_hub bluetooth uvcvideo videobuf2_vmalloc videobuf2_memops
[  181.196203]  videobuf2_core v4l2_common videodev media pcspkr evdev mac_hid arc4 psmouse serio_raw efivars i2c_i801 rtl8723be btcoexist rtl8723_common rtl_pci rtlwifi mac80211 snd_soc_rt5640 cfg80211 snd_soc_rl6231 snd_hda_codec_realtek i915 snd_soc_core snd_hda_codec_generic ideapad_laptop ac snd_compress dw_dmac sparse_keymap drm_kms_helper rfkill battery dw_dmac_core snd_hda_intel snd_pcm_dmaengine snd_soc_sst_acpi snd_hda_controller video 8250_dw regmap_i2c snd_hda_codec drm snd_hwdep snd_pcm spi_pxa2xx_platform i2c_designware_platform soc_button_array snd_timer i2c_designware_core snd i2c_algo_bit soundcore shpchp lpc_ich button processor fuse ipv6 autofs4 ext4 crc16 jbd2 mbcache btrfs xor raid6_pq algif_skcipher af_alg dm_crypt dm_mod sg usbhid sd_mod rtsx_usb_sdmmc rtsx_usb crct10dif_pclmul
[  181.196220]  crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd ahci libahci libata xhci_pci ehci_pci xhci_hcd ehci_hcd scsi_mod usbcore usb_common thermal fan thermal_sys hwmon i2c_hid hid i2c_core sdhci_acpi sdhci mmc_core gpio_lynxpoint
[  181.196224] CPU: 3 PID: 3707 Comm: kworker/u16:7 Tainted: G           O    4.0.4+ #14
[  181.196225] Hardware name: LENOVO 20344/INVALID, BIOS 96CN29WW(V1.15) 10/16/2014
[  181.196230] Workqueue: events_unbound async_run_entry_fn
[  181.196233]  0000000000000000 ffffffffa0706f68 ffffffff81522198 ffff880064debc88
[  181.196235]  ffffffff8106c5b1 ffff880251460000 ffff880250f83b68 ffff880250f83b78
[  181.196237]  ffff880250f83800 0000000000000001 ffffffff8106c62a ffffffffa071407c
[  181.196238] Call Trace:
[  181.196248]  [<ffffffff81522198>] ? dump_stack+0x40/0x50
[  181.196251]  [<ffffffff8106c5b1>] ? warn_slowpath_common+0x81/0xb0
[  181.196254]  [<ffffffff8106c62a>] ? warn_slowpath_fmt+0x4a/0x50
[  181.196278]  [<ffffffffa06ae349>] ? hsw_enable_pc8+0x659/0x7c0 [i915]
[  181.196289]  [<ffffffffa0643ee0>] ? intel_suspend_complete+0xe0/0x6e0 [i915]
[  181.196300]  [<ffffffffa0644501>] ? i915_drm_suspend_late+0x21/0x90 [i915]
[  181.196311]  [<ffffffffa0644690>] ? i915_pm_poweroff_late+0x40/0x40 [i915]
[  181.196318]  [<ffffffff813fa7ba>] ? dpm_run_callback+0x4a/0x100
[  181.196321]  [<ffffffff813fb010>] ? __device_suspend_late+0xa0/0x180
[  181.196324]  [<ffffffff813fb10e>] ? async_suspend_late+0x1e/0xa0
[  181.196326]  [<ffffffff8108b973>] ? async_run_entry_fn+0x43/0x160
[  181.196330]  [<ffffffff81083a5d>] ? process_one_work+0x14d/0x3f0
[  181.196332]  [<ffffffff81084463>] ? worker_thread+0x53/0x480
[  181.196334]  [<ffffffff81084410>] ? rescuer_thread+0x300/0x300
[  181.196338]  [<ffffffff81089191>] ? kthread+0xc1/0xe0
[  181.196341]  [<ffffffff810890d0>] ? kthread_create_on_node+0x180/0x180
[  181.196346]  [<ffffffff81527898>] ? ret_from_fork+0x58/0x90
[  181.196349]  [<ffffffff810890d0>] ? kthread_create_on_node+0x180/0x180
[  181.196350] ---[ end trace 8e339004db298838 ]---
[  181.220094] PM: late freeze of devices complete after 47.936 msecs
[  181.220972] PM: noirq freeze of devices complete after 0.875 msecs
[  181.221577] ACPI: Preparing to enter system sleep state S4
[  181.221886] PM: Saving platform NVS memory
[  181.222702] Disabling non-boot CPUs ...
[  181.222731] intel_pstate CPU 1 exiting
[  181.224041] kvm: disabling virtualization on CPU1
[  181.224680] smpboot: CPU 1 is now offline
[  181.225121] intel_pstate CPU 2 exiting
[  181.226407] kvm: disabling virtualization on CPU2
[  181.227025] smpboot: CPU 2 is now offline
[  181.227441] intel_pstate CPU 3 exiting
[  181.227728] Broke affinity for irq 19
[  181.227747] Broke affinity for irq 41
[  181.228771] kvm: disabling virtualization on CPU3
[  181.228793] smpboot: CPU 3 is now offline
[  181.229624] PM: Creating hibernation image:
[  181.563651] PM: Need to copy 379053 pages
[  181.563655] PM: Normal pages needed: 379053 + 1024, available pages: 1697704
[  182.472910] PM: Hibernation image created (379053 pages copied)
[  181.232347] PM: Restoring platform NVS memory
[  181.233171] Enabling non-boot CPUs ...
[  181.233246] x86: Booting SMP configuration:
[  181.233248] smpboot: Booting Node 0 Processor 1 APIC 0x1
[  181.246771] kvm: enabling virtualization on CPU1
[  181.249339] CPU1 is up
[  181.249389] smpboot: Booting Node 0 Processor 2 APIC 0x2
[  181.262313] kvm: enabling virtualization on CPU2
[  181.264853] CPU2 is up
[  181.264903] smpboot: Booting Node 0 Processor 3 APIC 0x3
[  181.277831] kvm: enabling virtualization on CPU3
[  181.280317] CPU3 is up
[  181.288471] ACPI: Waking up from system sleep state S4
[  182.340655] PM: noirq thaw of devices complete after 0.637 msecs
[  182.378087] PM: early thaw of devices complete after 37.428 msecs
[  182.378436] rtlwifi: rtlwifi: wireless switch is on
[  182.451021] rtc_cmos 00:01: System wakeup disabled by ACPI
[  182.697575] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  182.697617] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  182.699248] ata1.00: configured for UDMA/133
[  182.699911] ata2.00: configured for UDMA/133
[  182.699917] ahci 0000:00:1f.2: port does not support device sleep
[  186.059539] PM: thaw of devices complete after 3685.338 msecs
[  186.134292] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.134479] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.134992] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.135080] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.135266] acpi device:30: Cannot transition to power state D3cold for parent in (unknown)
[  186.135950] pci_bus 0000:01: Allocating resources
[  186.135974] pcieport 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000
[  186.135980] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.136049] pcieport 0000:00:1c.0: res[15]=[mem 0x00100000-0x000fffff 64bit pref] get_res_add_size add_size 200000
[  186.136072] pcieport 0000:00:1c.0: BAR 15: assigned [mem 0x9fb00000-0x9fcfffff 64bit pref]
[  186.136174] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  186.136490] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  199.454497] Suspending console(s) (use no_console_suspend to debug)
[  200.024190] sd 1:0:0:0: [sdb] Synchronizing SCSI cache
[  200.024356] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  200.025359] sd 1:0:0:0: [sdb] Stopping disk
[  200.028701] sd 0:0:0:0: [sda] Stopping disk
[  201.106085] PM: suspend of devices complete after 1651.336 msecs
[  201.106591] ------------[ cut here ]------------
[  201.106628] WARNING: CPU: 0 PID: 3725 at drivers/gpu/drm/i915/intel_display.c:7904 hsw_enable_pc8+0x659/0x7c0 [i915]()
[  201.106628] SPLL enabled
[  201.106656] Modules linked in: rfcomm ctr ccm bnep pci_stub vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) bridge stp llc xt_conntrack iptable_filter ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_CHECKSUM xt_tcpudp iptable_mangle ip_tables x_tables nls_utf8 nls_cp437 vfat fat rtsx_usb_ms memstick snd_hda_codec_hdmi joydev mousedev hid_sensor_rotation hid_sensor_incl_3d hid_sensor_als hid_sensor_accel_3d hid_sensor_magn_3d hid_sensor_gyro_3d hid_sensor_trigger industrialio_triggered_buffer kfifo_buf industrialio hid_sensor_iio_common iTCO_wdt iTCO_vendor_support hid_multitouch x86_pkg_temp_thermal intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm btusb hid_sensor_hub bluetooth uvcvideo videobuf2_vmalloc videobuf2_memops
[  201.106694]  videobuf2_core v4l2_common videodev media pcspkr evdev mac_hid arc4 psmouse serio_raw efivars i2c_i801 rtl8723be btcoexist rtl8723_common rtl_pci rtlwifi mac80211 snd_soc_rt5640 cfg80211 snd_soc_rl6231 snd_hda_codec_realtek i915 snd_soc_core snd_hda_codec_generic ideapad_laptop ac snd_compress dw_dmac sparse_keymap drm_kms_helper rfkill battery dw_dmac_core snd_hda_intel snd_pcm_dmaengine snd_soc_sst_acpi snd_hda_controller video 8250_dw regmap_i2c snd_hda_codec drm snd_hwdep snd_pcm spi_pxa2xx_platform i2c_designware_platform soc_button_array snd_timer i2c_designware_core snd i2c_algo_bit soundcore shpchp lpc_ich button processor fuse ipv6 autofs4 ext4 crc16 jbd2 mbcache btrfs xor raid6_pq algif_skcipher af_alg dm_crypt dm_mod sg usbhid sd_mod rtsx_usb_sdmmc rtsx_usb crct10dif_pclmul
[  201.106711]  crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd ahci libahci libata xhci_pci ehci_pci xhci_hcd ehci_hcd scsi_mod usbcore usb_common thermal fan thermal_sys hwmon i2c_hid hid i2c_core sdhci_acpi sdhci mmc_core gpio_lynxpoint
[  201.106714] CPU: 0 PID: 3725 Comm: kworker/u16:25 Tainted: G        W  O    4.0.4+ #14
[  201.106715] Hardware name: LENOVO 20344/INVALID, BIOS 96CN29WW(V1.15) 10/16/2014
[  201.106720] Workqueue: events_unbound async_run_entry_fn
[  201.106723]  0000000000000000 ffffffffa0706f68 ffffffff81522198 ffff880064dd7c88
[  201.106725]  ffffffff8106c5b1 ffff880251460000 ffff880250f83b68 ffff880250f83b78
[  201.106727]  ffff880250f83800 0000000000000002 ffffffff8106c62a ffffffffa071407c
[  201.106728] Call Trace:
[  201.106737]  [<ffffffff81522198>] ? dump_stack+0x40/0x50
[  201.106740]  [<ffffffff8106c5b1>] ? warn_slowpath_common+0x81/0xb0
[  201.106742]  [<ffffffff8106c62a>] ? warn_slowpath_fmt+0x4a/0x50
[  201.106765]  [<ffffffffa06ae349>] ? hsw_enable_pc8+0x659/0x7c0 [i915]
[  201.106776]  [<ffffffffa0643ee0>] ? intel_suspend_complete+0xe0/0x6e0 [i915]
[  201.106786]  [<ffffffffa0644501>] ? i915_drm_suspend_late+0x21/0x90 [i915]
[  201.106797]  [<ffffffffa0644690>] ? i915_pm_poweroff_late+0x40/0x40 [i915]
[  201.106802]  [<ffffffff813fa7ba>] ? dpm_run_callback+0x4a/0x100
[  201.106805]  [<ffffffff813fb010>] ? __device_suspend_late+0xa0/0x180
[  201.106809]  [<ffffffff813fb10e>] ? async_suspend_late+0x1e/0xa0
[  201.106811]  [<ffffffff8108b973>] ? async_run_entry_fn+0x43/0x160
[  201.106813]  [<ffffffff81083a5d>] ? process_one_work+0x14d/0x3f0
[  201.106815]  [<ffffffff81084463>] ? worker_thread+0x53/0x480
[  201.106818]  [<ffffffff81084410>] ? rescuer_thread+0x300/0x300
[  201.106821]  [<ffffffff81089191>] ? kthread+0xc1/0xe0
[  201.106824]  [<ffffffff810890d0>] ? kthread_create_on_node+0x180/0x180
[  201.106827]  [<ffffffff81527898>] ? ret_from_fork+0x58/0x90
[  201.106830]  [<ffffffff810890d0>] ? kthread_create_on_node+0x180/0x180
[  201.106832] ---[ end trace 8e339004db298839 ]---
[  201.130052] PM: late suspend of devices complete after 23.960 msecs
[  201.130725] ehci-pci 0000:00:1d.0: System wakeup enabled by ACPI
[  201.130885] xhci_hcd 0000:00:14.0: System wakeup enabled by ACPI
[  201.146986] PM: noirq suspend of devices complete after 16.930 msecs
[  201.147591] ACPI: Preparing to enter system sleep state S3
[  201.147942] PM: Saving platform NVS memory
[  201.147948] Disabling non-boot CPUs ...
[  201.147999] intel_pstate CPU 1 exiting
[  201.149324] kvm: disabling virtualization on CPU1
[  201.149337] smpboot: CPU 1 is now offline
[  201.149640] intel_pstate CPU 2 exiting
[  201.151096] kvm: disabling virtualization on CPU2
[  201.151108] smpboot: CPU 2 is now offline
[  201.152017] intel_pstate CPU 3 exiting
[  201.153250] kvm: disabling virtualization on CPU3
[  201.153256] smpboot: CPU 3 is now offline
[  201.156229] ACPI: Low-level resume complete
[  201.156307] PM: Restoring platform NVS memory
[  201.160033] CPU0 microcode updated early to revision 0x1c, date = 2014-07-03
[  201.160190] Enabling non-boot CPUs ...
[  201.160241] x86: Booting SMP configuration:
[  201.160243] smpboot: Booting Node 0 Processor 1 APIC 0x1
[  201.172665] kvm: enabling virtualization on CPU1
[  201.174982] CPU1 is up
[  201.175013] smpboot: Booting Node 0 Processor 2 APIC 0x2
[  201.187569] CPU2 microcode updated early to revision 0x1c, date = 2014-07-03
[  201.188796] kvm: enabling virtualization on CPU2
[  201.191130] CPU2 is up
[  201.191158] smpboot: Booting Node 0 Processor 3 APIC 0x3
[  201.203297] kvm: enabling virtualization on CPU3
[  201.205679] CPU3 is up
[  201.210414] ACPI: Waking up from system sleep state S3
[  201.224617] ehci-pci 0000:00:1d.0: System wakeup disabled by ACPI
[  201.332523] xhci_hcd 0000:00:14.0: System wakeup disabled by ACPI
[  201.332634] PM: noirq resume of devices complete after 121.623 msecs
[  201.372718] PM: early resume of devices complete after 40.058 msecs
[  201.372892] rtlwifi: rtlwifi: wireless switch is on
[  201.373270] sd 0:0:0:0: [sda] Starting disk
[  201.373271] sd 1:0:0:0: [sdb] Starting disk
[  201.445954] rtc_cmos 00:01: System wakeup disabled by ACPI
[  201.692510] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  201.694719] ata2.00: configured for UDMA/133
[  201.694724] ahci 0000:00:1f.2: port does not support device sleep
[  201.836724] usb 2-4: reset high-speed USB device number 2 using xhci_hcd
[  201.890158] psmouse serio1: synaptics: queried max coordinates: x [..5702], y [..4730]
[  201.930768] psmouse serio1: synaptics: queried min coordinates: x [1242..], y [1124..]
[  202.076784] usb 2-5: reset full-speed USB device number 3 using xhci_hcd
[  202.205100] usb 2-5: ep 0x2 - rounding interval to 64 microframes, ep desc says 80 microframes
[  202.316799] usb 2-7: reset full-speed USB device number 5 using xhci_hcd
[  202.444945] usb 2-7: No LPM exit latency info found, disabling LPM.
[  202.556817] usb 2-8: reset full-speed USB device number 6 using xhci_hcd
[  202.908691] usb 2-6: reset high-speed USB device number 4 using xhci_hcd
[  203.932602] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  204.044890] ata1.00: configured for UDMA/133
[  206.228698] PM: resume of devices complete after 4855.892 msecs
[  206.380738] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.383152] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.385775] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.388066] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.390415] acpi device:30: Cannot transition to power state D3cold for parent in (unknown)
[  206.393078] pci_bus 0000:01: Allocating resources
[  206.393098] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.395470] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.397927] i915 0000:00:02.0: BAR 6: [??? 0x00000000 flags 0x2] has bogus alignment
[  206.518516] Restarting kernel threads ... done.
[  206.518812] PM: Basic memory bitmaps freed
[  206.518816] Restarting tasks ... done.

There is one more occasional kernel oops (below), which I believe again has to do with the Intel graphics driver.

[ 8770.745396] ------------[ cut here ]------------
[ 8770.745441] WARNING: CPU: 0 PID: 7206 at drivers/gpu/drm/i915/intel_display.c:9756 intel_check_page_flip+0xd2/0xe0 [i915]()
[ 8770.745444] Kicking stuck page flip: queued at 466186, now 466191
[ 8770.745445] Modules linked in: cpuid rfcomm ctr ccm bnep pci_stub vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) bridge stp llc xt_conntrack iptable_filter ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_CHECKSUM xt_tcpudp iptable_mangle ip_tables x_tables nls_utf8 nls_cp437 vfat fat rtsx_usb_ms memstick snd_hda_codec_hdmi joydev mousedev hid_sensor_rotation hid_sensor_incl_3d hid_sensor_als hid_sensor_accel_3d hid_sensor_magn_3d hid_sensor_gyro_3d hid_sensor_trigger industrialio_triggered_buffer kfifo_buf industrialio hid_sensor_iio_common iTCO_wdt iTCO_vendor_support hid_multitouch x86_pkg_temp_thermal intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm btusb hid_sensor_hub bluetooth uvcvideo videobuf2_vmalloc videobuf2_memops
[ 8770.745484]  videobuf2_core v4l2_common videodev media pcspkr evdev mac_hid arc4 psmouse serio_raw efivars i2c_i801 rtl8723be btcoexist rtl8723_common rtl_pci rtlwifi mac80211 snd_soc_rt5640 cfg80211 snd_soc_rl6231 snd_hda_codec_realtek i915 snd_soc_core snd_hda_codec_generic ideapad_laptop ac snd_compress dw_dmac sparse_keymap drm_kms_helper rfkill battery dw_dmac_core snd_hda_intel snd_pcm_dmaengine snd_soc_sst_acpi snd_hda_controller video 8250_dw regmap_i2c snd_hda_codec drm snd_hwdep snd_pcm spi_pxa2xx_platform i2c_designware_platform soc_button_array snd_timer i2c_designware_core snd i2c_algo_bit soundcore shpchp lpc_ich button processor fuse ipv6 autofs4 ext4 crc16 jbd2 mbcache btrfs xor raid6_pq algif_skcipher af_alg dm_crypt dm_mod sg usbhid sd_mod rtsx_usb_sdmmc rtsx_usb crct10dif_pclmul
[ 8770.745536]  crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd ahci libahci libata xhci_pci ehci_pci xhci_hcd ehci_hcd scsi_mod usbcore usb_common thermal fan thermal_sys hwmon i2c_hid hid i2c_core sdhci_acpi sdhci mmc_core gpio_lynxpoint
[ 8770.745561] CPU: 0 PID: 7206 Comm: icedove Tainted: G        W  O    4.0.4+ #14
[ 8770.745563] Hardware name: LENOVO 20344/INVALID, BIOS 96CN29WW(V1.15) 10/16/2014
[ 8770.745565]  0000000000000000 ffffffffa0706f68 ffffffff81522198 ffff88025f203dc8
[ 8770.745569]  ffffffff8106c5b1 ffff880250f83800 ffff880254dcc000 0000000000000000
[ 8770.745572]  0000000000000000 0000000000000000 ffffffff8106c62a ffffffffa0709d50
[ 8770.745575] Call Trace:
[ 8770.745577]  <IRQ>  [<ffffffff81522198>] ? dump_stack+0x40/0x50
[ 8770.745592]  [<ffffffff8106c5b1>] ? warn_slowpath_common+0x81/0xb0
[ 8770.745595]  [<ffffffff8106c62a>] ? warn_slowpath_fmt+0x4a/0x50
[ 8770.745616]  [<ffffffffa06a0bb3>] ? __intel_pageflip_stall_check+0x113/0x120 [i915]
[ 8770.745634]  [<ffffffffa06af042>] ? intel_check_page_flip+0xd2/0xe0 [i915]
[ 8770.745652]  [<ffffffffa067cde1>] ? ironlake_irq_handler+0x2e1/0x1010 [i915]
[ 8770.745657]  [<ffffffff81092d1a>] ? check_preempt_curr+0x5a/0xa0
[ 8770.745663]  [<ffffffff812d66c2>] ? timerqueue_del+0x22/0x70
[ 8770.745668]  [<ffffffff810bb7d5>] ? handle_irq_event_percpu+0x75/0x190
[ 8770.745672]  [<ffffffff8101b945>] ? read_tsc+0x5/0x10
[ 8770.745676]  [<ffffffff810bb928>] ? handle_irq_event+0x38/0x50
[ 8770.745680]  [<ffffffff810be841>] ? handle_edge_irq+0x71/0x120
[ 8770.745685]  [<ffffffff810153bd>] ? handle_irq+0x1d/0x30
[ 8770.745689]  [<ffffffff8152a866>] ? do_IRQ+0x46/0xe0
[ 8770.745694]  [<ffffffff8152866d>] ? common_interrupt+0x6d/0x6d
[ 8770.745695]  <EOI>  [<ffffffff8152794d>] ? system_call_fastpath+0x16/0x1b
[ 8770.745701] ---[ end trace 8e339004db29883a ]---

Network

In my case, the laptop came with the Realtek Wireless device (details above in lspci output). Note: The machine has no wired interface.

While the Intel WiFi devices shipped with some variants of this laptop have their own share of problems, this device (rtl8723be) works out of the box. But only for a while: there is no certain pattern to what triggers the bug, but once triggered, the network just freezes. Nothing is logged.

If your Yoga 2 13 came with the RTL chip, the following workaround may help avoid the network issues.

rrs@learner:/media/SSHD/tmp$ cat /etc/modprobe.d/rtl8723be.conf
options rtl8723be fwlps=0
17:06 ♒♒♒   ☺    
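Note that the option only takes effect when the driver is loaded, so after creating that file either reboot or reload the module (assuming nothing is holding it busy):

sudo modprobe -r rtl8723be && sudo modprobe rtl8723be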

MCE

Almost every boot, the kernel eventually reports MCE errors. This is not something I understand well, but so far it hasn't caused any visible issues. And from what I have googled so far, nobody seems to have fixed it anywhere.

So, with fingers crossed, let's just hope this never translates into a real problem.

What the kernel reports of the CPU's capabilities.

[    0.041496] mce: CPU supports 7 MCE banks
[  299.540930] mce: [Hardware Error]: Machine check events logged

The MCE logs extracted from the buffer.

mcelog: failed to prefill DIMM database from DMI data
Hardware event. This is not a software error.
MCE 0
CPU 0 BANK 5
MISC 38a0000086 ADDR fef81880
TIME 1432455005 Sun May 24 13:40:05 2015
MCG status:
MCi status:
Error overflow
Uncorrected error
MCi_MISC register valid
MCi_ADDR register valid
Processor context corrupt
MCA: corrected filtering (some unreported errors in same region)
Generic CACHE Level-2 Generic Error
STATUS ee0000000040110a MCGSTATUS 0
MCGCAP c07 APICID 0 SOCKETID 0
CPUID Vendor Intel Family 6 Model 69
Hardware event. This is not a software error.
MCE 1
CPU 0 BANK 6
MISC 78a0000086 ADDR fef81780
TIME 1432455005 Sun May 24 13:40:05 2015
MCG status:
MCi status:
Uncorrected error
MCi_MISC register valid
MCi_ADDR register valid
Processor context corrupt
MCA: corrected filtering (some unreported errors in same region)
Generic CACHE Level-2 Generic Error
STATUS ae0000000040110a MCGSTATUS 0
MCGCAP c07 APICID 0 SOCKETID 0
CPUID Vendor Intel Family 6 Model 69
Hardware event. This is not a software error.
MCE 2
CPU 0 BANK 5
MISC 38a0000086 ADDR fef81880
TIME 1432455114 Sun May 24 13:41:54 2015
MCG status:
MCi status:
Error overflow
Uncorrected error
MCi_MISC register valid
MCi_ADDR register valid
Processor context corrupt
MCA: corrected filtering (some unreported errors in same region)
Generic CACHE Level-2 Generic Error
STATUS ee0000000040110a MCGSTATUS 0
MCGCAP c07 APICID 0 SOCKETID 0
CPUID Vendor Intel Family 6 Model 69
Hardware event. This is not a software error.
MCE 3
CPU 0 BANK 6
MISC 78a0000086 ADDR fef81780
TIME 1432455114 Sun May 24 13:41:54 2015
MCG status:
MCi status:
Uncorrected error
MCi_MISC register valid
MCi_ADDR register valid
Processor context corrupt
MCA: corrected filtering (some unreported errors in same region)
Generic CACHE Level-2 Generic Error
STATUS ae0000000040110a MCGSTATUS 0
MCGCAP c07 APICID 0 SOCKETID 0
CPUID Vendor Intel Family 6 Model 69


19 May, 2015 11:55AM by Ritesh Raj Sarraf

hackergotchi for Steve McIntyre

Steve McIntyre

Easier installation of Jessie on the Applied Micro X-Gene

As shipped, Debian Jessie (8.0) did not include kernel support for the USB controller on APM X-Gene based machines like the Mustang. In fact, at the time of writing this, that support has not yet gone upstream into the mainline Linux kernel either, but patches have been posted by Mark Langsdorf from Red Hat.

This means that installing Debian is more awkward than it could be on these machines. They don't normally have optical drives fitted, so the neat isohybrid CD images that we have made in Debian so far won't work very well at all. Booting via UEFI from a USB stick will work, but then the installer won't be able to read from the USB stick at all and you're stuck. :-( The best way so far to install Debian has been a network installation using tftp etc.

Well, until now... :-)

I've patched the Debian Jessie kernel, then re-built the installer and a netinst image to use them. I've put a copy of that image up at http://cdimage.debian.org/cdimage/unofficial/arm64-mustang/ with more instructions on how to use it. I'm just submitting the patch for inclusion into the Jessie stable kernel, hopefully ready to go into the 8.1 point release.
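For the impatient, usage boils down to writing the image to a USB stick and booting it via UEFI. A sketch (the image filename is illustrative, take the real one from the directory listing, and triple-check the target device before writing):

# <netinst-image>.iso is a placeholder; /dev/sdX is your USB stick
wget http://cdimage.debian.org/cdimage/unofficial/arm64-mustang/<netinst-image>.iso
sudo dd if=<netinst-image>.iso of=/dev/sdX bs=4M
sync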

19 May, 2015 11:18AM

Andrew Cater

CI20 - MIPS dev. board - first impressions.

Annoyingly, I've bought one of these just before the form factor changes and it becomes a nice square board.

Up and running immediately out of the box, which is nice.

The kernel supplied on NAND flash is recent enough that it supports CONFIG_FHANDLE which is needed for the upgrade to Jessie.

The instructions for the Jessie upgrade are straightforward and appear to be working correctly: they also suggest apt-get autoremove and apt-get autoclean to clean up, which is a very nice touch.

The sources.list in apt was already pointing to my country's Debian mirror which was even nicer.

Quite a good experience immediately from unboxing: it also adds to the number of machine architectures I've run Debian on:
alpha, amd64, arm, armel, armhf, i386, sparc - not bad for a start.

It's a bit slow - but it's a SoC, so what can you expect? The PowerVR graphics demos were spectacular, but the drivers are very definitely non-free - you can't have everything.

[I do notice that there is an FSF-friendly Debian variant, though not yet certified as such - presumably not including PowerVR drivers]

(Lots of occurrences of the word "nice" in this post that I've just noticed. It's either understatement or just that I'm British)

19 May, 2015 06:52AM by Andrew Cater (noreply@blogger.com)

May 18, 2015

hackergotchi for Jonathan McDowell

Jonathan McDowell

Stepping down from SPI

I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.

My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.

It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.

I’m happy to answer questions of anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.

18 May, 2015 10:38PM

hackergotchi for Daniel Pocock

Daniel Pocock

Free and open WebRTC for the Fedora Community

In January 2014, we launched the rtc.debian.org service for the Debian community. An equivalent service has been in testing for the Fedora community at FedRTC.org.

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS and JavaScript. PHP is only used for account creation, the actual WebRTC experience requires no server-side web framework, just a SIP proxy.
  • The web code is all available in a Github repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.

Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from rtc.debian.org or from freephonebox.net.

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions are hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.

18 May, 2015 05:48PM by Daniel.Pocock

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, April 2015

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 81.75 work hours have been dispatched among 5 paid contributors (20.75 hours were unused hours of Ben and Holger that were re-dispatched to other contributors). Their reports are available:

Evolution of the situation

May has seen a small increase in terms of sponsored hours (66.25 hours per month) and June is going to do even better with at least a new gold sponsor. We will have no problems sustaining the increased workload it implies since three Debian developers joined the team of contributors paid by Freexian (Antoine Beaupré, Santiago Ruano Rincón, Scott Kitterman).

The Jessie release probably brought some attention to the Debian LTS project, since we announced that Jessie will benefit from 5 years of support. Let’s hope that the trend continues in the following months and that we reach our first milestone of funding the equivalent of a half-time position.

In terms of security updates waiting to be handled, the situation is mixed: the dla-needed.txt file lists 28 packages awaiting an update (12 fewer than last month), while the list of open vulnerabilities in Squeeze shows about 60 affected packages in total (4 more than last month). The extra hours helped to make a good stride on the packages awaiting an update, but there are many new vulnerabilities waiting to be triaged.

Thanks to our sponsors

The new sponsors of the month are in bold.


18 May, 2015 09:58AM by Raphaël Hertzog

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

random 0.2.4

A new release of our random package for truly (hardware-based) random numbers as provided by random.org is now on CRAN.

The R 3.2.0 release brought the change to use an internal method="libcurl", which we use if available; otherwise the curl::curl() method added in release 0.2.3 is used. We are also a little more explicit about closing connections, and added really basic regression tests -- as it is hard to write tests for draws from a hardware-based RNG.

Courtesy of CRANberries comes a diffstat report for this release. Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 May, 2015 03:02AM

May 17, 2015

hackergotchi for Lunar

Lunar

Reproducible builds: week 3 in Stretch cycle

What happened about the reproducible builds effort for this week:

Toolchain fixes

Tomasz Buchert submitted a patch to fix the currently overzealous package-contains-timestamped-gzip warning.

Daniel Kahn Gillmor identified #588746 as a source of unreproducibility for packages using python-support.

Packages fixed

The following 57 packages became reproducible due to changes in their build dependencies: antlr-maven-plugin, aspectj-maven-plugin, build-helper-maven-plugin, clirr-maven-plugin, clojure-maven-plugin, cobertura-maven-plugin, coinor-ipopt, disruptor, doxia-maven-plugin, exec-maven-plugin, gcc-arm-none-eabi, greekocr4gamera, haskell-swish, jarjar-maven-plugin, javacc-maven-plugin, jetty8, latexml, libcgi-application-perl, libnet-ssleay-perl, libtest-yaml-valid-perl, libwiki-toolkit-perl, libwww-csrf-perl, mate-menu, maven-antrun-extended-plugin, maven-antrun-plugin, maven-archiver, maven-bundle-plugin, maven-clean-plugin, maven-compiler-plugin, maven-ear-plugin, maven-install-plugin, maven-invoker-plugin, maven-jar-plugin, maven-javadoc-plugin, maven-processor-plugin, maven-project-info-reports-plugin, maven-replacer-plugin, maven-resources-plugin, maven-shade-plugin, maven-site-plugin, maven-source-plugin, maven-stapler-plugin, modello-maven-plugin1.4, modello-maven-plugin, munge-maven-plugin, ocaml-bitstring, ocr4gamera, plexus-maven-plugin, properties-maven-plugin, ruby-magic, ruby-mocha, sisu-maven-plugin, syncache, vdk2, wvstreams, xml-maven-plugin, xmlbeans-maven-plugin.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Ben Hutchings also improved and merged several changes submitted by Lunar to linux.

Currently untested because in contrib:

  • Dmitry Smirnov uploaded fheroes2-pkg/0+svn20150122r3274-2-2.

reproducible.debian.net

“Thanks to the reproducible-build team for running a buildd from hell.” — gregor herrmann

Mattia Rizzolo modified the script added last week to reschedule a package from Alioth; a reason can now optionally be specified.

Holger Levsen split the package sets page so each set now has its own page. He also added new sets for Java packages, Haskell packages, Ruby packages, debian-installer packages, Go packages, and OCaml packages.

Reiner Herrmann added locales-all to the set of packages installed in the build environment, as it is needed to properly identify variations due to the current locale.

Holger Levsen improved the scheduling so new uploads get tested sooner. He also changed the .json output that is used by tracker.debian.org to list FTBFS issues again, but only issues unrelated to the toolchain or our test setup. Amongst many other small fixes and additions, the graph colors should now be more friendly to red-colorblind people.

The fix for pbuilder given in #677666 by Tim Landscheidt is now used. This fixed several FTBFS for OCaml packages.

Work on rebuilding with a different CPU has continued; a “kvm-on-kvm” build host has been set up for this purpose.

debbindiff development

Version 19 of debbindiff included a fix for a regression when handling info files.

Version 20 fixes a bug when diffing files with many differences leading up to a final line with no newline. It also now uses the proper encoding when writing the text output to a pipe, and detects info files better.

Documentation update

Thanks to Santiago Vila, the unneeded -depth option used with find when fixing mtimes has been removed from the examples.
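For reference, the fixed recipe goes along these lines (a sketch; debian/tmp stands in for whatever directory needs its timestamps clamped):

# clamp mtimes newer than the changelog date back to that date
find debian/tmp -newermt "$(dpkg-parsechangelog -SDate)" -print0 | \
  xargs -0r touch --no-dereference --date="$(dpkg-parsechangelog -SDate)"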

Package reviews

113 obsolete reviews have been removed this week while 77 have been added.

17 May, 2015 10:16PM

hackergotchi for Andrew Pollock

Andrew Pollock

[debian] Fixing some issues with changelogs.debian.net

I got an email last year pointing out a cosmetic issue with changelogs.debian.net. I think at the time of the email, the only problem was some bitrot in PHP's built-in server variables making some text appear incorrectly.

I duly added something to my TODO list to fix it, and it subsequently sat there for like 13 months. In the ensuing time, Debian changed some stuff, and my code started incorrectly handling a 302 as well, which actually broke it good and proper.

I finally got around to fixing it.

I also fixed a problem where sometimes there can be multiple entries in the Sources file for a package (switching to using api.ftp-master.debian.org would also address this), which sometimes caused an incorrect version of the changelog to be returned.

In the resulting tinkering, I learned about api.ftp-master.debian.org, which is totally awesome. I could stop maintaining and parsing a local copy of sid's Sources file, and just make a call to this instead.
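For instance, a madison-style query (the same endpoint rmadison can use, if I understand its configuration correctly; treat the exact parameters as a sketch):

# prints rmadison-style "package | version | suite | architectures" lines
curl 'https://api.ftp-master.debian.org/madison?package=hello&text=on'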

Finally, I added linking to CVEs, because it was a quick thing to do, and adds value.

In light of api.ftp-master.debian.org, I'm very tempted to rewrite the redirector. The code is very old and hard for present-day Andrew to maintain, and I despise PHP. I'd rather write it in Python today, with some proper test coverage. I could also potentially host it on AppEngine instead of locally, just so I get some experience with AppEngine.

It's also been suggested that I fold the changes into the changelog hosting on ftp-master.debian.org. I'm hesitant to do this, as it would require changing the output from plain text to HTML, which would mess up consumers of the plain text (like the current implementation of changelogs.debian.net).

17 May, 2015 02:42PM

hackergotchi for Lunar

Lunar

Reproducible builds: week 2 in Stretch cycle

What happened in the reproducible builds effort this week:

Media coverage

Debian's effort on reproducible builds has been covered in the June 2015 issue of Linux Magazin in Germany.

Cover of Linux Magazin June 2015

Article about reproducible builds in Linux Magazin June 2015

Toolchain fixes

  • gregor herrmann uploaded libextutils-depends-perl/0.404-1 which makes its output deterministic.
  • Christian Hofstaedtler uploaded yard/0.8.7.4-2 which does not write timestamps in the generated documentation anymore. Original patch by Chris Lamb.
  • Emmanuel Bourg uploaded maven-plugin-tools/3.3-2 which removes the date from the plugin descriptor. Patch by Reiner Herrmann.
  • Emmanuel Bourg uploaded maven-archiver/2.6-1 which now uses the date set in the DEB_CHANGELOG_DATETIME environment variable for the timestamp in the pom.properties file embedded in the jar files. Original patch by Chris West.
  • Nicolas Boulenguez uploaded dh-ada-library/6.4 which will warn about non-deterministic ALI files for sources newer than the changelog.

josch rebased the experimental version of debhelper on 9.20150507.

Packages fixed

The following 515 packages became reproducible due to changes of their build dependencies: airport-utils, airspy-host, all-in-one-sidebar, ampache, aptfs, arpack, asciio, aspell-kk, asused, balance, batmand, binutils-avr, bioperl, bpm-tools, c2050, cakephp-instaweb, carton, cbp2make, checkbot, checksecurity, chemeq, chronicle, cube2-data, cucumber, darkstat, debci, desktop-file-utils, dh-linktree, django-pagination, dosbox, eekboek, emboss-explorer, encfs, exabgp, fbasics, fife, fonts-lexi-saebom, gdnsd, glances, gnome-clocks, gunicorn, haproxy, haskell-aws, haskell-base-unicode-symbols, haskell-base64-bytestring, haskell-basic-prelude, haskell-binary-shared, haskell-binary, haskell-bitarray, haskell-bool-extras, haskell-boolean, haskell-boomerang, haskell-bytestring-lexing, haskell-bytestring-mmap, haskell-config-value, haskell-mueval, haskell-tasty-kat, itk3, jnr-constants, jshon, kalternatives, kdepim-runtime, kdevplatform, kwalletcli, lemonldap-ng, libalgorithm-combinatorics-perl, libalgorithm-diff-xs-perl, libany-uri-escape-perl, libanyevent-http-scopedclient-perl, libanyevent-perl, libanyevent-processor-perl, libapache-session-wrapper-perl, libapache-sessionx-perl, libapp-options-perl, libarch-perl, libarchive-peek-perl, libaudio-flac-header-perl, libaudio-wav-perl, libaudio-wma-perl, libauth-yubikey-decrypter-perl, libauthen-krb5-simple-perl, libauthen-simple-perl, libautobox-dump-perl, libb-keywords-perl, libbarcode-code128-perl, libbio-das-lite-perl, libbio-mage-perl, libbrowser-open-perl, libbusiness-creditcard-perl, libbusiness-edifact-interchange-perl, libbusiness-isbn-data-perl, libbusiness-tax-vat-validation-perl, libcache-historical-perl, libcache-memcached-perl, libcairo-gobject-perl, libcarp-always-perl, libcarp-fix-1-25-perl, libcatalyst-action-serialize-data-serializer-perl, libcatalyst-controller-formbuilder-perl, libcatalyst-dispatchtype-regex-perl, libcatalyst-plugin-authentication-perl, libcatalyst-plugin-authorization-acl-perl, libcatalyst-plugin-session-store-cache-perl, libcatalyst-plugin-session-store-fastmmap-perl, libcatalyst-plugin-static-simple-perl, libcatalyst-view-gd-perl, libcgi-application-dispatch-perl, libcgi-application-plugin-authentication-perl, libcgi-application-plugin-logdispatch-perl, libcgi-application-plugin-session-perl, libcgi-application-server-perl, libcgi-compile-perl, libcgi-xmlform-perl, libclass-accessor-classy-perl, libclass-accessor-lvalue-perl, libclass-accessor-perl, libclass-c3-adopt-next-perl, libclass-dbi-plugin-type-perl, libclass-field-perl, libclass-handle-perl, libclass-load-perl, libclass-ooorno-perl, libclass-prototyped-perl, libclass-returnvalue-perl, libclass-singleton-perl, libclass-std-fast-perl, libclone-perl, libconfig-auto-perl, libconfig-jfdi-perl, libconfig-simple-perl, libconvert-basen-perl, libconvert-ber-perl, libcpan-checksums-perl, libcpanplus-dist-build-perl, libcriticism-perl, libcrypt-cracklib-perl, libcrypt-dh-gmp-perl, libcrypt-mysql-perl, libcrypt-passwdmd5-perl, libcrypt-simple-perl, libcss-packer-perl, libcss-tiny-perl, libcurses-widgets-perl, libdaemon-control-perl, libdancer-plugin-database-perl, libdancer-session-cookie-perl, libdancer2-plugin-database-perl, libdata-format-html-perl, libdata-uuid-libuuid-perl, libdata-validate-domain-perl, libdate-jd-perl, libdate-simple-perl, libdatetime-astro-sunrise-perl, libdatetime-event-cron-perl, libdatetime-format-dbi-perl, libdatetime-format-epoch-perl, libdatetime-format-mail-perl, libdatetime-tiny-perl, libdatrie, libdb-file-lock-perl, 
libdbd-firebird-perl, libdbix-abstract-perl, libdbix-class-datetime-epoch-perl, libdbix-class-dynamicdefault-perl, libdbix-class-introspectablem2m-perl, libdbix-class-timestamp-perl, libdbix-connector-perl, libdbix-oo-perl, libdbix-searchbuilder-perl, libdbix-xml-rdb-perl, libdevel-stacktrace-ashtml-perl, libdigest-hmac-perl, libdist-zilla-plugin-emailnotify-perl, libemail-date-format-perl, libemail-mime-perl, libemail-received-perl, libemail-sender-perl, libemail-simple-perl, libencode-detect-perl, libexporter-tidy-perl, libextutils-cchecker-perl, libextutils-installpaths-perl, libextutils-libbuilder-perl, libextutils-makemaker-cpanfile-perl, libextutils-typemap-perl, libfile-counterfile-perl, libfile-pushd-perl, libfile-read-perl, libfile-touch-perl, libfile-type-perl, libfinance-bank-ie-permanenttsb-perl, libfont-freetype-perl, libfrontier-rpc-perl, libgd-securityimage-perl, libgeo-coordinates-utm-perl, libgit-pureperl-perl, libgnome2-canvas-perl, libgnome2-wnck-perl, libgraph-readwrite-perl, libgraphics-colornames-www-perl, libgssapi-perl, libgtk2-appindicator-perl, libgtk2-gladexml-simple-perl, libgtk2-notify-perl, libhash-asobject-perl, libhash-moreutils-perl, libhtml-calendarmonthsimple-perl, libhtml-display-perl, libhtml-fillinform-perl, libhtml-form-perl, libhtml-formhandler-model-dbic-perl, libhtml-html5-entities-perl, libhtml-linkextractor-perl, libhtml-tableextract-perl, libhtml-widget-perl, libhtml-widgets-selectlayers-perl, libhtml-wikiconverter-mediawiki-perl, libhttp-async-perl, libhttp-body-perl, libhttp-date-perl, libimage-imlib2-perl, libimdb-film-perl, libimport-into-perl, libindirect-perl, libio-bufferedselect-perl, libio-compress-lzma-perl, libio-compress-perl, libio-handle-util-perl, libio-interface-perl, libio-multiplex-perl, libio-socket-inet6-perl, libipc-system-simple-perl, libiptables-chainmgr-perl, libjoda-time-java, libjsr305-java, libkiokudb-perl, liblemonldap-ng-cli-perl, liblexical-var-perl, liblingua-en-fathom-perl, liblinux-dvb-perl, liblocales-perl, liblog-dispatch-configurator-any-perl, liblog-log4perl-perl, liblog-report-lexicon-perl, liblwp-mediatypes-perl, liblwp-protocol-https-perl, liblwpx-paranoidagent-perl, libmail-sendeasy-perl, libmarc-xml-perl, libmason-plugin-routersimple-perl, libmasonx-processdir-perl, libmath-base85-perl, libmath-basecalc-perl, libmath-basecnv-perl, libmath-bigint-perl, libmath-convexhull-perl, libmath-gmp-perl, libmath-gradient-perl, libmath-random-isaac-perl, libmath-random-oo-perl, libmath-random-tt800-perl, libmath-tamuanova-perl, libmemoize-expirelru-perl, libmemoize-memcached-perl, libmime-base32-perl, libmime-lite-tt-perl, libmixin-extrafields-param-perl, libmock-quick-perl, libmodule-cpanfile-perl, libmodule-load-conditional-perl, libmodule-starter-pbp-perl, libmodule-util-perl, libmodule-versions-report-perl, libmongodbx-class-perl, libmoo-perl, libmoosex-app-cmd-perl, libmoosex-attributehelpers-perl, libmoosex-blessed-reconstruct-perl, libmoosex-insideout-perl, libmoosex-relatedclassroles-perl, libmoosex-role-timer-perl, libmoosex-role-withoverloading-perl, libmoosex-storage-perl, libmoosex-types-common-perl, libmoosex-types-uri-perl, libmoox-singleton-perl, libmoox-types-mooselike-numeric-perl, libmousex-foreign-perl, libmp3-tag-perl, libmysql-diff-perl, libnamespace-clean-perl, libnet-bonjour-perl, libnet-cli-interact-perl, libnet-daap-dmap-perl, libnet-dbus-glib-perl, libnet-dns-perl, libnet-frame-perl, libnet-google-authsub-perl, libnet-https-any-perl, libnet-https-nb-perl, libnet-idn-encode-perl, 
libnet-idn-nameprep-perl, libnet-imap-client-perl, libnet-irc-perl, libnet-mac-vendor-perl, libnet-openid-server-perl, libnet-smtp-ssl-perl, libnet-smtp-tls-perl, libnet-smtpauth-perl, libnet-snpp-perl, libnet-sslglue-perl, libnet-telnet-perl, libnhgri-blastall-perl, libnumber-range-perl, libobject-signature-perl, libogg-vorbis-header-pureperl-perl, libopenoffice-oodoc-perl, libparse-cpan-packages-perl, libparse-debian-packages-perl, libparse-fixedlength-perl, libparse-syslog-perl, libparse-win32registry-perl, libpdf-create-perl, libpdf-report-perl, libperl-destruct-level-perl, libperl-metrics-simple-perl, libperl-minimumversion-perl, libperl6-slurp-perl, libpgobject-simple-perl, libplack-middleware-fixmissingbodyinredirect-perl, libplack-test-externalserver-perl, libplucene-perl, libpod-tests-perl, libpoe-component-client-ping-perl, libpoe-component-jabber-perl, libpoe-component-resolver-perl, libpoe-component-server-soap-perl, libpoe-component-syndicator-perl, libposix-strftime-compiler-perl, libposix-strptime-perl, libpostscript-simple-perl, libproc-processtable-perl, libprotocol-osc-perl, librcs-perl, libreadonly-xs-perl, libreturn-multilevel-perl, librivescript-perl, librouter-simple-perl, librrd-simple-perl, libsafe-isa-perl, libscope-guard-perl, libsemver-perl, libset-tiny-perl, libsharyanto-file-util-perl, libshell-command-perl, libsnmp-info-perl, libsoap-lite-perl, libstat-lsmode-perl, libstatistics-online-perl, libstring-compare-constanttime-perl, libstring-format-perl, libstring-toidentifier-en-perl, libstring-tt-perl, libsub-recursive-perl, libsvg-tt-graph-perl, libsvn-notify-perl, libswish-api-common-perl, libtap-formatter-junit-perl, libtap-harness-archive-perl, libtemplate-plugin-number-format-perl, libtemplate-plugin-yaml-perl, libtemplate-tiny-perl, libtenjin-perl, libterm-visual-perl, libtest-block-perl, libtest-carp-perl, libtest-classapi-perl, libtest-cmd-perl, libtest-consistentversion-perl, libtest-data-perl, libtest-databaserow-perl, libtest-differences-perl, libtest-file-sharedir-perl, libtest-hasversion-perl, libtest-kwalitee-perl, libtest-lectrotest-perl, libtest-module-used-perl, libtest-object-perl, libtest-perl-critic-perl, libtest-pod-coverage-perl, libtest-script-perl, libtest-script-run-perl, libtest-spelling-perl, libtest-strict-perl, libtest-synopsis-perl, libtest-trap-perl, libtest-unit-perl, libtest-utf8-perl, libtest-without-module-perl, libtest-www-selenium-perl, libtest-xml-simple-perl, libtest-yaml-perl, libtex-encode-perl, libtext-bibtex-perl, libtext-csv-encoded-perl, libtext-csv-perl, libtext-dhcpleases-perl, libtext-diff-perl, libtext-quoted-perl, libtext-trac-perl, libtext-vfile-asdata-perl, libthai, libthread-conveyor-perl, libthread-sigmask-perl, libtie-cphash-perl, libtie-ical-perl, libtime-stopwatch-perl, libtk-dirselect-perl, libtk-pod-perl, libtorrent, libturpial, libunicode-japanese-perl, libunicode-maputf8-perl, libunicode-stringprep-perl, libuniversal-isa-perl, libuniversal-moniker-perl, liburi-encode-perl, libvi-quickfix-perl, libvideo-capture-v4l-perl, libvideo-fourcc-info-perl, libwiki-toolkit-plugin-rss-reader-perl, libwww-mechanize-formfiller-perl, libwww-mechanize-gzip-perl, libwww-mechanize-perl, libwww-opensearch-perl, libx11-freedesktop-desktopentry-perl, libxc, libxml-dtdparser-perl, libxml-easy-perl, libxml-handler-trees-perl, libxml-libxml-iterator-perl, libxml-libxslt-perl, libxml-rss-perl, libxml-validator-schema-perl, libxml-xpathengine-perl, libxml-xql-perl, llvm-py, madbomber, makefs, mdpress, media-player-info, 
meta-kde-telepathy, metamonger, mmm-mode, mupen64plus-audio-sdl, mupen64plus-rsp-hle, mupen64plus-ui-console, mupen64plus-video-z64, mussort, newpid, node-formidable, node-github-url-from-git, node-transformers, nsnake, odin, otcl, parsley, pax, pcsc-perl, pd-purepd, pen, prank, proj, proot, puppet-module-puppetlabs-postgresql, python-async, python-pysnmp4, qrencode, r-bioc-graph, r-bioc-hypergraph, r-bioc-iranges, r-bioc-xvector, r-cran-pscl, rbenv, rlinetd, rs, ruby-ascii85, ruby-cutest, ruby-ejs, ruby-factory-girl, ruby-hdfeos5, ruby-kpeg, ruby-libxml, ruby-password, ruby-zip-zip, sdl-sound1.2, stterm, systemd, taktuk, tcc, tryton-modules-account-invoice, ttf-summersby, tupi, tuxpuck, unknown-horizons, unsafe-mock, vcheck, versiontools, vim-addon-manager, vlfeat, vsearch, xacobeo, xen-tools, yubikey-personalization-gui, yubikey-personalization.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

  • #784541 on yasm by Lunar: remove build date from version strings.
  • #784694 on smcroute by Micha Lenk: remove build date from version string.
  • #784672 on gnumeric by Daniel Kahn Gillmor: remove timestamps in embedded gzip'ed data in shared library.
  • #774347 on sed by Lunar: fix permissions before creating the package.
  • #784352 on icebreaker by Reiner Herrmann: use UTC timezone when calculating version date.
  • #784325 on kde-workspace by Lunar: make the output of kdm confproc.pl stable.
  • #784602 on monkeysign by Daniel Kahn Gillmor: use time of debian/changelog entry when generating documentation.
  • #784723 on alot by Juan Picca: pass time of debian/changelog entry to Sphinx.
  • #784538 on file-rc by Lunar: use sed instead of grep+mv to keep correct file permissions.
  • #784335 on libapache2-mod-perl2 by Lunar: set PERL_HASH_SEED=0 during configure to make the generated .c and .h files stable.
  • #784267 on mpv by Lunar: pass --disable-build-date to ./configure.
  • #784793 on bugs-everywhere by Daniel Kahn Gillmor: use time of debian/changelog entry as build date.
  • #784318 on gnome-desktop3 by Lunar: use time of debian/changelog entry as build date (a sketch of this technique follows this list).
  • #774504 on debianutils by Lunar: fix file permissions.
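
Several of the patches above share the same technique: derive a deterministic timestamp from the latest debian/changelog entry instead of the wall clock. A minimal sketch of it, assuming a dpkg recent enough to support --show-field:

# take the date of the newest changelog entry and normalise it to UTC
CHANGELOG_DATE=$(dpkg-parsechangelog --show-field Date)
BUILD_DATE=$(date --utc --date="$CHANGELOG_DATE" +%Y-%m-%dT%H:%M:%SZ)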

reproducible.debian.net

Alioth now hosts a script that can be used to redo builds and tests for a package. This was previously done manually through requests over the IRC channel. This should reduce the number of interruptions for jenkins' maintainers.

The graph of the oldest build per day has been fixed. Maintenance scripts will no longer error out when there are no files to remove.

Holger Levsen started work on being able to test variations of CPU features and build date (as in build in another month of 1984) by using virtual machines.

debbindiff development

Version 18 has been released. It now uses proper comparators for pk3 and info files. Tar member names are now assumed to be UTF-8 encoded.

The limit for the maximum number of different lines has been removed. Let's see on reproducible.debian.net how it goes for pathological cases.

It's now possible to specify both --html and --text output. When neither of them is specified, the default will be to print a text report on the standard output (thanks to Paul Wise for the suggestion).

Documentation update

Nicolas Boulenguez investigated Ada libraries.

Package reviews

451 obsolete reviews have been removed and 156 added this week.

New identified issues: running kernel version getting captured, random filenames in GHC debug symbols, and timestamps in headers generated by qdbusxml2cpp.

Misc.

Holger Levsen went to re:publica and talked about reproducible builds to developers and users there.

Holger also had a chance to meet FreeBSD developers and discuss the status of reproducible builds on FreeBSD. Investigations have started on how it could be made part of our current test system.

Laurent Guerby gave Lunar access to systems in the GCC Compile Farm. Hopefully access to these powerful machines will help to fix packages like GCC, Iceweasel, and similar packages requiring long build times.

17 May, 2015 12:00PM

May 16, 2015

Ian Campbell

A vhosting git setup with gitolite and gitweb

Since gitorious' shutdown I decided it was time to start hosting my own git repositories for my own little projects (although the company which took over gitorious has a Free software offering it seems that their hosted offering is based on the proprietary version, and in any case once bitten, twice shy and all that).

After a bit of investigation I settled on using gitolite and gitweb. I did consider (and even had a vague preference for) cgit, but it wasn't available in Wheezy (not even in backports, and the backport looked tricky) and I haven't upgraded my VPS yet. I may reconsider cgit once I switch to Jessie.

The only wrinkle was that my VPS is shared with a friend and I didn't want to completely take over the gitolite and gitweb namespaces in case he ever wanted to setup git.hisdomain.com, so I needed something which was at least somewhat compatible with vhosting. gitolite doesn't appear to support such things out of the box but I found an interesting/useful post from Julius Plenz which included sufficient inspiration that I thought I knew what to do.

After a bit of trial and error here is what I ended up with:

Install gitolite

The gitolite website has plenty of documentation on configuring gitolite. But since gitolite is in Debian, it's even more trivial than even the quick install makes out.

I decided to use the newer gitolite3 package from wheezy-backports instead of the gitolite (v2) package from Wheezy. I already had backports enabled so this was just:

# apt-get install gitolite3/wheezy-backports

I accepted the defaults and gave it the public half of the ssh key which I had created to be used as the gitolite admin key.

By default this added a user gitolite3 with a home directory of /var/lib/gitolite3. Since the username forms part of the URL used to access the repositories, I didn't want it to include the 3, so I edited /etc/passwd, /etc/group, /etc/shadow and /etc/gshadow to say just gitolite, while leaving the home directory as gitolite3.

Now I could clone the gitolite-admin repo and begin to configure things.

Add my user

This was as simple as dropping the public half into the gitolite-admin repo as keydir/ijc.pub, then git add, commit and push.

Setup vhosting

Between the gitolite docs and Julius' blog post I had a pretty good idea what I wanted to do here.

I wasn't too worried about making the vhost transparent from the developer's (ssh:// URL) point of view, just from the gitweb and git clone side. So I decided to adapt things to use a simple $VHOST/$REPO.git schema.

I created /var/lib/gitolite3/local/lib/Gitolite/Triggers/VHost.pm containing:

package Gitolite::Triggers::VHost;

use strict;
use warnings;

use File::Slurp qw(read_file write_file);

sub post_compile {
    my %vhost = ();
    my @projlist = read_file("$ENV{HOME}/projects.list");
    for my $proj (sort @projlist) {
        $proj =~ m,^([^/\.]*\.[^/]*)/(.*)$, or next;
        my ($host, $repo) = ($1,$2);
        $vhost{$host} //= [];
        push @{$vhost{$host}} => $repo;
    }
    for my $v (keys %vhost) {
        write_file("$ENV{HOME}/projects.$v.list",
                   { atomic => 1 }, join("\n",@{$vhost{$v}}));
    }
}
1;

I then edited /var/lib/gitolite3/.gitolite.rc and ensured it contained:

LOCAL_CODE                =>  "$ENV{HOME}/local",

POST_COMPILE => [ 'VHost::post_compile', ],

(The first I had to uncomment, the second to add).

All this trigger does is take the global projects.list, in which gitolite will list any repo which is configured to be accessible via gitweb, and split it into several vhost specific lists.

Create first repository

Now that the basics were in place I could create my first repository (for hosting qcontrol).

In the gitolite-admin repository I edited conf/gitolite.conf and added:

repo hellion.org.uk/qcontrol
    RW+     =   ijc

After adding, committing and pushing I now have "/var/lib/gitolite3/projects.list" containing:

hellion.org.uk/qcontrol.git
testing.git

(the testing.git repository is configured by default) and /var/lib/gitolite3/projects.hellion.org.uk.list containing just:

qcontrol.git

For cloning the URL is:

gitolite@${VPSNAME}:hellion.org.uk/qcontrol.git

which is rather verbose (${VPSNAME} is quite long in my case too), so to simplify things I added to my .ssh/config:

Host gitolite
Hostname ${VPSNAME}
User gitolite
IdentityFile ~/.ssh/id_rsa_gitolite

so I can instead use:

gitolite:hellion.org.uk/qcontrol.git

which is a bit less of a mouthful and almost readable.

Configure gitweb (http:// URL browsing)

Following the documentation's advice I edited /var/lib/gitolite3/.gitolite.rc to set:

UMASK                           =>  0027,

and then:

$ chmod -R g+rX /var/lib/gitolite3/repositories/*

Which arranges that members of the gitolite group can read anything under /var/lib/gitolite3/repositories/*.

Then:

# adduser www-data gitolite

This adds the user www-data to the gitolite group so it can take advantage of those relaxed permissions. I'm not super happy about this but since gitweb runs as www-data:www-data this seems to be the recommended way of doing things. I'm consoling myself with the fact that I don't plan on hosting anything sensitive... I also arranged things such that members of the groups can only list the contents of directories from the vhost directory down by setting g=x not g=rx on higher level directories. Potentially sensitive files do not have group permissions at all either.
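
For the record, a minimal sketch of that permission arrangement (paths as in my setup):

# the gitolite group may read everything below the vhost directory...
chmod -R g+rX /var/lib/gitolite3/repositories/hellion.org.uk
# ...but may only traverse, not list, the directories above it
chmod g=x /var/lib/gitolite3 /var/lib/gitolite3/repositories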

Next I created /etc/apache2/gitolite-gitweb.conf:

die unless $ENV{GIT_PROJECT_ROOT};
$ENV{GIT_PROJECT_ROOT} =~ m,^.*/([^/]+)$,;
our $gitolite_vhost = $1;
our $projectroot = $ENV{GIT_PROJECT_ROOT};
our $projects_list = "/var/lib/gitolite3/projects.${gitolite_vhost}.list";
our @git_base_url_list = ("http://git.${gitolite_vhost}");

This extracts the vhost name from ${GIT_PROJECT_ROOT} (it must be the last element) and uses it to select the appropriate vhost specific projects.list.

Then I added a new vhost to my apache2 configuration:

<VirtualHost 212.110.190.137:80 [2001:41c8:1:628a::89]:80>
        ServerName git.hellion.org.uk
        SetEnv GIT_PROJECT_ROOT /var/lib/gitolite3/repositories/hellion.org.uk
        SetEnv GITWEB_CONFIG /etc/apache2/gitolite-gitweb.conf
        Alias /static /usr/share/gitweb/static
        ScriptAlias / /usr/share/gitweb/gitweb.cgi/
</VirtualHost>

This configures git.hellion.org.uk (don't forget to update DNS too) and sets the appropriate environment variables to find the custom gitolite-gitweb.conf and the project root.

Next I edited /var/lib/gitolite3/.gitolite.rc again to set:

GIT_CONFIG_KEYS                 => 'gitweb\.(owner|description|category)',

Now I can edit the repo configuration to be:

repo hellion.org.uk/qcontrol
    owner   =   Ian Campbell
    desc    =   qcontrol
    RW+     =   ijc
    R       =   gitweb

That R permission for the gitweb pseudo-user causes the repo to be listed in the global projects.list and the trigger which we've added causes it to be listed in projects.hellion.org.uk.list, which is where our custom gitolite-gitweb.conf will look.

Setting GIT_CONFIG_KEYS allows those options (owner and desc are syntactic sugar for two of them) to be set here and propagated to the actual repo.

Configure git-http-backend (http:// URL cloning)

After all that this was pretty simple. I just added this to my vhost before the ScriptAlias / /usr/share/gitweb/gitweb.cgi/ line:

        ScriptAliasMatch \
                "(?x)^/(.*/(HEAD | \
                                info/refs | \
                                objects/(info/[^/]+ | \
                                         [0-9a-f]{2}/[0-9a-f]{38} | \
                                         pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                                git-(upload|receive)-pack))$" \
                /usr/lib/git-core/git-http-backend/$1

This (which I stole straight from the git-http-backend(1) manpage) causes anything which git-http-backend should deal with to be sent there, and everything else to be sent to gitweb.

Having done that access is enabled by editing the repo configuration one last time to be:

repo hellion.org.uk/qcontrol
    owner   =   Ian Campbell
    desc    =   qcontrol
    RW+     =   ijc
    R       =   gitweb daemon

Adding R permissions for daemon causes gitolite to drop a stamp file in the repository which tells git-http-backend that it should export it.

Configure git daemon (git:// URL cloning)

I actually didn't bother with this: git-http-backend supports the smart HTTP mode, which should be as efficient as the git protocol. Given that, I couldn't see any reason to run another network-facing daemon on my VPS.

FWIW it looks like vhosting could have been achieved by using the --interpolated-path option.

Conclusion

There are quite a few moving parts, but they all seem to fit together quite nicely. In the end, apart from adding www-data to the gitolite group, I'm pretty happy with how things ended up.

16 May, 2015 05:52PM

hackergotchi for Holger Levsen

Holger Levsen

20150516-lts-march-and-april

My LTS March and April

In March and April 2015 I sadly didn't get much LTS work done, for a variety of reasons. Most of these reasons make me happy, while at the same time I'm sad I had to reduce my LTS involvement; actually I even failed to use those few hours which were assigned to me. So I'll keep this blog post short too, as time is my most precious resource atm.

In March I only sponsored the axis upload and wrote DLA-169-1; besides that I spent some hours implementing JSON output for the security-tracker, which was more difficult than anticipated, because a.) different people had different (at first unspoken) assumptions about what output they wanted and b.) since the security-tracker's database schema has grown over the years, getting the data out in a logically structured fashion ain't as easy as one would imagine...

In April I sponsored the openldap upload and wrote DLA-203-1 and then prepared debian-security-support 2015.04.04~~deb6u1 for squeeze-lts and triaged some of d-s-s's bugs. Adding support for oldoldstable (and thus keeping support for squeeze-lts) to the security-tracker was my joyful contribution for the very joyful Jessie release day.

So in total I only spent 7.5 (paid) hours in these two months on LTS, even though I should have spent 10. The only thing I can say in my defense is that I've spent more time on LTS (supporting both users and contributors, on the list as well as on IRC) but this time ain't billable. Which I think is right, but it still eats from my "LTS time" and so sometimes I wish I could more easily ignore people and just concentrate on technical fixes...

16 May, 2015 04:56PM

Craig Small

Debian, WordPress and Multi-site

For quite some time, the Debian version of WordPress has had a configuration tweak that made it possible to run multiple websites on the same server. This came from a while ago when multi-site wasn’t available. While a useful feature, it does make the initial setup of WordPress for simple sites more complicated.

I’m looking at changing the Debian package slightly so that for a single-site use it Just Works. I have also looked into the way WordPress handles the content, especially themes and plugins, to see if there is a way of updating them through the website itself. This probably won’t suit everyone but I think its a better default.

The idea will be to set up the Debian packages something like this by default, and if you want fancier stuff it's all still there, just not set up. The current default is a little confusing, which I hope to change.

Multisite

The first step was to get my pair of websites into one. So first it was backing up time and then the removal of my config-websitename.php files in /etc/wordpress. I created a single /etc/wordpress/config-default.php file that used a new database.  This initial setup worked ok and I had the primary site going reasonably quickly.

The second site was a little trickier. The problem is that multisite does things like foo.example.com and bar.example.com, while I wanted example.com and somethingelse.com. There is a plugin, wordpress-mu-domain-mapping, that almost sorta-kinda works. While it let me make the second site with a different name, it didn't like aliases, especially if the alias was the first site.

Some evil SQL fixed that nicely: “UPDATE wp_domain_mapping SET blog_id=1 WHERE id=2”.

So now I had:

  • enc.com.au as my primary site
  • rnms.org as a second site
  • dropbear.xyz as an alias for my primary site

Files and Permissions

We really have three separate sets of files in WordPress. These files come from three different sources and are updated in three different ways, each with a different release cycle.

The first is the wordpress code which is shipped in the Debian package. All of this code lives in /usr/share/wordpress and is only changed if you update the Debian package, or you fiddle around with it. It needs to be readable to the webserver but not writable. The config files in /etc/wordpress are in this lot too.

Secondly, we have the user-generated data. This is things like the pictures that you add to the blog. As they are uploaded through the webserver, the directory needs to be writable by it. These files are located in /var/lib/wordpress/wp-content/uploads

Third is the plugins and themes. These can either be unzipped and placed into a directory, or loaded directly through the webserver. I used to do it the first way but am trying the second. These files are located in /var/lib/wordpress/wp-content

Ever tried to update your plugins and got the FTP prompt? This is because the wp-content directory is not writable. I adjusted the permissions and now, when a plugin wants to update, I click yes and it magically happens!

You will have to reference the /var/lib/wordpress/wp-content subdirectory in two places:

  • In your /etc/wordpress/config-default.php: the WP_CONTENT_DIR definition
  • In apache or htaccess: either a symlink out of /usr/share/wordpress with FollowSymLinks turned on, or an apache Alias that also permits access (see the sketch after this list).
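
A sketch of those two pieces, assuming the paths above (the Apache stanza is 2.4 syntax; 2.2 would use Order/Allow directives instead):

# in /etc/wordpress/config-default.php:
define('WP_CONTENT_DIR', '/var/lib/wordpress/wp-content');

# in the Apache virtual host:
Alias /wp-content /var/lib/wordpress/wp-content
<Directory /var/lib/wordpress/wp-content>
    Require all granted
</Directory>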

What broke

Images did, in a strange way. My media library is empty, but my images are still there. Something in the export and reimport did not work. For me it's a minor inconvenience, due to moving from one system to another, but it is still there.

16 May, 2015 08:07AM by Craig

hackergotchi for Rogério Brito

Rogério Brito

A Small Python Project (coursera-dl) Activites

Lately, I have been dedicating a lot of my time (well, at least compared to what I used to) to Free Software projects. In particular, I have spent a moderate amount of time with two projects written in Python.

In this post, I want to talk about the first, more popular project, which is called coursera-dl. To be honest, I think that I may have devoted much more time to it than to any other project in particular.

With it I started to learn (besides the practices that I already used in Debian) how to program in Python, how to use unit tests (I started with Python's built-in unittest framework, then progressed to nose, and I am now using pytest), and how to hook up the results of the tests with a continuous integration system (in this case, Travis CI).

I must say that I am sold on this idea of testing software (after being a skeptic for way too long) and I can say that I find hacking on other projects without proper testing a bit uncomfortable, since I don't know if I am breaking unrelated parts of the project.

My use/migration to pytest was the result of a campaign from pytest.org called Adopt Pytest Month, which a kind user of the project let me know about. I got a very skilled volunteer from pytest assigned to our project. Besides learning from their pull requests, one side-effect of this whole story was that I spent a moderate amount of hours trying to understand how to properly package and distribute things on PyPI.

One tip learned along the way: contrary to the official documentation, use twine, not python setup.py upload. It is more flexible for uploading your package to PyPI.

You can see the package on PyPI. Anyway, I made the first upload of the package to PyPI on the 1st of May and it already has almost 1500 downloads, which is far more than what I expected.

A word of warning: there are other similarly named projects, but they don't seem to have as much of a following as we have. A speculation from my side is that this may be, perhaps, due to me spending a lot of time interacting with users in the bug tracker that github provides.

Anyway, installation of the program is now as simple as:

pip install coursera

And all the dependencies will be neatly pulled in, without having to mess with multi-step procedures. This is a big win for the users.

Also, I even had an offer to package the program to have it available in Debian!

Well, despite all the time that this project demanded, I think that I have only good things to say, especially to the original author, John Lehmann. :)

If you like the project, please let me know, and consider yourselves invited to participate by lending a hand, testing/using the program, or triaging some bugs.

16 May, 2015 04:54AM

hackergotchi for Norbert Preining

Norbert Preining

Plex Home Theater 1.4.1 for Debian Jessie and Sid

Recently Plex Home Theater was updated to 1.4.1 with fixes for some errors, in particular concerning the new music handling introduced in 1.4.0. As with 1.4.0, I have compiled PHT for both jessie and sid, for both amd64 and i386.

plex-debian-new

Jessie

Add the following lines to your sources.list:

deb http://www.preining.info/debian/ jessie pht
deb-src http://www.preining.info/debian/ jessie pht

You can also grab the binaries directly here for amd64 and i386, and you can get the source package with

dget http://www.preining.info/debian/pool/pht/p/plexhometheater/plexhometheater_1.4.1-1~bpo8+1.dsc

Sid

Add the following lines to your sources.list:

deb http://www.preining.info/debian/ sid pht
deb-src http://www.preining.info/debian/ sid pht

You can also grab the binaries directly here for amd64 and i386, and you can get the source package with

dget http://www.preining.info/debian/pool/pht/p/plexhometheater/plexhometheater_1.4.1-1.dsc

The release file and changes file are signed with my official Debian key 0x860CDC13.

Enjoy!


16 May, 2015 03:28AM by Norbert Preining

May 15, 2015

hackergotchi for Sune Vuorela

Sune Vuorela

Getting a Q_INVOKABLE C++ function reevaluated by QML engine

Unfortunately, with all the normal magic of QML property bindings, getting a property updated in a setup that involves return values from functions isn’t really doable, like this:

Text {
    text: qtobject.calculatedValue()
}

I’m told there is a low priority feature request for a way of signalling that a function now returns a different value and all properties using it should be reevaluated.

I have so far discovered two different workarounds for that that I will be presenting here.

Using an extra property

Appending an extra property to trigger the reevaluation of the function is one way of doing it.

Text {
    text: qtobject.calculatedValue() + qtobject.emptyNotifierThing
}

with the following on the C++ side:

Q_PROPERTY(QString emptyNotifierThing READ emptyString NOTIFY valueChanged)
QString emptyString() const {
    return QString();
}

This is a bit more code to write and to remember to use, but it does get the job done.

Intermediate layer

Another way is to inject an intermediate layer, an extra object, that has the function. It can even be simplified by having a pointer to itself.

Text {
    text: qtobject.dataAccess.calculatedValue()
}

with the following on the C++ side:

Q_PROPERTY(QObject* dataAccess READ dataAccess NOTIFY valueChanged)
QObject* dataAccess() {
    return this;
}

It seems a bit simpler for the reader on the QML side, but also gets the job done.

I am not sure which way is the best one, but the intermediate layer has a nicer feeling to it when more complicated types are involved.

15 May, 2015 09:16AM by Sune Vuorela

hackergotchi for DebConf team

DebConf team

DebConf Open Weekend (Posted by DebConf Content Team )

The first two days of this year’s DebConf (August 15th and 16th) will constitute the Open Weekend. On these days, we are planning to have the Debian 22nd Birthday party, a Job Fair, and more than 20 hours of events and presentations, including some special invited speakers.

Given that we expect to have a broader and larger audience during the weekend, our goal is to have talks that are equally interesting for both Debian contributors and users.

If you want to present something that might be interesting to the larger Debian community, please go ahead and submit it. It can be for a talk of either 45 or 20 minutes; if you don’t have content for a full length talk, we encourage you to go for the half length one. If you consider that the event is better suited for either the OpenWeekend or the regular DebConf days, you may say so in the comment field. But keep in mind that all events might be rearranged by the content team to make sure they fit together nicely.

Call for proposals

The deadline to submit proposals is June 15th. Please submit your talk early with a good description and a catchy title. We look forward to seeing your proposals!

If you want to submit an event please go ahead and read the original CfP on DebConf15 http://debconf15.debconf.org/proposals.xhtml.

15 May, 2015 05:35AM by DebConf Organizers

May 14, 2015

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Drupal maintenance with Drush

Another of my articles for self. Writing it down on the website is much better than pushing it on a 3rd party social site. Your data is yours.

My site runs on Drupal. Given I'm not a web designer, it is not my core area, and thus I've always wanted minimal engagement with it. My practices have paid off well so far, and I should thank all the free tools that help with that. I like to keep a running snapshot of my website on my local laptop, to have it handy when trying anything new. This means that the setup has to be almost identical to what is running remotely.

Thanks to Drush, managing Drupal is very easy. It allows me to easily try out changes and push them from dev => staging => live without too much effort. It also helps me control the environment well. And since the whole transport is over SSH, no separate exceptions are required.
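
As an illustration, keeping the local snapshot in sync comes down to something like the following. The @live and @dev alias names are assumptions for the sketch, not my actual configuration:

drush @live sql-dump > live-backup.sql   # always snapshot first
drush sql-sync @live @dev                # pull the live database locally
drush rsync @live:%files @dev:%files     # mirror the uploaded files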

For a long time, the theme on my site had some issues: the taxonomy terms did not have proper spacing. See the bug for details.

With the fix, the taxonomy terms are now properly spaced.

I wish Drush had support for revision control. Or maybe it already has, and I need to check? Bug fixes and customizations would have been well recorded with a revision control system.


14 May, 2015 01:10PM by Ritesh Raj Sarraf

Craig Small

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start blogging!

14 May, 2015 12:10PM by mtcsmall

Raphael Geissert

On using https mirrors

On confidentiality:


So that they don't know what's inside

14 May, 2015 11:32AM by Raphael Geissert (noreply@blogger.com)

hackergotchi for MJ Ray

MJ Ray

Recorrecting Past Mistakes: Window Borders and Edges

A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.

One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice as the thing that you can locate most easily, the thing that is on the screen edge.

It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said “hc pad 0 26” (to keep enough space for the panel at the top edge) and changing that to “hc pad 0 -8 -7 26 -7” and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.

I wonder if this is a useful enough improvement that I should report it as an enhancement bug.

14 May, 2015 04:58AM by mjr

hackergotchi for Gunnar Wolf

Gunnar Wolf

Everybody seems to have an opinion on the taxis vs. Uber debate...

The discussion regarding the legality and convenience of Uber, Cabify and similar taxi-by-app services has come to Mexico City — Over the last few days, I've seen newspapers talk about taxi drivers demonstrating against said companies, early attempts at regulating their service, and so on.

I hold the view that every member of a society should live by its accepted rules (i.e. laws) — and if they hold the laws as incorrect, unfair or wrong, they should strive to get the laws to change. Yes, it's a hard thing to do, most often filled with resistance, but it's the only socially responsible way to go.

Private driver hiring applications have several flaws, but maybe the biggest one is that they are... How to put it? I cannot find a word better than illegal. Taxi drivers in our city (and in most cities, as far as I have read) undergo a long process to ensure they are fit for the task. Is the process incomplete? Absolutely. But the answer is not to abolish it in the name of the free market. The process must be, if anything, tightened. The process for granting a public driver license to an individual is way stricter than to issue me a driving license (believe it or not, Mexico City abolished taking driving tests several years ago). Taxis do get physical and mechanical review — Is their status mint and perfect? No way. But compare them to taxis in other Mexican states, and you will see they are in general in a much better shape.

Now... One of the things that angered me most about the comments to articles such as the ones I'm quoting is the middle-class mentality they are written from. I have seen comments ranging from stupidly racist humor attempts (“Mr. Mayor, the Guild of Kidnappers and Robbers of Iztapalapa demands the IMMEDIATE prohibition of UBER as we are running low on clients”, or the often repeated comment that taxi drivers are “(...) dirty, armpit-smelly, and listen to whatever music they want”) to economic culture-based discrimination (“Uber is just for credit card users”), as if that were enough of an argument... Quite the opposite: it's just discrimination, as many people in this city are not credit subjects and do not exist in the banking system, or cannot have an always-connected smartphone — Should they be excluded from the benefits of modernity just because of their economic difference?

And yes, I'm by far not saying Mexico City's taxi drivers are optimal. I am an urban cyclist, and my biggest concern/fear is usually taxi drivers (more so than microbus drivers, who are a class of their own). Again, as I said at the beginning of the post, I am of the idea that if current laws and their enforcement are not enough for a society, it has to change due to that society's pressure — It cannot just be ignored because nobody follows the rules anyway. There is quite a bit that can be learnt from Uber's ways, and there are steps that can be taken by the company to become formal and legal, in our country and in others where they are accused of the same lacking issues.

We all deserve better services. Not just those of us that can pay for a smartphone and are entitled to credit cards. And all passenger-bearing services require strict regulations.

14 May, 2015 04:46AM by gwolf

May 13, 2015

hackergotchi for Andrew Pollock

Andrew Pollock

[tech] LWN Chrome extension published

I finally got around to finishing off and publishing the LWN Chrome extension that I wrote a couple of months ago.

I received one piece of feedback from someone who read my blog via Planet Debian, but didn't appear to email me from a usable email address, so I'll respond to the criticisms here.

I wrote a Chrome extension because I use Google Chrome. To the best of my knowledge, it will work with Chromium as well, but as I've never used it, I can't really say for sure. I've chosen to licence the source under the Apache Licence, and make it freely available. So the extension is available to anyone who cares to download the source and "side load" it, if they don't want to use the Chrome Web Store.

As for whether a userscript would have done the job, maybe, but I have no experience with them.

Basically, I had an itch, and I scratched it, for the browser I choose to use, and I also chose to share it freely.

13 May, 2015 10:03PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Amiga floppy recovery project

I've still got it!

Recovered floppy disks

My first computer was an Amiga A500, and my brother and I spent a fair chunk of our childhoods creating things with it. These things are locked away on 3.5" floppy disks, but they were also lost a long time ago.

A few weeks ago my dad found them in a box in his loft, so a disk-reading project is now on the horizon! Step one is to catalogue what we've got, which I've done here. Step two is to check which, if any, of these are not already in circulation amongst archivists. Thanks to Matthew Garrett for pointing me at the Software Preservation Society, which is a good first place to check.

When we get to the reading step, there are quite a few approaches I could take. Which one to use depends to some extent on which disks we need to read, and whether they employ any custom sector layout or other copy protection schemes. I think the easiest method using equipment I already have is probably Amiga Explorer and a null-modem cable, as this approach will work on an A500 with Workbench 1.3.

There are a variety of hardware tools and projects for reading Amiga floppies on a PC, but the most interesting one to me is DiscFerret, which is open hardware and software.

13 May, 2015 04:24PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Gitolite and Gitweb

This article is for self, so that I don't again forget the specifics. The last time I did the same setup, it wasn't very important in terms of security. gitolite(3) + gitweb can give an impressive git tool with very simple user ACLs. After you set up gitolite, ensure that the umask value in gitolite is appropriate, i.e. that the gitolite group has r-x privileges; this is needed for the web view. Add your apache user to the gitolite group. With the umask change and the group association, apache's user will now be able to read the gitolite repos.
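
In concrete terms, that boils down to two small steps. This is a sketch assuming the Debian gitolite3 package's default paths and group name:

# in /var/lib/gitolite3/.gitolite.rc, relax the umask so group gets r-x:
#     UMASK => 0027,
# then let the web server read the repositories via group membership:
adduser www-data gitolite3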

Now, imagine a repo setting like the following:

repo virtualbox
    RW+     =   admin
    R   =   gitweb

This allows 'R'ead access for gitweb. But through plain Unix permissions, now even www-data will have 'RX' on all the repositories (the ones created after the UMASK change).

rrs@chutzpah:~$ sudo ls -l /var/lib/gitolite3/repositories/
[sudo] password for rrs:
total 20
drwxr-x--- 7 gitolite3 gitolite3 4096 May 12 17:13 foo.git
drwx------ 8 gitolite3 gitolite3 4096 May 13 12:06 gitolite-admin.git
drwxr-x--- 7 gitolite3 gitolite3 4096 May 13 12:06 linux.git
drwx------ 7 gitolite3 gitolite3 4096 May 12 16:38 testing.git
drwxr-x--- 7 gitolite3 gitolite3 4096 May 12 17:20 virtualbox.git
13:10 ♒♒♒   ☺    

But just www-data, and no other users, because for 'O' there is no 'rwx'. And the listing below shows gitolite's own ACL in action...

test@chutzpah:~$ git clone gitolite3@chutzpah:virtualbox
Cloning into 'virtualbox'...
Enter passphrase for key '/home/test/.ssh/id_rsa':
FATAL: R any virtualbox test DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.


13 May, 2015 08:29AM by Ritesh Raj Sarraf

hackergotchi for Lucas Nussbaum

Lucas Nussbaum

systemd: Type=simple and avoiding forking considered harmful?

(This came up in a discussion on debian-user-french@l.d.o)

When converting from sysvinit scripts to systemd init files, the default practice seems to be to start services without forking, and to use Type=simple in the service description.

What Type=simple does is, well, simple. from systemd.service(5):

If set to simple (the default value if neither Type= nor BusName= are specified), it is expected that the process configured with ExecStart= is the main process of the service. In this mode, if the process offers functionality to other processes on the system, its communication channels should be installed before the daemon is started up (e.g. sockets set up by systemd, via socket activation), as systemd will immediately proceed starting follow-up units.

In other words, systemd just runs the command described in ExecStart=, and it’s done: it considers the service is started.

Unfortunately, this causes a regression compared to the sysvinit behaviour, as described in #778913: if there’s a configuration error, the process will start and exit almost immediately. But from systemd’s point-of-view, the service will have been started successfully, and the error only shows in the logs:

root@debian:~# systemctl start ssh
root@debian:~# echo $?
0
root@debian:~# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
 Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
 Active: failed (Result: start-limit) since mer. 2015-05-13 09:32:16 CEST; 7s ago
 Process: 2522 ExecStart=/usr/sbin/sshd -D $SSHD_OPTS (code=exited, status=255)
 Main PID: 2522 (code=exited, status=255)
mai 13 09:32:16 debian systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
mai 13 09:32:16 debian systemd[1]: Unit ssh.service entered failed state.
mai 13 09:32:16 debian systemd[1]: ssh.service start request repeated too quickly, refusing to start.
mai 13 09:32:16 debian systemd[1]: Failed to start OpenBSD Secure Shell server.
mai 13 09:32:16 debian systemd[1]: Unit ssh.service entered failed state.

With sysvinit, this error is detected before the fork(), so it shows during startup:

root@debian:~# service ssh start
 [....] Starting OpenBSD Secure Shell server: sshd/etc/ssh/sshd_config: line 4: Bad configuration option: blah
 /etc/ssh/sshd_config: terminating, 1 bad configuration options
 failed!
 root@debian:~#

It’s not trivial to fix that. The implicit behaviour of sysvinit is that fork() sort-of signals the end of service initialization. The systemd way to do that would be to use Type=notify, and have the service signals that it’s ready using systemd-notify(1) or sd_notify(3) (or to use socket activation, but that’s another story). However that requires changes to the service. Returning to the sysvinit behaviour by using Type=forking would help, but is not really a solution: but what if some of the initialization happens *after* the fork? This is actually the case for sshd, where the socket is bound after the fork (see strace -f -e trace=process,network /usr/sbin/sshd), so if another process is listening on port 22 and preventing sshd to successfully start, it would not be detected.

I wonder if systemd shouldn’t do more to detect problems during services initialization, as the transition to proper notification using sd_notify will likely take some time. A possibility would be to wait 100 or 200ms after the start to ensure that the service doesn’t exit almost immediately. But that’s not really a solution for several obvious reasons. A more hackish, but still less dirty solution could be to poll the state of processes inside the cgroup, and assume that the service is started only when all processes are sleeping. Still, that wouldn’t be entirely satisfying…

13 May, 2015 08:29AM by lucas

NOKUBI Takatsugu

Jessie installer with kvm/serial console

I tried to install Jessie on a brand-new virtual machine (kvm), but it has a problem about serial console login.

At least the wheezy installer worked fine, because it adds a getty entry to /etc/inittab. Jessie uses systemd, but takes no care of the getty service for the serial console. The problem is reported as #769406.

My solution is to invoke “systemctl enable serial-getty@ttyS0.service” via ssh.

13 May, 2015 05:20AM by knok

Zlatan Todorić

How to answer as a master

I have spent a fair amount of time over the years exploring how to craft great responses to generic questions (technical ones included), and I can say without doubt that it is a pretty simple thing one can learn. First of all, whether answering via email or in person, it is very important that people feel that the person answering is really there and really "gets" their question. So a personal note at the beginning or at the end is not strictly necessary, but it is a big plus. The answer itself should have 3 phases: a straight yes or no answer, a brief explanation of why yes or no, and then an explanation of the opposite solution. It is very important to keep it as precise and simple as possible while explaining everything the person who asked needs.

For example Joe asks:

Can I install library libfoo1.2 without breaking software foo1.1

And Jane would answer:

Hi Joe,

very good question as people often do try things like that and could end up in complicated situation.

So the answer is NO.

Pulling in libfoo1.2 would break foo1.1 because there were numerous changes since libfoo1.1 that break backward compatibility, and there was also rewriting and porting to a newer version of the language.

Now, having that out of the way: you can safely pull foo1.2 as well and install it with libfoo1.2, which is tested and should work for you without any problems.

Best regards,

Jane

And that's it. Lean, clean, cyborg.

13 May, 2015 12:17AM by Zlatan Todorić

May 12, 2015

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

In all fairness

Since I had a long rant about Lenovo customer service a while back:

My laptop died again during travels; at first, it was really unstable (whenever I'd hold it slightly wrong, it would instantly crash), then later, it would plain refuse to boot (not even anything on the display).

So I called Lenovo, and after some navigating of phone menus I got to someone who took my details, checked my warranty (“let's see, you have warranty until 2018”—no months of arguing this time!) opened a case and sent me on to tech support. Tech support said most likely, the motherboard was broken, and that a technician would call me; today, the tech called, and arrived at work to swap my motherboard. 30 minutes on the phone, 20 minutes waiting for the technician to switch the motherboard (I would probably have used more than an hour myself). And voila, working laptop. (Hope it's stable from now on.)

My only gripe is that I forgot to remind him after the repair to give me new rubber feet—he'd already said it wouldn't be a problem, but we both forgot about it. But overall, this is exactly how it should be—quite unlike last time.

12 May, 2015 08:15PM

Simon Josefsson

Certificates for XMPP/Jabber

I am revamping my XMPP server and I’ve written down notes on how to set up certificates to enable TLS.

I will run Debian Jessie with JabberD 2.x, using the recent jabberd2 jessie-backport. The choice of server software is not significant for the rest of this post.

Running XMPP over TLS is a good idea. So I need a X.509 PKI for this purpose. I don't want to use a third-party Certificate Authority, since that gives them the ability to man-in-the-middle my XMPP connection. Therefore I want to create my own CA. I prefer tightly scoped (per-purpose or per-application) CAs, so I will set up a CA purely to issue certificates for my XMPP server.

The current XMPP specification, RFC 6120, includes a long section 13.7 that discuss requirements on Certificates.

One complication is the requirement to include an AIA for OCSP/CRLs — fortunately, it is not a strict “MUST” requirement but a weaker “SHOULD”. I note that checking revocation using OCSP and CRL is a “MUST” requirement for certificate validation — some specification language impedance mismatch at work there.

The specification demand that the CA certificate MUST have a keyUsage extension with the digitalSignature bit set. This feels odd to me, and I’m wondering if keyCertSign was intended instead. Nothing in the XMPP document, nor in any PKIX document as far as I am aware of, will verify that the digitalSignature bit is asserted in a CA certificate. Below I will assert both bits, since a CA needs the keyCertSign bit and the digitalSignature bit seems unnecessary but mostly harmless.

My XMPP/Jabber server will be “chat.sjd.se” and my JID will be “simon@josefsson.org”. This means the server certificate needs to include references to both these domains. The relevant DNS records for the “josefsson.org” zone are as follows; see section 3.2.1 of RFC 6120 for more background.

_xmpp-client._tcp.josefsson.org.	IN	SRV 5 0 5222 chat.sjd.se.
_xmpp-server._tcp.josefsson.org.	IN	SRV 5 0 5269 chat.sjd.se.

The DNS records for the “sjd.se” zone are as follows:

chat.sjd.se.	IN	A	...
chat.sjd.se.	IN	AAAA	...
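
Before issuing any certificates, it is worth sanity-checking that these records resolve as intended. A minimal check, assuming dig is available (Debian package dnsutils):

dig +short _xmpp-client._tcp.josefsson.org SRV
dig +short _xmpp-server._tcp.josefsson.org SRV
dig +short chat.sjd.se A

The SRV queries should print the priority, weight, port, and target (e.g. “5 0 5222 chat.sjd.se.”).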

The following commands will generate the private key and certificate for the CA. In a production environment, you would keep the CA private key in a protected offline environment. I’m asserting an expiration date ~30 years in the future. While I dislike arbitrary limits, I believe this will be many times longer than the anticipated lifetime of this setup.

openssl genrsa -out josefsson-org-xmpp-ca-key.pem 3744
cat > josefsson-org-xmpp-ca-crt.conf << EOF
[ req ]
x509_extensions = v3_ca
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP CA for josefsson.org
[ v3_ca ]
subjectKeyIdentifier=hash
basicConstraints = CA:true
keyUsage=critical, digitalSignature, keyCertSign
EOF
openssl req -x509 -set_serial 1 -new -days 11147 -sha256 -config josefsson-org-xmpp-ca-crt.conf -key josefsson-org-xmpp-ca-key.pem -out josefsson-org-xmpp-ca-crt.pem
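
To double-check that the extensions came out as intended, the new CA certificate can be inspected (a sanity check only, not part of the setup proper):

openssl x509 -noout -text -in josefsson-org-xmpp-ca-crt.pem

The output should show “CA:TRUE” under X509v3 Basic Constraints and “Digital Signature, Certificate Sign” under X509v3 Key Usage.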

Let’s generate the private key and server certificate for the XMPP server. The wiki page on XMPP certificates is outdated wrt PKIX extensions. I will embed a SRV-ID field, as discussed in RFC 6120 section 13.7.1.2.1 and RFC 4985. I chose to skip the XmppAddr identifier type, even though the specification is somewhat unclear about it: section 13.7.1.2.1 says that it “is no longer encouraged in certificates issued by certification authorities” while section 13.7.1.4 says “Use of the ‘id-on-xmppAddr’ format is RECOMMENDED in the generation of certificates”. The latter quote should probably have been qualified to say “client certificates” rather than “certificates”, since the latter can refer to both client and server certificates.

Note the use of a default expiration time of one month: I believe in frequent renewal of entity certificates, rather than use of revocation mechanisms.

openssl genrsa -out josefsson-org-xmpp-server-key.pem 3744
cat > josefsson-org-xmpp-server-csr.conf << EOF
[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP server for josefsson.org
EOF
openssl req -sha256 -new -config josefsson-org-xmpp-server-csr.conf -key josefsson-org-xmpp-server-key.pem -nodes -out josefsson-org-xmpp-server-csr.pem
cat > josefsson-org-xmpp-server-crt.conf << EOF
subjectAltName=@san
[san]
DNS=chat.sjd.se
otherName.0=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-server.josefsson.org
otherName.1=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-client.josefsson.org
EOF
openssl x509 -sha256 -CA josefsson-org-xmpp-ca-crt.pem -CAkey josefsson-org-xmpp-ca-key.pem -set_serial 2 -req -in josefsson-org-xmpp-server-csr.pem -out josefsson-org-xmpp-server-crt.pem -extfile josefsson-org-xmpp-server-crt.conf
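
As another sanity check, the freshly issued server certificate can be verified against the CA, and the subjectAltName contents inspected (the grep pattern matches OpenSSL’s text output):

openssl verify -CAfile josefsson-org-xmpp-ca-crt.pem josefsson-org-xmpp-server-crt.pem
openssl x509 -noout -text -in josefsson-org-xmpp-server-crt.pem | grep -A1 'Subject Alternative Name'

The first command should print OK; the second should list the chat.sjd.se DNS name and the two otherName entries (older OpenSSL releases render the latter as “othername:<unsupported>”).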

With this setup, my XMPP server can be tested by the XMPP IM Observatory. You can see the c2s test results and the s2s test results. Of course, there are warnings regarding the trust anchor issue: it complains about a self-signed certificate in the chain. This is permitted but not recommended — however, when the trust anchor is not widely known, I find it useful to include it, since this gives people a mechanism for fetching the trust anchor certificate should they want to. Some weaker cipher suites trigger warnings, which is more of a jabberd2 configuration issue and/or a concern with jabberd2 defaults.

My jabberd2 configuration is simple — in c2s.xml I add an <id> entity with the “require-starttls”, “cachain”, and “pemfile” fields. In s2s.xml, I have the <pemfile>, <resolve-ipv6>, and <require-tls> entities.
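
After restarting jabberd2, the c2s STARTTLS handshake can be tested locally with openssl s_client (a sketch, assuming an openssl build with XMPP STARTTLS support):

openssl s_client -connect chat.sjd.se:5222 -starttls xmpp -CAfile josefsson-org-xmpp-ca-crt.pem

A “Verify return code: 0 (ok)” near the end of the output indicates that the certificate chain validates against the local trust anchor.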

Some final words are in order. While this setup will result in use of TLS for XMPP connections (c2s and s2s), other servers are unlikely to find my CA trust anchor, let alone be able to trust it for verifying my server certificate. I’m happy to read about Peter Saint-Andre’s recent SSL/TLS work, and in particular I will follow the POSH effort.

12 May, 2015 01:43PM by simon

hackergotchi for Michal Čihař

Michal Čihař

python-gammu 2.2

After recently porting python-gammu to Python 3, it was quite obvious to me that the new release would have some problems. Fortunately they have proven to be rather cosmetic, and no big bugs have been found so far.

Anyway, it's time to push the minor fixes to the users, so here comes python-gammu 2.2. As you can see, the changes are pretty small, but given that I don't expect much development in the future, it's good to release them early.

Filed under: English Gammu python-gammu SUSE Wammu

12 May, 2015 10:00AM by Michal Čihař (michal@cihar.com)

May 11, 2015

Bits from Debian

Debian Ruby team sprint 2015

The Debian Ruby team had its first sprint in 2014. The experience was very positive, and it was decided to do it again in 2015. Last April, the team once more met at the IRILL offices in Paris, France.

The participants worked to improve the quality of Ruby packages in Debian, including fixing release-critical and security bugs, improving metadata and packaging code, and triaging test failures on the Debian Continuous Integration service.

The sprint also served to prepare the team infrastructure for the future Debian 9 release:

  • the gem2deb packaging helper was improved to better support the semi-automated generation of Debian source packages from existing standard-compliant Ruby packages on Rubygems.

  • there was also an effort to prepare the switch to Ruby 2.2, the latest stable release of the Ruby language, which came out after the Debian testing suite was already frozen for the Debian 8 release.

Left to right: Christian Hofstaedtler, Tomasz Nitecki, Sebastien Badia and Antonio Terceiro.

A full report with technical details has been posted to the relevant Debian mailing lists.

11 May, 2015 10:00PM by Antonio Terceiro

Simon Josefsson

Laptop decision fatigue

I admit defeat. I have put some effort into researching recent laptop models (see first and second post). Last week I asked myself what the biggest problem with my current 4+ year old X201 is. I couldn’t articulate any significant concern. So I have bought another second-hand X201 for semi-permanent use at my second office. At ~225 USD/EUR, including another docking station, it is an amazing value. I considered the X220-X240, but they have a different docking station and were roughly twice the price — the money saved allowed for a Samsung 850 PRO SSD purchase. Thanks everyone for your advice, anyway!

11 May, 2015 07:31PM by simon

hackergotchi for Julien Danjou

Julien Danjou

My interview about software tests and Python

I've recently been contacted by Johannes Hubertz, who is writing a new book about Python in German called "Softwaretests mit Python", which will be published by Open Source Press, Munich this summer. His book will feature some interviews, and he was kind enough to let me write a bit about software testing. This is the interview I gave for his book; Johannes translated it into German for inclusion, and I decided to publish the original English version on my blog today.

How did you come to Python?

I don't recall exactly, but around ten years ago I saw more and more people using it and decided to take a look. Back then, I was more used to Perl. I didn't really like Perl and never got a good grip on its object system.

As soon as I found an idea to work on – if I remember correctly that was rebuildd – I started to code in Python, learning the language at the same time.

I liked how Python worked and how fast I was able to develop with it and learn it, so I decided to keep using it for my next projects. I ended up diving into Python core for various reasons, even doing things like briefly hacking on projects like Cython at some point, and finally ended up working on OpenStack.

OpenStack is a cloud computing platform entirely written in Python. So I've been writing Python every day since working on it.

That's what pushed me to write The Hacker's Guide to Python in 2013, and then to self-publish it a year later in 2014 – a book where I talk about writing smart and efficient Python.

It has been a great success, has even been translated into Chinese and Korean, and I'm currently working on a second edition of the book. It has been an amazing adventure!

Zen of Python: Which line is the most important for you and why?

I like "There should be one – and preferably only one – obvious way to do it". The opposite is probably something that scared me away from languages like Perl. Having one obvious way to do it is something I tend to like in functional languages like Lisp, which are, in my humble opinion, even better at that.

For a python newbie, what are the most difficult subjects in Python?

I haven't been a newbie for a while, so it's hard for me to say. I don't think the language is hard to learn. There are some subtleties in the language itself when you dive deeply into the internals, but for beginners most of the concepts are pretty straightforward. If I had to pick, among the language basics, the most difficult thing would be generator objects (yield).

Nowadays I think the most difficult subjects for newcomers are which version of Python to use, which libraries to rely on, and how to package and distribute projects. Things are getting better, fortunately.

When did you start using Test Driven Development and why?

I learned unit testing and TDD at school, where teachers forced me to learn Java, and I hated it. The frameworks looked complicated, and I had the impression I was wasting my time. Which I actually was, since I was writing disposable programs – that's the only thing you do at school.

Years later, when I started to write real and bigger programs (e.g. rebuildd), I quickly found myself fixing bugs… that I had already fixed. That reminded me of unit tests, and that it might be a good idea to start using them to stop fixing the same things over and over again.

For a few years, I wrote less Python and more C and Lua code (for the awesome window manager), and I didn't do any testing. I probably lost hundreds of hours testing manually and fixing regressions – that was a good lesson. Though I had a good excuse at the time: it is/was way harder to do testing in C/Lua than in Python.

Since that period, I have never stopped writing "tests". When I started to hack on OpenStack, the project was adopting a "no test? no merge!" policy due to the high number of regressions it had during the first releases.

I honestly don't think I could work on any project that does not have – at least minimal – test coverage. It's impossible to hack efficiently on a code base that you're not able to test with just a simple command. It's also a real problem for newcomers in the open source world. When there are no tests, you can hack something and send a patch, and get a "you broke this" in response. Nowadays, this kind of response sounds unacceptable to me: if there are no tests, then I didn't break anything!

In the end, it's just too frustrating to work on untested projects, as I demonstrated in my study of the whisper source code.

What do you think to be the most often seen pitfalls of TDD and how to avoid them best?

The biggest problems are deciding when, and at what rate, to write tests.

On one hand, some people start to write overly precise tests way too soon. Doing that slows you down, especially when you are prototyping some idea or concept you just had. That does not mean you should not write tests at all, but you should probably start with light coverage, until you are pretty sure that you're not going to rip everything out and start over. On the other hand, some people postpone writing tests forever, and end up with no tests at all or too thin a layer of tests, which leaves the project with pretty low coverage.

Basically, your test coverage should reflect the state of your project. If it's just starting, you should build a thin layer of tests so you can hack on it easily and remodel it if needed. The more your project grows, the more solid you should make it, laying down more tests.

Having overly detailed tests makes it painful to evolve a project at the start. Having too few in a big project makes it painful to maintain.

Do you think, TDD fits and scales well for the big projects like OpenStack?

Not only do I think it fits and scales well, I also think it's just impossible not to use TDD in such big projects.

When unit and functional test coverage was weak in OpenStack – at its beginning – it was just impossible to fix a bug or write a new feature without breaking a lot of things, often without even noticing. We would release version N, and a ton of old bugs present in N-2 – but fixed in N-1 – would be reopened.

For big projects, with a lot of different use cases, configuration options, etc., you need belt and braces. You cannot throw code into a repository thinking it's ever going to just work, and you can't afford to test everything manually at each commit. That's just insane.

11 May, 2015 01:39PM by Julien Danjou

Craig Small

procps using GitLab CI

The procps project has been hosted at Gitorious for a few years. With the announcement that Gitorious has been acquired by GitLab, and that all repositories need to move there, procps moved along to GitLab. At first I thought it would just be a like-for-like thing, but then I noticed that GitLab has this GitLab CI feature and had to try it out.

CI here stands for Continuous Integration and is a way of automatically testing your program builds using a bunch of test scripts. procps already has a set of tests, with a level of coverage that has room for improvement, so it was a good candidate for CI. The way GitLab works is that there is a central control point linked to the git repo, and you create runners, which are the systems that actually compile the programs and run the tests. The runners then feed back their results, and GitLab CI shows it all in pretty red or green.

The first problem was building this runner. Most of the documentation assumes you are building and testing Ruby. procps is built using C, with the test scripts in TCL, so there was going to be some need for adjustments. I chose to use docker containers so that there was at least some isolation between the runners and the base system. I soon found the docker container I used (ruby:2.6 as suggested by GitLab) didn't have autopoint, which meant autogen.sh failed, so I had no configure script and no joy there.

My second problem was that I had never used docker before; beyond it being some sort of container thing, like virtual machines lite, I didn't know much about it. The docker docs are very good, and soon I had built my own custom docker image that had all the programs I need to compile and test procps. It's basically a cut-down Debian image with a few things like gettext-bin and dejagnu added in.
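
For the curious, the sort of image customization involved looks roughly like the sketch below; the package list is illustrative, not the exact set used:

cat > Dockerfile << 'EOF'
FROM debian:jessie
# build tools plus the extras procps-style builds and tests need
RUN apt-get update && apt-get install -y \
    build-essential autoconf automake autopoint libtool pkg-config \
    git gettext-bin dejagnu libncurses5-dev
EOF
docker build -t procps-ci .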

Docker is not a full system

With that all behind me, and after a few "oh, I need that too" moments (don't forget you need git), we had a working CI runner. This was a Good Thing. You then find that your assumptions in your test cases may not always be correct. This is especially noticeable when testing something like procps, which needs to work off the proc filesystem. A good example is: what uses session ID 1? Generally it's init or systemd, but in Docker this is what everything runs as. A test case which assumes things about SID=1 will fail, as it did.

This probably won't be a problem for testing a lot of "normal" programs that don't need to dig as deep into the host system as procps does, but it is something to remember. The docker environment looks a lot like a real system from the inside, but there are differences, so the lesson here is to write better tests (or fix the ones that failed, like I did).

The runner and polling

Now, there needs to be communication between the CI website and the runner, so the runner knows there is something for it to do. The gitlab runner has this set up rather well, except that the runner is rather aggressive about its polling, hitting the website every 3 seconds. For someone on crummy Australian Internet with low speeds and download quotas, this can get expensive in network resources. As there is on average an update once a week or so, this seems rather excessive.

My fix is pretty manual; actually, it's totally manual. I stop the daemon, then if I notice there are pending jobs I start the daemon, let it do its thing, then shut it down again, as sketched below. This is certainly not an optimal solution, but it works for now. I will look into doing something more clever later, possibly with webhooks.
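
In practice the workflow amounts to something like this (a sketch; it assumes the runner was installed as a system service, which the 2015-era package named gitlab-ci-multi-runner):

sudo service gitlab-ci-multi-runner stop    # keep the poller off by default
# when pending jobs show up in the web UI:
sudo service gitlab-ci-multi-runner start
# ... let the queue drain ...
sudo service gitlab-ci-multi-runner stop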

11 May, 2015 12:26PM by Craig

Zlatan Todorić

Reboot to roots

I have been working in the software development field for more than 5 years. During that period I have come across other developers, the vast majority of whom can be described as terrible people. That is probably what sets the difference between the hacker mind and the average software engineer. People tend to think that if they hold the position of software developer they are god-like and should be treated as such. Hackers know who they are and tend to be Socrates-wise: "I know that I know nothing". As a student of mechanical engineering (majoring in mechatronics) I didn't have much programming, so naturally I had some personal difficulty grasping the software field, but once I got in I almost totally stopped doing my student duties, as they didn't cross paths with my love of hacking.

And now, after that many years in the software field, in which I mostly worked as a freelancer for other people and companies (where it wasn't unusual that I got hired personally by a guy who couldn't finish his work, or didn't know how; I would hand over the code, which he would present as his own, but I didn't mind as I needed the money to get by), with my work ending up proprietary - I am just fed up with all this nonsense. Someone might ask what the problem is - well, for me there is one: being part of Debian and the wider Free Software ecosystem, I know that proprietary software does not foster a better society, so it made me a bit unhappy. The things that really annoy me are asshole so-called software engineers and, most of all, bad behavior towards users. Users should be treated as supporters of your software, and if a proprietary company treats its users like shit, then it's obvious that they don't care for the users but rather for the money, and only the money. Proprietary software isn't the only bad thing here; the open source community has its share of terrible responses, which comes naturally given the sheer number of people in open source. I, for instance, was called a peasant and even a sheep on Debian IRC in my early days - but Debian, like all communities, has its bad seeds, and sometimes even non-Debianite trolls hop onto our channels. I didn't let myself be put down, because I personally got the chance to meet many of them, and they changed the flow of my life at that point.

I am unhappy with the current state of things, worldwide and personally, but I have now decided to shift towards being a technical customer support engineer - or, as I prefer to name it, a Happiness Hero. Why, one might ask, when as a software engineer you would make more? Well, firstly, I don't care that much about money (or, to be even more precise, since capitalism has made a sick money-grabbing society, you can never pay me enough, as I am worth more than the entire capitalist system, so in the end I don't care that much how I am paid), and secondly, a happiness hero can earn more than a software engineer, as the recent #talkpay hashtag on Twitter showed. I have written, IMHO, many good pieces of code that will never be credited as mine, but I achieved no happiness, and instead suffered a rather bad burnout a few months ago. My shift towards happiness hero should look like this: I am pretty well prepared technically for the position, I have experience helping users and people in general, and I tend to be pretty good at it (though for that we would need to ask them). Add to that the fact that I enjoy helping people debug their technical problems so they can spend more time on other things (read: family, creativity, hobbies), and I believe I am well positioned to achieve happiness. That way I can make time to finish my damn degree (where, TBH, professors have shown that they can be among the worst enemies of their students - and I highly doubt this generic way of schooling that has persisted for centuries), and if I get a decent paycheck within this capitalist system, I will have a chance to sponsor a poor family with a child talented in an artistic field - or, to put it differently, to SHARE something material for cultural and emotional gain. Take that, capitalism!

And the best part: I am starting to believe in myself more. I already have a few offers for that kind of position, but I am waiting for a few more, and I am screening companies to find the one that fits me best - one that shares my values and interacts well with its users. One of the companies that makes me want to screen more companies is Buffer. When you read this and this, you will see that their way of life and perks are really something every company should adopt - then you will understand why I want to give myself a chance I believe I have finally deserved. And no, I didn't apply to Buffer, as I don't use their service, nor did I have time to read the two books required for applying (which I think is really a great way to foster people into gaining more knowledge). I can only wish them the best of luck in the future.

I am currently doing one small freelance software development job for the next two weeks, and after that I am hoping to get a good enough offer to start the happiness job and retreat from software development for a good amount of time. That doesn't mean I will cease my Free Software activities - on the contrary, I think I will have more time, and even more will, to work on Free Software development in my spare time. So if anyone, or any company that works with the open source community, has a position for a happiness hero, get in touch with me and we will have a good talk.

End point - I want to grow and become who I am, and not who I should be according to others.

11 May, 2015 11:17AM by Zlatan Todorić