December 21, 2014

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

If only there was a Romeo somewhere ...

Attention: rant coming. You have been warned, and may want to tune out now.

So the top of my Twitter timeline just had a re-tweet pointing to this marvel on the state of Julia. I should have known better than to glance at it as it comes from someone providing (as per the side-bar) Thought leadership in Big Data, systems architecture and more. Reading something like this violates the first rule of airport book stores: never touch anything from the business school section, especially on (wait for it) leadership or, worse yet, thought leadership.

But it is Sunday, my first cup of coffee still warm (after finalising two R package updates on GitHub, and one upload to CRAN) and so I read on. Only to be mildly appalled by the usual comparison to R based on the same old Fibonacci sequence.

Look, I am as guilty as anyone of using it (for example all over chapter one of my Rcpp book), but at least I try to stress each and every time that this is kicking R when it is down, as its (fairly) poor performance on function calls (which is well-known and documented) obviously gets aggravated by recursive calls. But hey, for the record, let me restate those results. So Julia beats R by a factor of 385. But let's take a closer look.

For n=25, I get R to take 241 milliseconds---as opposed to his 6905 milliseconds---simply by using the same function I use in every workshop, e.g. last used at Penn in November, which does not use the dreaded ifelse operator:

fibR <- function(n) {
  if (n < 2) return(n)
  return(fibR(n-1) + fibR(n-2))
}

Switching that to the standard C++ three-liner using Rcpp

library(Rcpp)
cppFunction('int fibCpp(int n) { 
  if (n < 2) return(n);  
  return(fibCpp(n-1) + fibCpp(n-2));  
  }')

and running a standard benchmark suite gets us the usual result of

R> library(rbenchmark)
R> benchmark(fibR(25),fibCpp(25),order="relative")[,1:4]
        test replications elapsed relative
2 fibCpp(25)          100   0.048    1.000
1   fibR(25)          100  24.674  514.042
R> 

So for the record as we need this later: that is 48 milliseconds for 100 replications, or about 0.48 milliseconds per run.
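As an aside: if one actually wanted Fibonacci numbers, rather than a benchmark of function call overhead, even pure R does far better than the naive recursion. A memoised sketch, purely for illustration:

fibRmemo <- local({
    memo <- c(0, 1)                     # memo[k + 1] holds fib(k)
    function(n) {
        if (n + 1 <= length(memo)) return(memo[n + 1])
        res <- fibRmemo(n - 1) + fibRmemo(n - 2)
        memo[n + 1] <<- res             # cache for subsequent calls
        res
    }
})
fibRmemo(25)                            # 75025, effectively instantaneous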

Now Julia. On my standard Ubuntu server running the current release 14.10:

edd@max:~$ julia
ERROR: could not open file /home/edd//home/edd//etc/julia/juliarc.jl
 in include at boot.jl:238

edd@max:~$ 

So wait, what? You guys can't even ensure a working release on what is probably the most popular and common Linux installation? And I get to that after reading a post on the importance of "Community, Community, Community" and you can't even make sure this works on Ubuntu? Really?

So a little bit of googling later, I see that julia -f is my friend for this flawed release, and I can try to replicate the original timing:

edd@max:~$ julia -f
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" to list help topics
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.2.1 (2014-02-11 06:30 UTC)
 _/ |\__'_|_|_|\__'_|  |  
|__/                   |  x86_64-linux-gnu

julia> fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)
fib (generic function with 1 method)

julia> @elapsed fib(25)
0.002299559

julia> 

Interestingly, the post's author claims 18 milliseconds. I see 2.3 milliseconds here. Maybe someone is having a hard time comparing things to the right of the decimal point. Or maybe his computer is an order of magnitude slower than mine. The more important thing is that Julia is of course faster than R (no surprise: LLVM at work) but also still a lot slower than a (trivial to write and deploy) C++ function. Nothing new here.

So let's recap. The comparison to R was based on a flawed version of a function we only use when we deliberately want to put R down; it can be improved significantly by using a better implementation; the results are still off by an order of magnitude from what was reported ("math is hard"); and the standard C / C++ way of doing things is still several times faster than our new saviour language---which I can't even launch on the current version of one of the more common free operating systems. Ok then. Someone please wake me up in a few years and I will try again.

Now, coming to the end of the rant, I should really stress that of course I too hope that Julia succeeds. Every user pulled away from Matlab is a win for all of us. We're in this together, and the endless navel-gazing among ourselves is so tiresome and irrelevant. And, as I argue here, it is even more so when we ourselves resort to unfair comparisons and badly chosen implementation details.

What matters are wins against the likes of Matlab, Excel, SAS and so on. Let's build on our joint strengths. I am sure I will use Julia one day, and I am grateful for everyone helping with it---as a lot of help seems to be needed. In the meantime, and with CRAN at 6130 packages that just work, I'll continue to make use of this amazing community and do my bit to help it grow and prosper. As part of our joint community.

21 December, 2014 03:48PM

hackergotchi for Steve McIntyre

Steve McIntyre

UEFI Debian installer work for Jessie, part 2

A month ago, I wrote about my plans for improved (U)EFI support in Jessie. It's about time I gave an update on progress!

I spoke about adding support for installing grub-efi into the removable media path (#746662). That went into Debian's grub packages already, but there were a couple of bugs. First of all, the code could end up prompting people about EFI questions even when they didn't have any grub-efi packages installed. Doh! (#773004). Then there was an unexpected bug with case-insensitive file name handling on FAT/VFAT filesystems (#773092). I've posted (and tested!) patches to fix both, hopefully in an upload any day now.

Next, I mentioned getting i386 UEFI support going again. This is a major feature that a lot of people have been asking for. It's also going to involve quite a bit of effort...

Our existing (amd64) UEFI-capable images in Debian use the standard x86 El Torito CD boot method, with two boot images provided. One of these images gives us the traditional isolinux BIOS-boot support. The second is an alternate El Torito image, including a 64-bit version of grub-efi. For most machines, this works just fine - the BIOS or UEFI firmware will automatically pick the correct image and everybody's happy. This even works on our multi-arch i386/amd64 CDs and DVDs - isolinux will boot either kernel from the first El Torito image, while the alternate UEFI image is amd64-only.

However, I can now see that there's been a long-standing issue with those multi-arch images, and it's to do with Macs. On the advice of Matthew Garrett, I've borrowed an old 32-bit Intel Mac to help testing, and it's quite instructive in terms of buggy firmware! The firmware on older 32-bit Intel Macs crashes hard when it detects more than one El Torito boot image, and I've now seen this happen myself. I've not had any bug reports about this, so I can only assume that we haven't had many users try that image. As far as I can tell, they've been using the normal i386 images in BIOS boot mode, and then struggling to get bootloaders working afterwards. There are a number of different posts on the net explaining how to do that. That's OK, but...

If I now start adding 32-bit UEFI support to our standard set of i386 images, this will prevent users of old Macs from installing Debian. I could just say "screw it" and decide not to support those users at all, but that's not a very nice thing to do. If we want to continue to support them and add 32-bit UEFI support, I'll have to add another flavour of i386 image, either a "Mac special" or a "32-bit UEFI special". I'm not keen on doing that if I can avoid it, but the two options are mutually exclusive. Given that the Mac problem is only on older hardware which (hopefully!) will be dying out, I'll probably pick that one as the special-case CD, and I'll make an extra netinst flavour only for those users to boot off.

So, I've started playing with i386 UEFI stuff in the last couple of weeks too. I almost immediately found some showstopper behaviour bugs in the i386 versions of efivar and efibootmgr (#773412 and #773007), but I've debugged these with our Debian maintainer (Jared Dominguez) and the upstream developer (Peter Jones) and fixes should be in Jessie very soon.

As I mentioned last month, the machines that most people have been requesting support for are the latest Bay Trail-based laptops and tablets. These use 64-bit Intel Atom CPUs, but are crippled with 32-bit UEFI firmware with no BIOS compatibility mode. This makes for some interesting issues. It's probably impossible to get a true answer as to why these machines are so broken by design, but there are several rumours. As far as I can see, most of these machines seem to ship with a limited version of 32-bit Windows 8.1. 32-bit Windows is smaller than 64-bit Windows, so fits much better in limited on-board storage space. But 32-bit Windows won't boot from 64-bit UEFI, so the firmware needed buggering to match. Ugh!

To support these Bay Trail machines properly, we'll want to add a 32-bit UEFI installation option to our 64-bit images. I can tweak our CDs so that both 32-bit and 64-bit versions of grub-efi are included, and the on-board UEFI will load the right one as needed. Then I'll need to make sure that all the 64-bit images also include grub-efi-ia32-bin from now on. With some extra logic, we'll need to remember that these new machines need that package installed instead of grub-efi-amd64-bin. It shouldn't be too hard, but let's see! :-)
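Roughly, that extra logic might look like the sketch below. This is a sketch only: the sysfs node name is an assumption on my part, and the real installer code may well end up quite different.

# Pick the right grub-efi flavour on a 64-bit machine whose firmware
# may be 32-bit (e.g. Bay Trail). The sysfs node name is an assumption.
if [ -d /sys/firmware/efi ]; then
    if [ "$(cat /sys/firmware/efi/fw_platform_size 2>/dev/null)" = "32" ]; then
        apt-get install grub-efi-ia32-bin
    else
        apt-get install grub-efi-amd64-bin
    fi
fi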

So, I've been out and bought one of these machines, an Asus X205TA. Lucas agreed that Debian will reimburse me (thanks!), so I'm not stuck with spending my own money on an otherwise unwanted machine! I can see via Google that none of the mainstream Linux distros fully support the Bay Trail machines yet, so there's not a lot of documentation around. Initial boot on the new machine was easy using a quick-hack i386 UEFI image on USB, but from there everything went downhill quickly. I'll need to investigate some more, but the laptop's own keyboard and trackpad are not detected by the installer system. Neither is its built-in WiFi. Yay! I had to go and dig out a USB hub to connect the installer image USB key, a keyboard, a mouse and a USB hard drive to the machine, as it only has 2 USB ports. I've taken a complete backup of the on-board 32GB flash before I start experimenting, so I can restore the machine back to its virgin state for future testing.

I guess I now have a project to keep me busy over Christmas...!

In other news, we've been continuing work on UEFI support for and within the new arm64 port. My ARM/Linaro colleague Leif Lindholm has been back-porting upstream kernel features and bug fixes to make d-i work, and filing Debian bugs when silly things break on arm64 because people don't think about other architectures (e.g. #773311, doh!). As there are more and more people interested in (U)EFI support these days, I've also proposed that we create a new debian-efi mailing list to help focus discussion. See #773327 and follow up there if you think you'd use the list too!

You can help! Same as 2 years ago, I'll need help testing some of these images. For the 32-bit UEFI support, I now have some relevant hardware myself, but testing on other machines too will be very important! I'll start pushing unofficial Jessie EFI test images shortly - watch this space.

21 December, 2014 01:51AM

December 20, 2014

hackergotchi for Gregor Herrmann

Gregor Herrmann

GDAC 2014/20

today seen on IRC: a maintainer was surprised & happy that their package had migrated to testing without having filed an unblock request. once again an example of the awesome work of the release team which pro-actively unblocked the package. – a big thank you to the members of the release team!


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

20 December, 2014 09:56PM

Mark Brown

Adventures with ARM server

I recently got a CubieTruck with a terabyte SSD to use as a general always-on server. This being an ARM board rather than a PC (with a rather nice form factor - it's basically the same size as a SSD), you'd normally expect a blog post about it to include instructions for kernels and patches and so on. But with these systems and current Debian testing there's no need: Debian works out of the box (including our standard kernel) on it, the instructions worked easily, and I now have a new machine sitting quietly in the corner serving away. Sadly, it being a dual core A7, it's not got the grunt to replace my kernel build test system - an ARM allmodconfig takes eleven and a bit hours, as opposed to a little less than twenty minutes on my desktop (which does draw well over an order of magnitude more power doing it) - but otherwise you'd never notice the difference when using the system.

The upshot of all this is that actually there's no real adventure at all; for systems like these, where the system vendors and the communities around them are doing the right things and working well with upstream, things just work as you'd expect with minimal effort.

The one thing that’s noticeably different from installing on a PC and really could do with improving is that instead of being shipped as part of the board the boot firmware has to be written to a SD card, something that could be addressed as easily as simply shipping a suitably programmed SD card in the box even without any other modification of the hardware, though on board flash would be even nicer.

20 December, 2014 06:57PM by Mark Brown

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.7

Release 0.6.7 of the digest package is now on CRAN and in Debian.

Jim Hester was at it again and added murmurHash. I cleaned up several sets of pedantic warnings in some of the source files, updated the test reference output, and that forms version 0.6.7.

CRANberries provides the usual summary of changes to the previous version.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 December, 2014 06:45PM

hackergotchi for Rudy Godoy

Rudy Godoy

Apache Phoenix for Cloudera CDH

Apache Phoenix is a relational database layer over HBase delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets.

What the above statement means for developers or data scientists is that you can "talk" SQL to your HBase cluster. Sounds good, right? Setting up Phoenix on Cloudera CDH can be really frustrating and time-consuming, though. I wrapped up references from across the web, together with my own findings, to make the two play nice.

Building Apache Phoenix

Because of dependency mismatches in the pre-built binaries, supporting Cloudera's CDH requires building Phoenix against the versions that match the CDH deployment. The CDH version I used is CDH4.7.0. This guide applies to any version of CDH4+.

Note: You can find CDH component versions in the "CDH Packaging and Tarball Information" section of the "Cloudera Release Guide". Current release information (CDH5.2.1) is available at this link.

Preparing Phoenix build environment

Phoenix can be built using maven or gradle. General instructions can be found in the “Building Phoenix Project” webpage.

Before building Phoenix you need to have:

  • JDK v6 (or v7, depending on which CDH version you want to support)
  • Maven 3
  • git

Checkout correct Phoenix branch

Phoenix has two major release versions:

  • 3.x – supports HBase 0.94.x   (Available on CDH4 and previous versions)
  • 4.x – supports HBase 0.98.1+ (Available since CDH5)

Clone the Phoenix git repository

git clone https://github.com/apache/phoenix.git

Work with the correct branch

git fetch origin
git checkout 3.2

Modify dependencies to match CDH

Before building Phoenix, you will need to modify the dependencies to match the version of CDH you are trying to support. Edit phoenix/pom.xml and make the following changes:

Add Cloudera’s Maven repository

+    <repository>
+        <id>cloudera</id>
+        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+    </repository>

Change component versions to match CDH’s.

     
-    <hadoop-one.version>1.0.4</hadoop-one.version>
-    <hadoop-two.version>2.0.4-alpha</hadoop-two.version>
+    <hadoop-one.version>2.0.0-mr1-cdh4.7.0</hadoop-one.version>
+    <hadoop-two.version>2.0.0-cdh4.7.0</hadoop-two.version>
     <!-- Dependency versions -->
-    <hbase.version>0.94.19</hbase.version>
+    <hbase.version>0.94.15-cdh4.7.0</hbase.version>
     <commons-cli.version>1.2</commons-cli.version>
-    <hadoop.version>1.0.4</hadoop.version>
+    <hadoop.version>2.0.0-cdh4.7.0</hadoop.version>
     <pig.version>0.12.0</pig.version>
     <jackson.version>1.8.8</jackson.version>
     <antlr.version>3.5</antlr.version>
     <log4j.version>1.2.16</log4j.version>
     <slf4j-api.version>1.4.3.jar</slf4j-api.version>
     <slf4j-log4j.version>1.4.3</slf4j-log4j.version>
-    <protobuf-java.version>2.4.0</protobuf-java.version>
+    <protobuf-java.version>2.4.0a</protobuf-java.version>
     <commons-configuration.version>1.6</commons-configuration.version>
     <commons-io.version>2.1</commons-io.version>
     <commons-lang.version>2.5</commons-lang.version>

Change the target version only if you are building for Java 6. CDH4 is built for JRE 6.

           <artifactId>maven-compiler-plugin</artifactId>
           <version>3.0</version>
           <configuration>
-            <source>1.7</source>
-            <target>1.7</target>
+            <source>1.6</source>
+            <target>1.6</target>
           </configuration>

Phoenix building

Once you have made the changes, you are set to build Phoenix. Our CDH4.7.0 cluster uses Hadoop 2, so make sure to activate the hadoop2 profile.

mvn package -DskipTests -Dhadoop.profile=2

If everything goes well, you should see the following result:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix .................................... SUCCESS [2.729s]
[INFO] Phoenix Hadoop Compatibility ...................... SUCCESS [0.882s]
[INFO] Phoenix Core ...................................... SUCCESS [24.040s]
[INFO] Phoenix - Flume ................................... SUCCESS [1.679s]
[INFO] Phoenix - Pig ..................................... SUCCESS [1.741s]
[INFO] Phoenix Hadoop2 Compatibility ..................... SUCCESS [0.200s]
[INFO] Phoenix Assembly .................................. SUCCESS [30.176s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:02.186s
[INFO] Finished at: Mon Dec 15 13:18:48 PET 2014
[INFO] Final Memory: 45M/1330M
[INFO] ------------------------------------------------------------------------

Phoenix Server component deployment

Since Phoenix is a JDBC layer on top of HBase, a server component has to be deployed on every HBase node. The goal is to have the Phoenix server component added to the HBase classpath.

You can achieve this either by copying the server component directly into HBase's lib directory, or by copying the component to an alternative path and then modifying the HBase classpath definition.

For the first approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/cloudera/parcels/CDH/lib/hbase/lib/

Note: In this case CDH is a symlink to the currently active CDH version.

For the second approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/phoenix/

Then add the following line to /etc/hbase/conf/hbase-env.sh

/etc/hbase/conf/hbase-env.sh
export HBASE_CLASSPATH_PREFIX=/opt/phoenix/phoenix-3.2.3-SNAPSHOT-server.jar

Wether you’ve used any of the methods, you have to restart HBase. If you are using Cloudera Manager, restart the HBase service.

To validate that Phoenix is on the HBase classpath, do:

sudo -u hbase hbase classpath | tr ':' '\n' | grep phoenix

Phoenix server validation

Phoenix provides a set of client tools that you can use to validate that the server component is functioning. However, since we are supporting CDH4.7.0, we'll need to make a few changes to these utilities so they use the correct dependencies.

phoenix/bin/sqlline.py:

sqlline.py is a wrapper for the JDBC client; it provides a SQL console interface to HBase through Phoenix.

index f48e527..bf06148 100755
--- a/bin/sqlline.py
+++ b/bin/sqlline.py
@@ -53,7 +53,8 @@ colorSetting = "true"
 if os.name == 'nt':
     colorSetting = "false"
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcerls/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \

phoenix/bin/psql.py:

psql.py is a wrapper tool that can be used to create and populate HBase tables.

index 34a95df..b61fde4 100755
--- a/bin/psql.py
+++ b/bin/psql.py
@@ -34,7 +34,8 @@ else:
 # HBase configuration folder path (where hbase-site.xml reside) for
 # HBase/Phoenix client side property override
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcerls/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " org.apache.phoenix.util.PhoenixRuntime " + args

After you have made these changes, you can test connectivity by issuing the following commands:

./bin/sqlline.py zookeeper.local
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:zookeeper.local none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:zookeeper.local
14/12/16 19:26:10 WARN conf.Configuration: dfs.df.interval is deprecated. Instead, use fs.df.interval
14/12/16 19:26:10 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
14/12/16 19:26:10 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:10 WARN conf.Configuration: topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
14/12/16 19:26:10 WARN conf.Configuration: dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
14/12/16 19:26:10 WARN conf.Configuration: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
14/12/16 19:26:11 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
Connected to: Phoenix (version 3.2)
Driver: PhoenixEmbeddedDriver (version 3.2)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
77/77 (100%) Done
Done
sqlline version 1.1.2
0: jdbc:phoenix:zookeeper.local>

Then you can issue either SQL commands or Phoenix commands.

0: jdbc:phoenix:zookeeper.local> !tables
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
|                TABLE_CAT                 |               TABLE_SCHEM                |                TABLE_NAME                |                TABLE_TYPE |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
| null                                     | SYSTEM                                   | CATALOG                                  | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | SEQUENCE                                 | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | STATS                                    | SYSTEM TABLE              |
| null                                     | null                                     | STOCK_SYMBOL                             | TABLE                     |
| null                                     | null                                     | WEB_STAT                                 | TABLE                     |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
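From here, a quick smoke test of the full read/write path might look like this (table name and values are merely examples; note that Phoenix uses UPSERT instead of INSERT):

0: jdbc:phoenix:zookeeper.local> CREATE TABLE IF NOT EXISTS smoke_test (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:zookeeper.local> UPSERT INTO smoke_test VALUES (1, 'phoenix');
0: jdbc:phoenix:zookeeper.local> SELECT * FROM smoke_test;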

20 December, 2014 03:40PM by Rudy Godoy

Craig Small

WordPress 4.1 for Debian

Release 4.1 of WordPress came out on Friday, so after some work to fit in with the Debian standards, the Debian package 4.1-1 of WordPress will be uploaded shortly. WordPress have also updated their themes with a new theme called twentyfifteen, which is the default theme for WordPress 4.1 onwards.

I have also made some adjustments to the embedded code that WordPress ships. This is usually JavaScript or PHP code from other projects that WordPress carries in their release tarballs. There is a fine line between keeping the WordPress install the same and having to deal with the maintenance of the embedded code. A good example of not using embedded code is php-getid3, where the Debian maintainer has put in some additional patches for a better security fix; a sadder one is jquery, which in the Debian world is so many versions behind. php-snoopy got reverted to embedded code because it's not exactly the same as upstream.

A significant (or invisible, depending on your browser) change is that the mediaelement components no longer use the unmaintainable Silverlight and Flash plugins, which is the same way the libjs-mediaelement package works. In fact the code IS from that package.

dh_linktree

As I was looking into the embedded js/php code in WordPress, I also had to look into how the previous maintainer kept all the versions in order without some horrible mess of patch files and symlinks. The answer was dh_linktree. This program plugs into the standard debhelper rules file and can basically use symlinks in the package so that the standard Debian versions of files are used. It's a bit cleverer than plain symlinks in that you can say to use the link always, or only if the files are the same.
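As a sketch (the file name and paths here are hypothetical, not lifted from the actual WordPress packaging), the configuration is a debian/package.linktrees file with one action and one source/destination pair per line:

$ cat debian/wordpress.linktrees
deduplicate usr/share/javascript/jquery usr/share/wordpress/wp-includes/js/jquery
replace usr/share/php/getid3 usr/share/wordpress/wp-includes/ID3

As I understand the actions, 'deduplicate' only symlinks a file while it is identical to Debian's copy, while 'replace' always switches to the Debian version.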

If you need to get some embedded code out of your Debian packages, have a look into it. It might save you a lot of angst or hand-crafted rules files.

20 December, 2014 04:53AM by Craig

December 19, 2014

hackergotchi for Wouter Verhelst

Wouter Verhelst

joytest UI improvements

After yesterday's late night accomplishments, today I fixed up the UI of joytest a bit. It's still not quite what I think it should look like, but at least it's actually usable with a 27-axis, 19-button "joystick" (read: a PS3 controller). Things may disappear off the edge of the window, but you can scroll towards them. Also, I removed the names of the buttons and axes from the window, and installed them as tooltips instead. Few people will be interested in the factoid that "button 1" is a "BaseBtn4", anyway.

The result now looks like this:

If you plug in a new joystick, or remove one from the system, then as soon as udev finishes up creating the necessary device node, joytest will show the joystick (by name) in the treeview to the left. Clicking on a joystick will show that joystick's data to the right. When one pushes a button, the relevant checkbox will be selected; and when one moves an axis, the numbers will start changing.

I really should have some widget to actually show the axis position, rather than some boring numbers. Not sure how to do that.

19 December, 2014 10:21PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rocker is now the official R image for Docker

Something happened a little while ago which we did not have time to commemorate properly. Our Rocker image for R is now the official R image for Docker itself. So getting R (via Docker) is now as simple as saying docker pull r-base.

This particular container is essentially just the standard r-base Debian package for R (which is one of a few I maintain there) plus a minimal set of extras. This r-base image forms the basis of our other containers, such as the rather popular r-studio container wrapping the excellent RStudio Server.
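For instance, getting to an R prompt is two commands (a minimal sketch; the second relies on the image's default command launching R):

$ docker pull r-base             # fetch the official R image
$ docker run --rm -ti r-base     # start an interactive R session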

A lot of work went into this. Carl and I also got a tremendous amount of help from the good folks at Docker. Details are as always at the Rocker repo at GitHub.

Docker itself continues to make great strides, and it has been great fun to help it along. With this post I achieved another goal: blog about Docker with an image not containing shipping containers. Just kidding.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 December, 2014 08:51PM

hackergotchi for Gregor Herrmann

Gregor Herrmann

GDAC 2014/19

yesterday I learned that I can go to FOSDEM in early 2015 because a conflicting event was cancelled. that makes me happy because FOSDEM is great for seeing other debian folks, & especially for meeting friends.


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

19 December, 2014 07:07PM

Richard Hartmann

Release Critical Bug report for Week 51

Real life has been interesting as of late; as you can see, I didn't post bug stats last week. If you have specific data from last Friday, please let me know and I will update.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1095 (Including 179 bugs affecting key packages)
    • Affecting Jessie: 189 (key packages: 117) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 134 (key packages: 90) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 32 bugs are tagged 'patch'. (key packages: 24) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 13 bugs are marked as done, but still affect unstable. (key packages: 9) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 89 bugs are neither tagged patch, nor marked done. (key packages: 57) Help make a first step towards resolution!
      • Affecting Jessie only: 55 (key packages: 27) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 29 bugs are in packages that are unblocked by the release team. (key packages: 11)
        • 26 bugs are in packages that are not unblocked. (key packages: 16)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99)
1 93 (60+33) 287 (171+116)
2 82 (46+36) 271 (162+109)
3 25 (15+10) 249 (165+84)
4 14 (8+6) 244 (176+68)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

19 December, 2014 04:25PM by Richard 'RichiH' Hartmann

hackergotchi for Steve Kemp

Steve Kemp

Switched to using attic for backups

Even though seeing the word attic reminds me too much of leaking roofs and CVS, I've switched to using the attic backup tool.

I want a simple system which will take incremental backups, perform duplication-elimination (to avoid taking too much space), support encryption, and be fast.

I stopped using backup2l because the .tar.gz files were too annoying, and it was too slow. I started using obnam because I respect Lars and his exceptionally thorough testing-regime, but had to stop using it when things started getting "too slow".

I'll document the usage/installation in the future. For the moment the only annoyance is that it is contained in the Jessie archive, not the Wheezy one. Right now only 2/19 of my hosts are Jessie.
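In the meantime, here is a rough sketch of the kind of usage I mean (repository path, archive naming and retention numbers are merely examples; check the attic documentation for the details):

$ attic init --encryption=passphrase /backup/attic.repo
$ attic create --stats /backup/attic.repo::$(date +%Y-%m-%d) /home /etc
$ attic prune --keep-daily=7 --keep-weekly=4 /backup/attic.repo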

19 December, 2014 01:51PM

Petter Reinholdtsen

Of course USA loses in cyber war - NSA and friends made sure it would happen

So, Sony caved in (according to Rob Lowe) and demonstrated that America lost its first cyberwar (according to Newt Gingrich). It should not surprise anyone, after the whistle-blower Edward Snowden documented that the government of the USA and its allies have for many years done their best to make sure the technology used by their citizens is filled with security holes allowing the secret services to spy on their own populations. No one in their right mind could believe that the ability to snoop on people all over the globe would only be used by the personnel authorized to do so by the president of the United States of America. If the capabilities are there, they will be used by friend and foe alike, and now they are being used to bring Sony to its knees.

I doubt it will be a lesson learned, and expect the USA to lose its next cyber war too, given how eager the western intelligence communities (and probably the non-western ones too, but they are less in the news) seem to be to continue their current dragnet surveillance practice.

There is a reason why China and others are trying to move away from Windows to Linux and other alternatives, and it is not to avoid sending their hard-earned dollars to the Cayman Islands (or whatever tax haven Microsoft is using these days to collect the majority of its income). :)

19 December, 2014 12:10PM

hackergotchi for Kenshi Muto

Kenshi Muto

smart "apt" command

While evaluating Jessie, I found the 'apt' command and noticed that it is pretty good for novice and everyday users.

Usage: apt [options] command

CLI for apt.
Basic commands: 
 list - list packages based on package names
 search - search in package descriptions
 show - show package details

 update - update list of available packages

 install - install packages
 remove  - remove packages

 upgrade - upgrade the system by installing/upgrading packages
 full-upgrade - upgrade the system by removing/installing/upgrading packages

 edit-sources - edit the source information file

'apt list' is like a combination of 'dpkg -l' + 'apt-cache pkgnames'. 'apt search' is a bit slower than 'apt-cache search' but provides more useful information. 'apt show' formats byte sizes and hides some fields that only experts need. install/remove/upgrade/full-upgrade are mostly the same as in apt-get. 'apt edit-sources' opens an editor and checks the integrity of the file.
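For example, a quick session might look like this (the package name is arbitrary):

$ apt list --upgradable      # packages with a newer version available
$ apt search mail server     # like apt-cache search, with nicer output
$ apt show hello             # human-friendly package details
$ sudo apt install hello     # like apt-get install, plus a progress bar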

So, I'd like to recommend the 'apt' command to Debian users.

Well, why did I write this entry...? ;) I found a mistranslation I had made in apt's ja.po. Because it is a critical mistranslation (it will confuse Japanese users), I strongly want to get it fixed.

Dear apt deity maintainers, could you consider updating apt for Jessie? (#772678. FYI there are other translation updates also: #772913, #771982, and #771967)

19 December, 2014 10:12AM by Kenshi Muto

Enrico Zini

upgrade-encrypted-cyanogenmod

Upgrade Cyanogenmod with an encrypted phone

Cyanogenmod found an update, downloaded it, then rebooted to install it - and nothing happened. It turns out that the update procedure cannot work if the zip file to install is on encrypted media, so a workaround is to move the zip onto unencrypted external storage.

As far as I know, my Nexus 4 has no unencrypted external storage.

This is how I managed to upgrade it; I write it here so I can find it next time:

  1. enable USB debugging
  2. adb pull /cmupdater/cm-11-20141115-SNAPSHOT-M12-mako.zip
  3. adb reboot recovery
  4. choose "install zip from sideload"
  5. adb sideload cm-11-20141115-SNAPSHOT-M12-mako.zip

19 December, 2014 09:21AM

December 18, 2014

hackergotchi for Wouter Verhelst

Wouter Verhelst

Introducing libjoy

I've owned a Logitech Wingman Gamepad Extreme since pretty much forever, and although it's been battered over the years, it's still mostly functional. As a gamepad, it has 10 buttons. What's special about it, though, is that the device also has a mode in which a gravity sensor kicks in and produces two extra axes, allowing me to pretend I'm really talking to a joystick. It looks a bit weird though, since you end up playing your games by wobbling the gamepad around a bit.

About 10 years ago, I first learned how to write GObjects by writing a GObject-based joystick API. Unfortunately, I lost the code at some point due to an overzealous rm -rf call. I had planned to rewrite it, but that never really happened.

About a year back, I needed to write a user interface for a customer where a joystick would be a major part of the interaction. The code there was written in Qt, so I wrote an event-based joystick API in Qt. As it happened, I also noticed that jstest would output names for the actual buttons and axes; I had never noticed this because, with my 10 buttons and 4 axes producing a lot of output by default, the jstest program would just scroll the names off my screen whenever I plugged the device in. But the names are there, and it's not too difficult.

Refreshing my memory on the joystick API made me remember how much fun it is, and I wrote the beginnings of what I (at the time) called "libgjs", for "GObject JoyStick". I didn't really finish it though, until today. I did notice in the mean time that someone else released GObject bindings for javascript and also called that gjs, so in the interest of avoiding confusion I decided to rename my library to libjoy. Not only will this allow me all kinds of interesting puns like "today I am releasing more joy", it also makes for a more compact API (compare joy_stick_open() against gjs_joystick_open()).

The library also comes with a libjoy-gtk that creates a GtkListStore* which is automatically updated as joysticks are added to and removed from the system; and a joytest program, a graphical joystick test program which also serves as an example of how to use the API.

still TODO:

  • Clean up the API a bit. There's a bit too much use of GError in there.
  • Improve the UI. I suck at interface design. Patches are welcome.
  • Differentiate between JS_EVENT_INIT kernel-level events, and normal events.
  • Improve the documentation to the extent that gtk-doc (and, thus, GObject-Introspection) will work.

What's there is functional, though.

Update: if you're going to talk about code, it's usually a good idea to link to said code. Thanks, Emanuele, for pointing that out ;-)

18 December, 2014 11:29PM

hackergotchi for Gregor Herrmann

Gregor Herrmann

GDAC 2014/18

what constantly fascinates me in debian is that people sit at home, have an idea, work on it, & then suddenly present it to an unsuspecting public; all without prior announcements or discussions, & totally apart from any hot discussion-du-jour. the last example I encountered & tried out just now is the option to edit source packages online & submit patches. - I hope we as a project can keep up with this creativity!


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

18 December, 2014 10:01PM

John Goerzen

Aerial Photos: Our Little House on the Prairie

DJI00812

This was my first attempt to send up the quadcopter in winter. It’s challenging to take good photos of a snowy landscape anyway. Add to that the fact that the camera is flying, and it’s cold, which is hard on batteries and motors. I was rather amazed at how well it did!

DJI00816

DJI00830

18 December, 2014 06:37PM by John Goerzen

hackergotchi for Mario Lang

Mario Lang

deluXbreed #2 is out!

The third installment of my crossbreed digital mix podcast is out!

This time, I am featuring Harder & Louder and tracks from Behind the Machine and the recently released Remixes.

  1. Apolloud - Nagazaki
  2. Apolloud - Hiroshima
  3. SA+AN - Darksiders
  4. Im Colapsed - Cleaning 8
  5. Micromakine & Switch Technique - Ascension
  6. Micromakine - Cyberman (Dither Remix)
  7. Micromakine - So Good! (Synapse Remix)

How was DarkCast born and how is it done?

I always loved 175BPM music. It is an old thing that is not going away soon :-). I recently found that there is quite an active culture going on, at least on BandCamp. But single tracks are just that; not really fun to listen to, in my opinion. This sort of music needs to be mixed to be fun. In the past, I used to have most tracks I like/love on vinyl, so I did some real-world vinyl mixing myself. But these days, most fun music is only available digitally, at least easily. Some people still do vinyl releases, but they are actually rare.

So for my personal enjoyment, I started to digitally mix tracks I really love, such that I can listen to them without "interruption". And since I am an iOS user since three years now, using the podcast format to get stuff onto my devices was quite a natural choice.

I use SoX and a very small shell script to create these mixes. Here is a pseudo-template:

# The outer sox mixes its two inputs together ("--combine mix-power");
# each inner sox lines a track up with "delay" (one value per channel)
# and beat-matches it with "speed", piping raw audio onwards via -p.
sox --combine mix-power \
"|sox \"|sox 1.flac -p\" \"|sox 3.flac -p speed 0.987 delay 2:28.31 2:28.31\" -p" \
"|sox \"|sox 2.flac -p delay 2:34.1 2:34.1\" -p" \
mix.flac

As you can imagine, it is quite a bit of fiddling to get these scripts to do what you want. But it is a non-graphical method of getting things done. If you know of a better tool, possibly with a bit of real-time control, to get the same job done without having to resort to a damn GUI, let me know.

18 December, 2014 09:47AM by Mario Lang

December 17, 2014

hackergotchi for Gregor Herrmann

Gregor Herrmann

GDAC 2014/17

my list of IRC channels (& the list of people I'm following on micro-blogging platforms) has a heavy debian bias. a thing I noticed today is that I had read (or at least: seen) messages in 6 languages (English, German, Castilian, Catalan, French, Italian). – thanks guys for the free language courses :) (& the opportunity to at least catch a glimpse into other cultures)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

17 December, 2014 10:30PM

hackergotchi for Keith Packard

Keith Packard

MST-monitors

Multi-Stream Transport 4k Monitors and X

I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux.

Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then...

Good Grief! What Is My Computer Doing!

Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall, but very narrow screen greet you.

Welcome to MST island.

In order to drive these giant new panels at full speed, there isn't enough bandwidth in the display hardware to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack.

This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding costly redesign of expensive silicon, at least that's my surmise.

In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of Display Port Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables-worth of data into a single cable.

I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck.

Turning Two Back Into One

We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:

  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.

  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.

  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.

  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.

  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.

  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.

  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.

All Our APIs Suck

Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion -- pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible.

It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects:

  • Subconnectors
  • Connectors
  • Outputs
  • Video modes
  • Crtcs
  • Encoders

Each of these objects exposes intimate details about the underlying hardware -- which of them can work together, and which cannot; what kinds of limits are there on data rates and formats; and pixel-level timing details about blanking periods and refresh rates.

To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible.

The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode.

Another problem -- we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related, and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode.

Every single one of our APIs exposes enough of this information to be dangerous.

Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly?

Dave's Tiling Property

Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work.

Then, he changed the RandR Xinerama code to also parse the TILE properties and to fix up the data seen by applications from that.

This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today.

Adding RandR Monitors

RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem.

Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, physical size out of the associated outputs and the position from the CRTC itself.

Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about.

The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.

Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solving the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control.

What I've done is create a new RandR datatype, the "Monitor", which defines a rectangular region of the screen that is treated as a single coherent area. Each monitor has the following data:

  • Name. This provides some way to identify the Monitor to the user. I'm using X atoms for this as it made a bunch of things easier.

  • Primary boolean. This indicates whether the monitor is to be considered the "primary" monitor, suitable for placing toolbars and menus.

  • Pixel geometry (x, y, width, height). These locate the region within the screen and define the pixel size.

  • Physical geometry (width-in-millimeters, height-in-millimeters). These let the user know how big the pixels will appear in this region.

  • List of outputs. (I think this is the clever bit)

There are three requests to define, delete and list monitors. And that's it.

Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs.

So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible.

The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well.

Current Status

Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the RandR version through an environment variable, set that to 1.2 and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present as those provide the same info that it was pulling out of RandR's CRTCs.

KDE appears to still use Xinerama data for this, so it "just works".

Where's the code

As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:

git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors
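To experiment from the command line, the xrandr tool in the repo above grows matching options. A rough sketch of the intended usage (option names and output format here are provisional, so treat them as assumptions rather than a stable interface):

$ xrandr --listmonitors
Monitors: 1
 0: +*DP-1 3840/708x2160/398+0+0  DP-1
$ # merge the two halves of an MST panel into one logical monitor;
$ # 'auto' makes the geometry track the bounding box of the outputs
$ xrandr --setmonitor 4kpanel auto DP-1,DP-2
$ xrandr --delmonitor 4kpanel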

RandR protocol changes

Here's the new sections added to randrproto.txt

                  ❧❧❧❧❧❧❧❧❧❧❧

1.5. Introduction to version 1.5 of the extension

Version 1.5 adds monitors

 • A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.

 • Each Monitor is associated with a list of outputs (which may be
   empty).

 • When clients define monitors, the associated outputs are removed from
   existing Monitors. If removing the output causes the list for that
   monitor to become empty, that monitor will be deleted.

 • For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.

 • When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with them

This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.

1.5.1. Relationship between Monitors and Xinerama

Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.

                  ❧❧❧❧❧❧❧❧❧❧❧

5.6. Protocol Types added in version 1.5 of the extension

MONITORINFO { name: ATOM
          primary: BOOL
          automatic: BOOL
          x: INT16
          y: INT16
          width: CARD16
          height: CARD16
          width-in-millimeters: CARD32
          height-in-millimeters: CARD32
          outputs: LISTofOUTPUT }

                  ❧❧❧❧❧❧❧❧❧❧❧

7.5. Extension Requests added in version 1.5 of the extension.

┌───
    RRGetMonitors
    window : WINDOW
     ▶
    timestamp: TIMESTAMP
    monitors: LISTofMONITORINFO
└───
    Errors: Window

    Returns the list of Monitors for the screen containing
    'window'.

    'timestamp' indicates the server time when the list of
    monitors last changed.

┌───
    RRSetMonitor
    window : WINDOW
    info: MONITORINFO
└───
    Errors: Window, Output, Atom, Value

    Create a new monitor. Any existing Monitor of the same name is deleted.

    'name' must be a valid atom or an Atom error results.

    'name' must not match the name of any Output on the screen, or
    a Value error results.

    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to
    be the bounding box of the geometry of the active CRTCs
    associated with those outputs.

    If 'name' matches an existing Monitor on the screen, the
    existing one will be deleted as if RRDeleteMonitor were called.

    Each output in 'info.outputs' is removed from all
    pre-existing Monitors. If removing the output causes the list of
    outputs for that Monitor to become empty, then that Monitor will
    be deleted as if RRDeleteMonitor were called.

    Only one monitor per screen may be primary. If 'info.primary'
    is true, then the primary value will be set to false on all
    other monitors on the screen.

    RRSetMonitor generates a ConfigureNotify event on the root
    window of the screen.

┌───
    RRDeleteMonitor
    window : WINDOW
    name: ATOM
└───
    Errors: Window, Atom, Value

    Deletes the named Monitor.

    'name' must be a valid atom or an Atom error results.

    'name' must match the name of a Monitor on the screen, or a
    Value error results.

    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.

                  ❧❧❧❧❧❧❧❧❧❧❧

17 December, 2014 09:36AM

December 16, 2014

hackergotchi for Gregor Herrmann

Gregor Herrmann

GDAC 2014/16

today I met with a young friend (attending the final year of technical high school) for coffee. he's been exploring free software for a year or two, & he's been running debian jessie on his laptop for some time. it's really amazing to see how exciting this journey into the free software cosmos is for him; & it's good to see that linux & debian are not only appealing to greybeards like me :)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

16 December, 2014 10:46PM

Raphael Geissert

Editing Debian online with sources.debian.net

How cool would it be to fix that one bug you just found without having to download a source package? And without leaving your browser?

Inspired by github's online code editing, during Debconf 14 I worked on integrating an online editor on debsources (the software behind sources.debian.net). Long story short: it is available today, for users of chromium (or anything supporting chrome extensions).

After installing the editor for sources.debian.net extension, go straight to sources.debian.net and enjoy!

Go from simple debsources:

[screenshot: the plain debsources source view]

To debsources on steroids:

[screenshot: the same view with the Ace editor enabled]

All in all, it brings:
  • Online editing of all of Debian
  • In-browser patch generation, available for download
  • Downloading the modified file
  • Sending the patch to the BTS
  • Syntax highlighting for over 120 file formats!
  • More hidden gems from Ace editor that can be integrated thanks to patches from you

Clone it or fork it:
git clone https://github.com/rgeissert/ace-sourced.n.git

For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on edit, make the changes, click on email patch. Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad; head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than 5 minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman, who helped debug some performance issues in the early implementations of the integration and who worked on the packaging of the Ace editor.

16 December, 2014 08:00AM by Raphael Geissert (noreply@blogger.com)

December 15, 2014

hackergotchi for Gustavo Noronha Silva

Gustavo Noronha Silva

Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find that. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

15 December, 2014 11:20PM by kov

hackergotchi for Holger Levsen

Holger Levsen

20121214-not-everybody-is-equal

We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here (in this white male community...) as food for thought. What else is invisible, and for whom? Or hardly visible, or distorted, or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

15 December, 2014 07:25PM

hackergotchi for Thomas Goirand

Thomas Goirand

Supporting 3 init systems in OpenStack packages

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn't hard, and generating the init scripts / Upstart jobs / systemd service files with a template system is a lot easier than I previously thought.

As always when writing this kind of blog post, I do expect that some will not like what I did. But that's the point: give me your opinion in a constructive way (please be polite even if you don't like what you see… I have had to read harsh comments too many times), and I'll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don’t believe what I wrote can be generalized to all of the Debian archive. It’s just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it’s clear that many, and especially the most advanced one, may have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I though it was a good idea to support all the “main” init system: sysv-rc, Upstart and systemd. Though I have counted (for the sake of being exact in this blog) : OpenStack in Debian contains currently 64 init scripts to run daemons in total. That’s quite a lot. A way too much to just write them, all by hand. Though that’s what I was doing for the last years… until this the end of this last summer!

So, doing it all by hand, I first started implementing Upstart. Its support was there only when building on Ubuntu (which isn't the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release which came out this October. He did that last summer, early enough that we didn't expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke “/etc/init.d/keystone start-systemd”, which was still using start-stop-daemon. Yes, that's not perfect, and it's better to use systemd foreground process handling, but at least we had a single place to write the startup scripts, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step backward to look at the whole picture of the sysv-rc scripts, and saw the mess, with all the tiny differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID, creating /var/lib/project, /var/run/project and so on…).

Last, this December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now the systemd unit file still invokes the init script, but it no longer uses start-stop-daemon, no PID file is involved, and daemons run as systemd foreground processes. Finally, the daemons' service files are also activated on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of all debian/*.init.in scripts. I started trying to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here's the init script (LSB header stripped out):

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining the name of the daemon, the name of the project it belongs to (e.g. nova, cinder, etc.), and a long description. There are of course much more complicated init scripts (see the one for neutron-server in the Debian archive for example), but the vast majority only need the above.
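As a sketch of a slightly less minimal case, a hypothetical daemon could override the template defaults. The variable names below are the ones the template underneath actually honors; the daemon and its paths are invented for illustration:

DESC="OpenStack Example Worker"
PROJECT_NAME=example
NAME=${PROJECT_NAME}-worker
# Optional overrides picked up by the shared template:
DAEMON=/usr/sbin/example-worker        # default: /usr/bin/${NAME}
SYSTEM_USER=example                    # default: ${PROJECT_NAME}
CONFIG_FILE=/etc/example/worker.conf   # default: /etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG=1  # skip appending --config-file to DAEMON_ARGS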

Here’s the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.

# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
	DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
	STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
	CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
	DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
	for i in lock run log lib ; do
		mkdir -p /var/$i/${PROJECT_NAME}
		chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
	done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
	start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
			--test > /dev/null || return 1
	start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
			-- $DAEMON_ARGS || return 2
}

do_stop() {
	start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
	RETVAL=$?
	rm -f $PIDFILE
	return "$RETVAL"
}

do_systemd_start() {
	exec $DAEMON $DAEMON_ARGS
}

case "$1" in
start)
	init_is_upstart > /dev/null 2>&1 && exit 1
	log_daemon_msg "Starting $DESC" "$NAME"
	do_start
	case $? in
		0|1) log_end_msg 0 ;;
		2) log_end_msg 1 ;;
	esac
;;
stop)
	init_is_upstart > /dev/null 2>&1 && exit 0
	log_daemon_msg "Stopping $DESC" "$NAME"
	do_stop
	case $? in
		0|1) log_end_msg 0 ;;
		2) log_end_msg 1 ;;
	esac
;;
status)
	status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
systemd-start)
	do_systemd_start
;;  
restart|force-reload)
	init_is_upstart > /dev/null 2>&1 && exit 1
	log_daemon_msg "Restarting $DESC" "$NAME"
	do_stop
	case $? in
	0|1)
		do_start
		case $? in
			0) log_end_msg 0 ;;
			1) log_end_msg 1 ;; # Old process is still running
			*) log_end_msg 1 ;; # Failed to start
		esac
	;;
	*) log_end_msg 1 ;; # Failed to stop
	esac
;;
*)
	echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
	exit 3
;;
esac

exit 0

Nothing particularly fancy here… You'll notice that it's really OpenStack-centric (see the LOGFILE and CONFIG_FILE things…). You may also have noticed the call to “init_is_upstart”, which is needed for Upstart support. I'm not sure if it's in the correct place in the init script. Should I put it at the top of the script? Was I right with the exit values for it? Please send me your comments…

Then I thought about generalizing all of this, because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in script and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). The fun part is that, instead of calculating everything at runtime as the sysv-rc script does, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here's pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#	AFTER="After="
#	for i in ${SHOULD_START} ; do
#		AFTER="${AFTER}${i}.service "
#	done
#fi

if [ -z "${DAEMON}" ] ; then
        DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
	STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
	CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
	for i in lock run log lib ; do
		mkdir -p /var/\$i/${PROJECT_NAME}
		chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
	done
end script

script
	[ -x \"${DAEMON}\" ] || exit 0
	DAEMON_ARGS=\"${DAEMON_ARGS}\"
	[ -r /etc/default/openstack ] && . /etc/default/openstack
	[ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
	[ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
	[ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"

	exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
		${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
		--exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing I don't know how to do is implement the Should-Start / Should-Stop headers in an Upstart job. Can anyone shoot me a mail and tell me the solution?

Then I wanted to add support for systemd. Here we cheated, since we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. The generator is also much smaller than the Upstart one. Here, though, I could implement the “After=” directive, corresponding to the Should-Start header:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
	AFTER="After="
	for i in ${SHOULD_START} ; do
		AFTER="${AFTER}${i}.service "
	done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}

As you can see, it's calling “${SCRIPTNAME} systemd-start” (that is, the init script itself), which isn't great. I'd be happy to have comments from systemd users / maintainers on how to fix it and make it better.
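To make this concrete, here is roughly how the generator would be driven by hand against the nova-api template shown earlier (the build normally does this from debian/rules, as shown in the next section):

# sources debian/nova-api.init.in, then writes debian/nova-api.service
pkgos-gen-systemd-unit debian/nova-api.init.in
cat debian/nova-api.service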

Integrating in debian/rules

To integrate with the Debian package build system, we only had to write this:

override_dh_installinit:
	# Create the init scripts from the template
	for i in `ls -1 debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in//` ; \
		cp $$i $$MYINIT.init ; \
		cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
		pkgos-gen-systemd-unit $$i ; \
	done
	# If there's an upstart.in file, use that one instead of the generated one
	for i in `ls -1 debian/*.upstart.in` ; do \
		MYPKG=`echo $$i | sed s/.upstart.in//` ; \
		cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
	done
	# Generate the upstart job if there's no already existing .upstart.in
	for i in `ls debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
		if ! [ -e $$MYINIT ] ; then \
			pkgos-gen-upstart-job $$i ; \
		fi \
	done
	dh_installinit --error-handler=true
	# Generate the systemd unit file
	# Note: because dh_systemd_enable is called by the
	# dh sequencer *before* dh_installinit, we have
	# to process it manually.
	for i in `ls debian/*.init.in` ; do \
		pkgos-gen-systemd-unit $$i ; \
		MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
		MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
		dh_systemd_enable $$MYSERVICE ; \
	done

As you can see, in the more complicated cases it's possible to ship a debian/*.upstart.in and not use the templating system (I needed that mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution. But I'm convinced that it answers our own needs as the OpenStack maintainers in Debian. There's a lot of room for improvement (like implementing Should-Start in the Upstart jobs, or no longer calling the sysv-rc script from the systemd units), but moving to templates and generated scripts was a very good move: the init scripts are way easier to maintain now, in a much more unified way. Even though I'm not completely satisfied with the systemd and Upstart implementations, there's already a huge improvement in the maintainability of the sysv-rc scripts.

Last, and again: please send your comments and help improve the above! :)

15 December, 2014 08:15AM by Goirand Thomas

December 14, 2014

hackergotchi for Mario Lang

Mario Lang

Data-binding MusicXML

My long-term free software project (Braille Music Compiler) just produced some offspring! xsdcxx-musicxml is now available on GitHub.

I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.

During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.

If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.

For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
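For instance, something along these lines should do; the repository URL is my assumption based on the GitHub link above:

# add the library as a submodule of your project
git submodule add https://github.com/mlang/xsdcxx-musicxml.git xsdcxx-musicxml
# then add to CMakeLists.txt:  add_subdirectory(xsdcxx-musicxml)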

Finally, if you want to see how this library can be put to use: The MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.

14 December, 2014 08:30PM by Mario Lang

Enrico Zini

html5-sse

HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to Javascript. This is the bit of code that runs the script and turns the output into a stream of events:

import fcntl
import os
import select

def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                fds.pop(idx)
                # pop the buffer and type together so the three lists
                # stay aligned, and flush any buffered partial last line
                leftover = bufs.pop(idx)
                evtype = types.pop(idx)
                if len(leftover) != 0:
                    yield evtype, leftover
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))

        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");

    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
      //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
      out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
      out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
      if (+e.data == 0)
          out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
      else
          out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
      event_source.close();
      $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
      // There is an annoyance here: e does not contain any kind of error
      // message.
      out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
      console.error("EventSource error:", arguments);
      event_source.close();
      $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}

<h1>{% trans "Processing..." %}</h1>

<div id="output">
</div>

{% endblock %}

It's simple enough, it seems reasonably well supported (besides needing a polyfill for IE) and, astonishingly, it even works!
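Since it is all just text/event-stream over HTTP, the stream is also easy to eyeball from a terminal; a quick sketch with curl (the URL is invented, adjust it to your urlconf):

# -N disables buffering, so events show up as they arrive
curl -N -H 'Accept: text/event-stream' http://localhost:8000/session/myhook/run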

14 December, 2014 03:32PM

Daniel Leidert

Issues with Server4You vServer running Debian Stable (Wheezy)

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any device in backup mode and first install a fresh Debian copy using debootstrap over the provided image, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update although, looking at the system, I saw updates waiting. I checked with

# run-parts --list /etc/cron.daily

and apt was not listed although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems, for whatever reason, the image authors have tampered with file permissions, and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything under /etc/cron*, but it still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations. Those are easy to find using debsums. I'm speaking about file permissions and ownership.
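A quick way to spot (and then fix) the non-executable cron scripts mentioned above, as a sketch:

# list cron scripts lacking the executable bit; run-parts skips these
for d in /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly ; do
	find "$d" -maxdepth 1 -type f ! -perm -100
done
# once verified, restore the executable bit, e.g.:
# chmod 755 /etc/cron.daily/apt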

Now there seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all packages installed on the system, install them into a chroot environment, and collect all permission and ownership information from that fresh system. Then compare the file permissions/ownership of the installed system against this list. Not fun.
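A rough sketch of that comparison, assuming a freshly debootstrapped chroot with the same package set (file names and paths are illustrative):

# record mode, owner and group of the interesting trees on the suspect system
find /bin /sbin /usr /etc -xdev -printf '%m %u %g %p\n' | sort -k4 > system.lst
# ... run the same find inside the fresh chroot, producing fresh.lst ...
# differences hint at tampered permissions or ownership
diff fresh.lst system.lst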

init from testing / upstart on hold

Today I discovered that apt-get wanted to install the init package. Of course I was curious why unattended-upgrades hadn't already done so. Turns out, init is only in testing/unstable and is essential there. I purged it, but apt-get keeps bugging me to install this package. I really began to wonder what is going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • sets APT::Default-Release "stable";

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating. (not reproducible anymore)

Now I tried to debug this:

# apt-get -o  Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Eh, upstart?

# apt-cache policy upstart
upstart:
Installed: 1.6.1-1
Candidate: 1.6.1-1
Version table:
1.11-5 0
500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
*** 1.6.1-1 0
990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
100 /var/lib/dpkg/status
# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=============================-===================-===================-===============================================================
hi upstart 1.6.1-1 amd64 event-based init daemon

Ok, at least one package is on hold. This is another questionable customization, though in this case an easy one to fix. But I still don't understand apt-get's behaviour here and how it differs from aptitude's. Can someone please enlighten me?
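For the record, clearing such a hold is a one-liner; either form should work on wheezy:

# via apt:
apt-mark unhold upstart
# or via dpkg selections:
echo "upstart install" | dpkg --set-selections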

Customized files

This isn't really an issue, but just for completeness: several files have been customized. debsums easily shows which ones:

# debsums -ac
I don't have the original list anymore - please check yourself

14 December, 2014 01:58PM by Daniel Leidert (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 0.0.4.20141212

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

The FOAAS backend gets updated in spurts, and yesterday a few pull requests were integrated, including one from yours truly. So with that it was time for an update to rfoaas. As the version number upstream did not change (bad, bad practice) I appended the date to the version number.

CRANberries also provides a diff to the previous release. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 December, 2014 12:20AM

December 13, 2014

hackergotchi for Holger Levsen

Holger Levsen

20141213-on-having-fun-in-debian

On having fun in Debian

(Thanks to cardboard-crack.com for this awesome comic!)

13 December, 2014 04:11PM

hackergotchi for Keith Packard

Keith Packard

present-compositor

Present and Compositors

The current Present extension is pretty unfriendly to compositing managers, causing an extra frame of latency between the applications operation and the scanout buffer. Here's how I'm fixing that.

An extra frame of lag

When an application uses PresentPixmap, that operation is generally delayed until the next vblank interval. When using X without compositing, this ensures that the operation will get started in the vblank interval, and, if the rendering operation is quick enough, you'll get the frame presented without any tearing.

When using a compositing manager, the operation is still delayed until the vblank interval. That means that the CopyArea and subsequent Damage event generation don't occur until the display has already started the next frame. The compositing manager receives the damage event and constructs a new frame, but it also wants to avoid tearing, so that frame won't get displayed immediately; instead it'll get delayed until the next frame, introducing the lag.

Copy now, complete later

While away from the keyboard this morning, I had a sudden idea -- what if we performed the CopyArea and generated the Damage right when the PresentPixmap request was executed, but delayed the PresentComplete event until vblank happened?

With the contents updated and damage delivered, the compositing manager can immediately start constructing a new scene for the upcoming frame. When that is complete, it can also use PresentPixmap (either directly or through OpenGL) to queue the screen update.

If it's fast enough, that will all happen before vblank and the application contents will actually appear at the desired time.

Now, at the appointed vblank time, the PresentComplete event will get delivered to the client, telling it that the operation has finished and that its contents are now on the screen. If the compositing manager was quick, this event won't even be a lie.

We'll be lying less often

Right now, the CopyArea, Damage and PresentComplete operations all happen after the vblank has passed. As the compositing manager delays the screen update until the next vblank, then every single PresentComplete event will have the wrong UST/MSC values in it.

With the CopyArea happening immediately, we've a pretty good chance that the compositing manager will get the application contents up on the screen at the target time. When this happens, the PresentComplete event will have the correct values in it.

How can we do better?

The only way to do better is to have the PresentComplete event generated when the compositing manager displays the frame. I've talked about how that should work, but it's a bit twisty, and will require changes in the compositing manager to report the association between their PresentPixmap request and the applications' PresentPixmap requests.

Where's the code

I've got a set of three patches, two of which restructure the existing code without changing any behavior and a final patch which adds this improvement. Comments and review are encouraged, as always!

git://people.freedesktop.org/~keithp/xserver.git present-compositor

13 December, 2014 08:28AM

December 12, 2014

hackergotchi for Thorsten Glaser

Thorsten Glaser

WTF is Jessie; PA4 paper size

My personal APT repository now has a jessie suite – currently just a clone of the sid suite, but this way people can get on the correct “upgrade channel” already.

Besides that, the usual small updates to my metapackages, bugfixes, etc. – You might have noticed that it’s now on a (hopefully permanent) location. I’ve put a donated eee-pc from my father to good use and am now running a Debian system at home. (Fun, as I’m emeritus now, officially, and haven’t had one during my time as active uploading DD.) I’ve created a couple of cowbuilder chroots (pbuilderrc to achieve that included in the repo) and can build packages, but for i386 only (amd64 is still done on the x32 desktop at work), but, more importantly, I can build, sign and publish the repo, so it may grow. (popcon data is interesting. More than double the amount of machines I have installed that stuff on.)

Update: I’ve started writing a NEWS file and cobbled together an RSS 2.0 feed from that… still plaintext content, but at least signalling in feedreaders upon updates.


Installing gimp and inkscape, I'm asked for a default paper size by libpaper1. PA4 is still not an option; I wonder why. I also haven't managed to get MirPorts GNU groff and Artifex Ghostscript to use that paper size, so the various PDF manpages I produce are still using DIN ISO A4, rendering e.g. Mexicans unable to print them. Help welcome.

12 December, 2014 11:56PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

a10n for l10n

The abbreviated title above means "Appreciation for Localization" :)

I wanted to say a word of thanks for the awesome work done by debian localization teams. I speak English, and my other language skills are weak. I'm lucky: most software I use is written by default in a language that I can already understand.

The debian localization teams do great work in making sure that packages in debian get translated into many other languages, so that many more people around the world can take advantage of free software.

I was reminded of this work recently (again) with the great patches submitted to GnuPG and related packages. The changes were made by many different people, and coordinated with the debian GnuPG packaging team by David Prévot.

This work doesn't just help debian and its users. These localizations make their way back upstream to the original projects, which in turn are available to many other people.

If you use debian, and you speak a language other than english, and you want to give back to the community, please consider joining one of the localization teams. They are a great way to help out our project's top priorities: our users and free software.

Thank you to all the localizers!

(this post was inspired by gregoa's debian advent calendar. i won't be posting public words of thanks as frequently or as diligently as he does, any more than i'll be fixing the number of RC bugs that he fixes. These are just two of the ways that gregoa consistently leads the community by example. He's an inspiration, even if living up to his example is a daunting challenge.)

12 December, 2014 11:00PM by Daniel Kahn Gillmor (dkg)

Jingjie Jiang

Week1

Down the rabbit hole

Starting this week, my OPW period officially begins.
I am thankful and very grateful for this chance: for one, I get the opportunity to contribute to a useful, working, meaningful, real-world piece of software; for another, I can learn a great deal about design philosophy from my mentors zack and matthieu. :)

This week my fix is for bug #763921. It's basically making the folder page rendering provide more information, specifically the ls -l format. This offers information such as file type, permissions, size, etc.

I learned some new things about “man 2 stat”, and also got more familiar (actually, confident) with front-end stuff, namely CSS.

I also got myself familiar with Python testing (coverage). Next week, I will try to increase the test coverage a bit. Tests are an essential part of software: they ensure correctness and robustness. More importantly, with tests in place we can easily debug the software. As the saying goes, 磨刀不误砍柴工 (sharpening the axe does not delay the chopping of firewood).

The trello cards for next week are interesting. (In case you don't know the site, it's here: http://trello.com)

Let’s see it.


12 December, 2014 02:37PM by sophiejjj

hackergotchi for EvolvisForge blog

EvolvisForge blog

Tip of the day: don’t use –purge when cross-grading

It was a surprise to see my box booting up with the default GRUB 2.x menu, followed by “cannot find a working init”.

What happened?

Well, grub:i386 and grub:x32 are distinct packages, so APT helpfully decided to purge the GRUB config. OK. Manual boot menu entry editing later, re-adding GRUB_DISABLE_SUBMENU=y and GRUB_CMDLINE_LINUX="syscall.x32=y" to /etc/default/grub, removing “quiet” again from GRUB_CMDLINE_LINUX_DEFAULT, and uncommenting GRUB_TERMINAL=console… and don't forget to run “sudo update-grub”. There. This should work.
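Putting those pieces together, the relevant part of /etc/default/grub would end up looking something like this (reconstructed from the description above, not a verbatim copy):

# /etc/default/grub -- settings re-added after the purge
GRUB_DISABLE_SUBMENU=y
GRUB_CMDLINE_LINUX="syscall.x32=y"
GRUB_CMDLINE_LINUX_DEFAULT=""	# "quiet" removed again
GRUB_TERMINAL=console
# then regenerate the boot config:
#   sudo update-grub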

On the plus side, nvidia-driver:i386 seems to work… but not with boinc-client:x32 (why, again? I swear, its GPU detection has been driving me nuts on >¾ of all systems I installed it on, already!).

On the minus side, I now have to figure out why…

tglase@tglase:~ $ sudo ifup -v tap1
Configuring interface tap1=tap1 (inet)
run-parts --exit-on-error --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/bridge
run-parts: executing /etc/network/if-pre-up.d/ethtool
ip addr add 192.168.0.3/255.255.255.255 broadcast 192.168.0.3 peer 192.168.0.4 dev tap1 label tap1
Cannot find device "tap1"
Failed to bring up tap1.

… this happens. This used to work before the cktN kernels.

12 December, 2014 09:04AM by Thorsten Glaser

hackergotchi for Joey Hess

Joey Hess

a brainfuck monad

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
        add 31
        forever constants $ do
                add 1
                output

Here's the brainfuck code that demo generates: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
        a <- addr
        if a > n
                then prev >> setAddr n
                else if a < n
                        then next >> setAddr n
                        else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful..

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
        open
        a
        close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

12 December, 2014 05:02AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.2

A new release 0.4.2 of RProtoBuf is now on CRAN. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

Murray and Jeroen did almost all of the heavy lifting. Many changes were triggered by two helpful referee reports, and we are slowly getting to the point where we will resubmit a much improved paper. Full details are below.

Changes in RProtoBuf version 0.4.2 (2014-12-10)

  • Address changes suggested by anonymous reviewers for our Journal of Statistical Software submission.

  • Make Descriptor and EnumDescriptor objects subsettable with "[[".

  • Add length() method for Descriptor objects.

  • Add names() method for Message, Descriptor, and EnumDescriptor objects.

  • Clarify order of returned list for descriptor objects in as.list documentation.

  • Correct the definition of as.list for EnumDescriptors to return a proper list instead of a named vector.

  • Update the default print methods to use cat() with fill=TRUE instead of show() to eliminate the confusing [1] since the classes in RProtoBuf are not vectorized.

  • Add support for serializing function, language, and environment objects by falling back to R's native serialization with serialize_pb and unserialize_pb to make it easy to serialize into a Protocol Buffer all of the more than 100 datasets which come with R.

  • Use normalizePath instead of creating a temporary file with file.create when getting absolute path names.

  • Add unit tests for all of the above.

CRANberries also provides a diff to the previous release. More detailed information is on the RProtoBuf page which has a draft package vignette, a 'quick' overview vignette, and a unit test summary vignette. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 December, 2014 02:19AM

December 11, 2014

Enrico Zini

ssl-protection

SSL "protection"

In my experience with my VPS, setting up pretty much any service exposed to the internet, even something as simple as putting a calendar on my phone, requires an SSL certificate, which costs money, which needs to be given to some corporation or another.

When the only way to get protection from a threat is to give money to some big fish, I feel like I'm being forced to pay protection money.

I look forward to this.

11 December, 2014 02:35PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s fourth report about Debian Long Term Support

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 42.5 work hours were equally split among 3 paid contributors. Their reports are available:

  • Thorsten Alteholz did his share as usual.
  • Raphaël Hertzog worked 18 hours (catching up the remaining 4 hours of October).
  • Holger Levsen did his share but did not manage to catch up with the backlog of the previous months. As such, those unused work hours have been redispatched among other contributors for the month of December.

New paid contributors

Last month we mentioned the possibility of recruiting more paid contributors to better share the workload, and this has already happened: Ben Hutchings and Mike Gabriel have joined the list of paid contributors.

Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board because backporting kernel fixes really needs skills that nobody else within the team of paid contributors had.

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System, Bitfolk LTD. And we are still in contact with a couple of other companies which have announced their willingness to contribute but which are waiting for the new fiscal year.

But even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!

In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 fewer than last month), and the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we're a bit behind in terms of CVE triaging, and there are still many packages using SSLv3 for which we have no clear plan (in response to the POODLE issues).

The good side is that even though the kernel update took a large chunk of Holger's and Raphaël's time, we still managed to further reduce the backlog of security issues.

Thanks to our sponsors


11 December, 2014 11:32AM by Raphaël Hertzog

hackergotchi for Steve Kemp

Steve Kemp

An anniversary and a retirement

On this day last year we got married.

This morning my wife cooked me breakfast in bed for the second time in her life, the first being this time last year. In thanks I will cook a three-course meal this evening.

 

In unrelated news the BlogSpam service will be retiring the XML/RPC API come 1st January 2015.

This means that any/all plugins which have not been updated to use the JSON API will start to fail.

Fingers crossed nobody will hate me too much..

11 December, 2014 10:56AM

December 10, 2014

hackergotchi for Clint Adams

Clint Adams

In Uganda, a popular marbles game is called dool.

Sophie stood before me. “I'm leaving with that guy,” she gestured.

“Yes, I thought that would happen,” I chuckled.

She hugged me. The guy, whose name we managed to never utter, did not hug me, though he usually does. They went home together.

That was the last time I saw Sophie.

The rest of us sat down, finished our drinks, and split up. I went with Sophie's ex-girlfriend and the guy who sometimes serves as her ironic beard.

They smoked their disgusting light cigarettes, the kind with very little tobacco but lots of horrible chemicals that make me cough and hopefully fail to give me lung cancer, because watching someone else die of that was excruciating enough.

So we get to our next destination and there is a Peruvian girl sitting on a stool and shopping for shoes on her phone. I am fascinated. Phone app developers had told me that people actually did this but I thought it was just wishful thinking on their part.

The Peruvian girl, who is named something that sounds like it was uttered accidentally by Tommy Gnosis, complains to Sophie's ex-girlfriend that some guy keeps harassing her. We instinctively form a human barrier to shield her from this alleged transgressor, who, it turns out, is the pompous drug dealer with whom Sophie's ex-girlfriend is just about to conduct business.

“I'll be right back,” she says. “Hit on her.”

“What‽ Why‽” I shout after her. There is no response.

Sophie's ex-girlfriend and the drug dealer return from the darkness, having swapped possessions.

The drug dealer is a blowhard and proceeds to regale us with stories of so little interest to me that I can't even remember what they were about, but, as drug dealers are wont to do, he abuses the power of his possession to maintain the delusion that people would tolerate his presence even if he didn't have illegal commodities to sell them.

When the beard and Sophie's ex-girlfriend went out for a smoke break, I went home.

10 December, 2014 08:13PM

hackergotchi for Chris Lamb

Chris Lamb

Starting IPython automatically from zsh

Instead of a calculator, I tend to use IPython for those quotidian bits of "mental" arithmetic:

In  [1]: 17 * 22.2
Out [1]: 377.4

However, I often forget to actually start IPython, resulting in me running the following in my shell:

$ 17 * 22.2
zsh: command not found: 17

Whilst I could learn to do this maths within Zsh itself, I would prefer to dump myself into IPython instead — being able to use "_" and Python modules generally is just too useful.

After following this pattern too many times, I put together the following snippet that will detect whether I have prematurely attempted a calculation inside zsh and pretend that I ran it in IPython all along:

zmodload zsh/pcre

# Matches command lines that look like arithmetic: a digit or minus sign,
# followed by any mix of digits, dots, whitespace and basic operators.
math_regex='^[\d\-][\d\.\s\+\*\/\-]*$'

function math_precmd() {
    # The last command succeeded, so there is nothing to correct.
    if [ "${?}" = 0 ]
    then
        return
    fi

    # No command line was recorded by the preexec hook.
    if [ -z "${math_command}" ]
    then
        return
    fi

    # A command of that name actually exists; the failure is not ours to fix.
    if whence -- "$math_command" >/dev/null 2>&1
    then
        return
    fi

    # The failed command line looks like arithmetic: hand it to IPython.
    if [ "${math_command}" -pcre-match "${math_regex}" ]
    then
        echo
        ipython -i -c "_=${math_command}; print _"
    fi
}

function math_preexec() {
    # Record each command line just before it is executed.
    typeset -g math_command="${1}"
}

typeset -ga precmd_functions
typeset -ga preexec_functions

precmd_functions+=math_precmd
preexec_functions+=math_preexec

For example:

lamby@seriouscat:~% 17 * 22.2
zsh: command not found: 17

377.4

In  [1]: _ + 1
Out [1]: 378.4

(Canonical version from my zshrc.d)

10 December, 2014 06:07PM

December 09, 2014

hackergotchi for Joey Hess

Joey Hess

podcasts that don't suck, 2014 edition

  • The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. Only downside is it's a looong time between new episodes.

  • The Haskell Cast: Panel discussion with a guest; there is a lot of expertise among them and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead.

  • Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.

  • Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fictions. They seem to get all the award-winning short stories. I sometimes fall asleep to these which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"

  • Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.

  • Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.

  • Love and Radio: This American Life squared and on acid.

  • Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P

  • Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.

PS: A nice podcatcher for the technically inclined is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
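
For instance, a minimal sketch (feeds.txt is a hypothetical file listing one feed URL per line):

xargs git annex importfeed < feeds.txt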

09 December, 2014 07:05PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Playing with ExtreMon

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high munin missed it. It also comes with a great webinterfacefrontendthing that allows you to dig deep in the history of what you've been monitoring.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than check once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements to accomplish that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out some.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me get things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere.

While setting up ExtreMon as is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

  • A monitor with a FullHD (or better) resolution. Currently, the display frontend of ExtreMon assumes it has a FullHD display at all times. Even if you have a lower resolution. Or a higher one.
  • Python3
  • OpenJDK 6 (or better)

First, we clone the ExtreMon git repository:

git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon

There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

  • install CollectD (which is required for its types.db)
  • Configure CollectD to have a line like Hostname "com.example.myhost" rather than the (usual) FQDNLookup true. This is because extremon uses the java-style reverse hostname, rather than the internet-style FQDN.

Make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory; so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. Patch (and pull request) on GitHub.

With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

After you enter all the information that keytool asks for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise to the reader.
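
If you only want the longer validity, keytool's -validity flag takes a number of days; a minimal sketch (ten years picked arbitrarily):

keytool -genkeypair -alias extremontest -keyalg RSA -validity 3650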

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it in the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be with the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start Extremon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself into.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind Ctrl+Shift+X. A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.

09 December, 2014 06:43PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

MySQL Meet-up 20141208

I had an enjoyable time last night at Twitter with local MySQL DBAs and developers. We had an attendee who has no experience with SQL or programming at all. She is interested in organizing her collection of recipes and had heard a rumor that MySQL was a good tool to use for this task. She indicated that her desktop runs Windows 7. I think I’m going to encourage her to turn her concept into a community project, as she is not the first person I’ve met who wants to organize recipes!

We were hosted by Rob at Twitter, who used to work with Lisa back before she retired. He’s a member of the site reliability team and keeps the fail whale from rearing its blubbery head.

Pizza was provided by my dear friend and long-time open source buddy Gerry Narvaja with the assistance of the folks in the kitchen at Zeek’s.

We discussed new techniques in the areas of load balancing and high availability. Five nines is no longer the thing that people talk about, instead it’s six nines. It’s a brave new world out there!

I was not the only person who was excited about one of the latest features in MariaDB / MySQL to come out of HP, the high resolution time data types.
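
As a rough sketch of what those types look like in use (table and column names invented here for illustration; DATETIME(6) requests microsecond precision, supported by MariaDB 5.3+ and MySQL 5.6+):

mysql -e "CREATE TABLE events (id INT, happened_at DATETIME(6))"
mysql -e "INSERT INTO events VALUES (1, NOW(6))"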

One of the attendees is an old hand at COBOL and was asking if anyone knows where one can get a COBOL runtime environment. I’ve never thought about that before… Let me ask the googs… Looks like there’s an active project called GNU COBOL, which is officially part of the GNU project.

09 December, 2014 05:31PM by C.J. Adams-Collier

Enrico Zini

radicale-davdroid

Radicale and DAVDroid

radicale and DAVdroid appeal to me. Let's try to make the whole thing work.

A self-signed SSL certificate

Generating the certificate:

    openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
    [...]
    Country Name (2 letter code) [AU]:IT
    State or Province Name (full name) [Some-State]:Bologna
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
    Email Address []:postmaster@enricozini.org

Installing it on my phone:

    openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
    adb push cal-cert.crt /mnt/sdcard/
    enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing radicale in my VPS

An updated radicale package, with this patch to make it work with DAVDroid:

    apt-get source radicale
    # I reviewed 063f7de7a2c7c50de5fe3f8382358f9a1124fbb6
    git clone https://github.com/Kozea/Radicale.git
    # Move the python code from git to the Debian source
    dch -v 0.10~enrico  "Pulled in the not yet released 0.10 work from upstream"
    debuild -us -uc -rfakeroot

Install the package:

    # dpkg -i python-radicale_0.10~enrico0-1_all.deb
    # dpkg -i radicale_0.10~enrico0-1_all.deb

Create a system user to run it:

    # adduser --system --disabled-password radicale

Configure it for mod_wsgi with auth done by Apache:

    # For brevity, this is my config file with comments removed

    [storage]
    # Storage backend
    # Value: filesystem | multifilesystem | database | custom
    type = filesystem

    # Folder for storing local collections, created if not present
    filesystem_folder = /var/lib/radicale/collections

    [logging]
    config = /etc/radicale/logging

Create the wsgi file to run it:

    # mkdir /srv/radicale
    # cat <<EOT > /srv/radicale/radicale.wsgi
    import radicale
    radicale.log.start()
    application = radicale.Application()
    EOT
    # chown radicale.radicale /srv/radicale/radicale.wsgi
    # chmod 0755 /srv/radicale/radicale.wsgi

Make radicale commit to git

    # apt-get install python-dulwich
    # cd /var/lib/radicale/collections
    # git init
    # chown radicale.radicale -R /var/lib/radicale/collections/.git

Apache configuration

Add a new site to apache:

    $ cat /etc/apache2/sites-available/cal.conf
    # For brevity, this is my config file with comments removed
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            ServerName cal.enricozini.org
            ServerAdmin enrico@enricozini.org

            Alias /robots.txt /srv/radicale/robots.txt
            Alias /favicon.ico /srv/radicale/favicon.ico

            WSGIDaemonProcess radicale user=radicale group=radicale threads=1 umask=0027 display-name=%{GROUP}
            WSGIProcessGroup radicale
            WSGIScriptAlias / /srv/radicale/radicale.wsgi

            <Directory /srv/radicale>
                    # WSGIProcessGroup radicale
                    # WSGIApplicationGroup radicale
                    # WSGIPassAuthorization On
                    AllowOverride None
                    Require all granted
            </Directory>

            <Location />
                    AuthType basic
                    AuthName "Enrico's Calendar"
                    AuthBasicProvider file
                    AuthUserFile /usr/local/etc/radicale/htpasswd
                    Require user enrico
            </Location>

            ErrorLog ${APACHE_LOG_DIR}/cal-enricozini-org-error.log
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/cal-enricozini-org-access.log combined

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/cal.pem
            SSLCertificateKeyFile /etc/ssl/private/cal.key
    </VirtualHost>
    </IfModule>
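
The AuthUserFile referenced above does not exist yet; a minimal sketch for creating it with Apache's htpasswd utility (path and username taken from the config):

    # mkdir -p /usr/local/etc/radicale
    # htpasswd -c /usr/local/etc/radicale/htpasswd enrico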

Then enable it:

    # a2ensite cal.conf
    # service apache2 reload

Create collections

DAVdroid seems to want to see existing collections on the server, so we create them:

    $ apt-get install cadaver
    $ cat <<EOT > /tmp/empty.ics
    BEGIN:VCALENDAR
    VERSION:2.0
    END:VCALENDAR
    EOT
    $ cat <<EOT > /tmp/empty.vcf
    BEGIN:VCARD
    VERSION:2.1
    END:VCARD
    EOT
    $ cadaver https://cal.enricozini.org
    WARNING: Untrusted server certificate presented for `cal.enricozini.org':
    [...]
    Do you wish to accept the certificate? (y/n) y
    Authentication required for Enrico's Calendar on server `cal.enricozini.org':
    Username: enrico
    Password: ****
    dav:/> cd enrico/contacts.vcf/
    dav:/> put /tmp/empty.vcf
    dav:/> cd ../calendar.ics/
    dav:/> put /tmp/empty.ics
    dav:/enrico/calendar.ics/> ^D
    Connection to `cal.enricozini.org' closed.

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https:////
  4. Add username and password

It should work.

09 December, 2014 03:35PM

hackergotchi for Tanguy Ortolo

Tanguy Ortolo

Using bsdtar to change an archive format

Streamable archive formats

Archive formats such as tar(5) and cpio(5) have the advantage of being streamable, so you can use them for transferring data with pipes and remote shells, without having to store the archive in the middle of the process, for instance:

$ cd public_html/blog
$ rgrep -lF "archive" data/articles \
      | pax -w \
      | ssh newserver "mkdir public_html/blog ;
                       cd public_html/blog ;
                       pax -r"

Turning a ZIP archive into a tarball

Unfortunately, many people will send you data in non-streamable archive formats such as ZIP¹. For such cases, bsdtar(1) can be useful, as it is able to convert an archive from one format to another:

$ bsdtar -cf - @archive.zip \
      | COMMAND

These arguments tell bsdtar to:

  • create an archive;
  • write it to stdout (contrary to GNU tar which defaults to stdout, bsdtar defaults to a tape device);
  • put into it the files it will find in the archive archive.zip.

The result is a tape archive, which is easier to manipulate in a stream than a ZIP archive.
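
For instance, to turn a ZIP archive into a compressed tarball in one go (a minimal sketch):

$ bsdtar -cf - @archive.zip \
      | gzip > archive.tar.gz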

Notes

  1. Some will say that although ZIP is based on a file index, it can be streamed, because that index is placed at the end of the archive. In fact, that characteristic only allows streaming the creation of the archive, but it requires storing the full archive before being able to extract it.

09 December, 2014 03:00PM by Tanguy

Hideki Yamane

ThinkPad X121e with UEFI boot

I have a ThinkPad X121e and recently exchanged its HDD for an SSD, then tried to boot from UEFI, but I couldn't. I figured something was wrong with the old BIOS version and that a newer one might improve the situation, so I tried to update it. The steps are below.
  1. get iso image file from Lenovo (Japanese site) (release note)
  2. put iso image into /boot
  3. add a custom grub file as /etc/grub.d/99_bios (note: I don't have a separate /boot partition; if yours is separate, you may need to adjust the file paths below).
    $ sudo sh -c "touch /etc/grub.d/99_bios; chmod +x /etc/grub.d/99_bios"
    and edit the /etc/grub.d/99_bios file.
    #!/bin/sh
    exec tail -n +3 $0
    menuentry "BIOS Update" {
    linux16 /boot/memdisk iso
    initrd16 /boot/xxxxxxxxxx.iso
    }
  4. update the grub menu and check the resulting /boot/grub/grub.cfg file
    $ sudo update-grub
    $ tail /boot/grub/grub.cfg
  5. make sure the memdisk binary is installed and copy it into /boot (it ships in the syslinux package)
    $ sudo apt-get install syslinux
    $ sudo cp /usr/lib/syslinux/memdisk /boot/
  6. just reboot and select the BIOS update menu entry
Looks okay, the firmware update was a success, but I still cannot boot via UEFI (the installation itself was okay). Hmm...

As Matthew Garrett blogged before, the ThinkPad X121e's firmware probably doesn't allow booting from any UEFI entries except "Windows Boot Manager" :-(

 ...So I have to go back to legacy BIOS. *sigh*

09 December, 2014 01:09PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Thorsten Glaser

Thorsten Glaser

The colon in the shell: corrigenda

Bernhard’s article on Plänet Debian about the “colon” command in the shell could use a clarification and a security-relevant correction.

There is, indeed, no difference between the : and true built-in commands.

Stéphane Chazelas points out that writing : ${VARNAME:=default} is bad; : "${VARNAME:=default}" is correct. Reason: someone could preset $VARNAME with, for example, /*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*, which, being unquoted, will be expanded as a glob and exhaust the machine.
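
A minimal sketch of the difference:

: "${VARNAME:=default}"   # safe: the expanded value is not subject to globbing
: ${VARNAME:=default}     # unquoted: the expanded value undergoes globbing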

Besides that, the article is good. Thanks Bernhard for posting it!

PS: I sometimes use the colon as comment leader in the last line of a script or function, because it, unlike the octothorpe, sets $? to 0, which can be useful.
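
For instance, a minimal sketch (names invented):

cleanup() {
  rm -f -- "$tmpfile"
  : always return success, even if rm failed
}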

Update: As jilles pointed out in IRC, “colon” (‘:’) is a POSIX special built-in (most importantly, it keeps assignments), whereas “true” is a regular built-in utility.

09 December, 2014 10:40AM by MirOS Developer tg (tg@mirbsd.org)

Russ Allbery

wallet 1.2

wallet is a system for secure credential management and distribution.

This release renames the duo object type to duo-pam (since it really only handles PAM integrations) and adds new object types duo-radius, duo-ldap, and duo-rdp to handle other types of Duo Security integrations.

It also adds a rename command, which can be used to rename existing objects without destroying them and recreating them. Currently, this only supports file objects.

My only role in this release was to do the final release management and a bit of release testing. The new code was implemented by Jon Robertson (who's also done a lot of work on wallet in the past).

You can get the latest release from the wallet distribution page.

09 December, 2014 05:39AM

December 08, 2014

hackergotchi for Bernhard R. Link

Bernhard R. Link

The Colon in the Shell.

I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell:

To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:

: [arguments]
	No effect; the command does nothing beyond expanding arguments and performing any specified redirections.  A zero exit code is returned.

If you wonder what the difference to true is: I don't know any difference (except that there is no /bin/:)

So what is the colon useful for? You can use it if you need a command that does nothing, but still is a command.

  • For example, if you want to avoid using a negation (for fear of history expansion still being on by default in an interactive bash, or wanting to support ancient shells), you cannot simply write
    if condition ; then
    	# this will be an error
    else
    	echo condition is false
    fi
    
    but need some command there, for which the colon can be used:
    if condition ; then
    	: # nothing to do in this case
    else
    	echo condition is false
    fi
    
    To confuse your reader, you can use the fact that the colon ignores its arguments and you only have normal words there:
    if condition ; then
    	: nothing to do in this case # <- this works but is not good style
    else
    	echo condition is false
    fi
    
    though I strongly recommend against it (exercise: why did I use a # there for my remark?).
  • This of course also works in other cases:
    while processnext ; do
    	:
    done
    
  • The ability to ignore the actual arguments (while still processing them, as with every command that ignores its arguments) can also be used, like in:
    : ${VARNAME:=default}
    
    which sets VARNAME to a default if it is unset or empty. (One could also do that the first time it is used, or use ${VARNAME:-default} everywhere, but this can be more readable.)
  • In other cases you do not strictly need a command, but using the colon can clear things up, like creating or truncating a file using a redirection:
    : > /path/to/file
    

Then there are more things you can do with the colon, most of which I'd put under "abuse":

  • misusing it for comments:
    : ====== here
    
    While it has the advantage of also showing up in -x output, the to-be-expected confusion of the reader and the danger of using any shell-active character make this generally a bad idea.
  • As it is practically the same as true, it can be used as a shorter form of true. Given that true is more readable, that is a bad idea. (At least it isn't as evil as using the empty string to denote true.)
    # bad style!
    if condition ; then doit= ; doit2=: ; else doit=false ; doit2=false ; fi
    if $doit ; then echo condition true ; fi
    if $doit2 && true ; then echo condition true ; fi
    
  • Another way to scare people:
    ignoreornot=
    $ignoreornot echo This you can see.
    ignoreornot=:
    $ignoreornot echo This you cannot see.
    
    While it works, I recommend against it: it is easily confusing, and any > in there or $(...) will likely rain havoc over you.
  • Last and least, one can shadow the built-in colon with a different one. Only useful for obfuscation, and thus likely always evil. :(){:&:};: anyone?

This is of course not a complete list. But unless I missed something else, those are the most common cases I run into.

[1] <rant>If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, as it hides all information within other information in a quite absurd order. Unless you are looking for documentation on how to write a shell script parser; in that case, the bash manpage is really what you want to read.</rant>

08 December, 2014 07:35PM

hackergotchi for Andrew Pollock

Andrew Pollock

[tech] A geek Dad goes to Kindergarten with a box full of Open Source and some vegetables

Zoe's Kindergarten encourages parents to come in and spend some time with the kids. I've heard reports of other parents coming in and doing baking with the kids or other activities at various times throughout the year.

Zoe and I had both wanted me to come in for something, but it had taken me until the last few weeks of the year to get my act together and do something.

I'd thought about coming in and doing some baking, but that seemed rather done to death already, and it's not like baking is really my thing, so I thought I'd do something technological. I just wracked my brains for something low effort and Kindergarten-age friendly.

The Kindergarten has a couple of eduss touch screens. They're just some sort of large screen with a bunch of inputs and outputs on them. I think the Kindergarten mostly uses them for showing DVDs and hooking up a laptop and possibly doing something interactive on them.

As they had HDMI input, and my Raspberry Pi had HDMI output, it seemed like a no-brainer to do something using the Raspberry Pi. I also thought hooking up the MaKey MaKey to it would make for a more fun experience. I just needed to actually have it all do something, and that's where I hit a bit of a creative brick wall.

I thought I'd just hack something together where, based on different inputs on the MaKey MaKey, a picture would get displayed and a sound played. Nothing fancy at all. I really struggled to get a picture displayed full screen in a time efficient manner. My Pi was running Raspbian, so it was relatively simple to configure LightDM to auto-login and auto-start something. I used triggerhappy to invoke a shell script, which took care of playing a sound and displaying an image.
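
For the curious, the wiring looked roughly like this (the file paths, key names and script below are an illustrative sketch, not my actual setup). triggerhappy reads trigger files whose lines have the form "event-name event-value command":

# /etc/triggerhappy/triggers.d/makeymakey.conf
KEY_LEFT   1   /usr/local/bin/makey-event.sh left
KEY_RIGHT  1   /usr/local/bin/makey-event.sh right

with a handler script along these lines:

#!/bin/sh
# /usr/local/bin/makey-event.sh: play the sound for the given input
aplay -q "/usr/local/share/makey-sounds/$1.wav" &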

Playing a sound was easy. Displaying an image less so, especially if I wanted the image loaded fast. I really wanted to avoid having to execute an image viewer every time an input fired, because that would be just way too slow. I thought I'd found a suitable application in Geeqie, because it supported being managed out of band, but its problem was that it also responded to the inputs from the MaKey MaKey, so it became impossible to predictably display the right image with the right input.

So the night before I was supposed to go to Kindergarten, I was up beating my head against it, and decided to scrap it and go back to the drawing board. I was looking around for a Kindergarten-friendly game that used just the arrow keys, and I remembered the trusty old Frozen Bubble.

This ended up being absolutely perfect. It had enough flags to control automatic startup, so I could kick it straight into a dumbed-down full screen 1 player game (--fullscreen --solo --no-time-limit)

The kids absolutely loved it. They were cycled through in groups of four and all took turns having a little play. I brought a couple of heads of broccoli, a zucchini and a potato with me. I started out using the two broccoli as left and right and the zucchini to fire, but as it turns out, not all the kids were as good with the "left" and "right" as Zoe, so I swapped one of the broccoli for a potato and that made things a bit less ambiguous.

The responses from the kids were varied. Quite a few clearly had their minds blown and wanted to know how the broccoli was controlling something on the screen. Not all of them got the hang of the game play, but a lot did. Some picked it up after having a play and then watching other kids play and then came back for a more successful second attempt. Some weren't even sure what a zucchini was.

Overall, it was a very successful activity, and I'm glad I switched to Frozen Bubble, because what I'd originally had wouldn't have held up to the way the kids were using it. There was a lot of long holding/touching of the vegetables, which would have fired hundreds of repeat events and just totally overwhelmed triggerhappy. Quite a few kids wanted to pick up and hold the vegetables instead of just touching them to send an event. As it was, the Pi already struggled enough to run Frozen Bubble.

The other lesson I learned pretty quickly was that an aluminium BBQ tray worked a lot better as the grounding point for the MaKey MaKey than having to tether an anti-static strap around each kid's ankle as they sat down in front of the screen. Once I switched to the tray, I could rotate kids through the activity much faster.

I just wish I was a bit more creative, or there were more Kindergarten-friendly arrow-key driven Linux applications out there, but I was happy with what I managed to hack together with a fairly minimal amount of effort.

08 December, 2014 05:04PM

Niels Thykier

Jessie has half the number of RC bugs compared to Wheezy

In the last 24 hours, the number of RC bugs currently affecting Jessie was reduced to just under half of the same number for Wheezy.

 

There are still a lot of bugs to be fixed, but please keep up the good work. :)

 


08 December, 2014 07:08AM by Niels Thykier

hackergotchi for Clint Adams

Clint Adams

I don't care about the sunshine, yeah

Rhoda is guarded. She is secretly in love with her brother. When he gets a girlfriend, she finds a boyfriend. She tells no one what she's really thinking.

Rhoda likes to seize the day. In the midst of evening conversation, she will excuse herself to “use the bathroom” or “come right back”, then, within literally two minutes, she will go home with an acquaintance or stranger. Often these encounters or the aftermaths thereof do not go according to her liking, and she will make veiled remarks of hostility, should she see those people again.

Rhoda was unhappy so she changed “everything” in her life. Though multiple things remained constant, she concluded that she was, in fact, the problem.

Rhoda does not make enough money on which to live. She works part-time, and turns down all other job offers. Other people make up for her financial shortcomings so that she is not homeless and starving.

Rhoda is certain that it is more difficult to be female than male.

08 December, 2014 02:32AM

December 07, 2014

hackergotchi for Ana Beatriz Guerrero Lopez

Ana Beatriz Guerrero Lopez

Keysigning, it’s never too late

I managed to sign all the keys from DebConf14 within a month after the conference ended, but I still had a pile of keys to sign going back to DebConf11, from several minidebconfs and other small events. Today it was frakking cold and I finally managed to sign something close to 100 keys. I didn't sign any key that was under 2048 bits, expired, or revoked.

If you got your key signed by me (or get it in the next hours), sorry it took so long!

07 December, 2014 11:30PM by ana

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got nasne.

Got nasne. Got it working with Android, PlayStation VITA, and a Sony TV. Each platform seems to require some form of payment in addition to buying the nasne itself. There's something about ecosystem and lock-in that I don't like.

07 December, 2014 10:10PM by Junichi Uekawa