February 07, 2016

SparkyLinux: Updates 2016/02/07


There are a few third-party updates in our repository ready to go:
– DDM 2.2.5
– TOR Browser 5.5.1
– SpiderOakOne 6.1.2
– VMware Workstation Player Installer 12.1.0
– WPS Office
– Xdashboard 0.5.5
– XnViewMP 0.78

Upgrade your system as usual:
sudo apt-get update
sudo apt-get dist-upgrade

or via ‘Update Tool’ or ‘Synaptic’ as you wish.


07 February, 2016 09:14PM by pavroo

Ubuntu developers

Linux Padawan: Master Spotlight: Silverlion

Meet Harry SilverLion. How did you first get started using Linux? What distros, software or resources did you use while learning? Honestly speaking, I cannot recall the exact time and date of my very first contact with Linux. What I do remember, though, is an event in 2006. Back then – during my […]

07 February, 2016 11:46AM

February 06, 2016

Dimitri John Ledkov: Blogging about Let's encrypt over HTTP

So the Let's Encrypt thing has started. It can do challenges over HTTP (serving text files) and over DNS (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts getting swamped with spam, and or companies going bust. So I got my own domain name surgut.co.uk. However, I don't have money or time to run my own services. So I've signed up for the Google Apps account for my domain to do email, blogging, etc.

Later I got the libnih.la domain to host API docs for the library of the same name. In the world of .io startups, I thought it was an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and run an IRC bouncer. My IRC bouncer is ZNC, and I used a self-signed certificate there, so I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose.

I run my VPS on i386 (to save on memory usage) and on Ubuntu 14.04 LTS managed with Landscape. And my little services are just configured by hand there (not using juju).

My first attempt at getting on the Let's Encrypt bandwagon was to use the official client, by fetching debs from xenial and installing them on the LTS. But the package/script there is huge, supports things I don't need, and wants dependencies I don't have on 14.04 LTS.

However, I found letsencrypt.sh, a minimalist implementation in shell using openssl and curl. It was trivial to fetch dependencies for and configure: I specified a domains text file, and that was it. Well, I also added symlinks in my NGINX config to serve the challenges directory, and a hook to deploy the certificate to znc and restart it. I added a cron job to renew the certs too. Thinking about it, the setup isn't complete, as I'm not sure whether NGINX will pick up a certificate change or will need to be reloaded. I shall test that once my cert expires.
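For reference, the moving parts here are small. A sketch of the layout described above; the paths and the znc restart command are illustrative assumptions, not letsencrypt.sh's defaults:

```shell
# Sketch of a minimal letsencrypt.sh setup: a domains.txt listing the
# hostnames, and a cron entry that renews monthly and redeploys to znc.
BASE=$(mktemp -d)

# one certificate per line; extra names on the same line become SANs
cat > "$BASE/domains.txt" <<'EOF'
x4d.surgut.co.uk
EOF

# hypothetical cron entry: attempt renewal monthly, then restart znc
cat > "$BASE/renew.cron" <<'EOF'
@monthly /opt/letsencrypt.sh/letsencrypt.sh --cron && service znc restart
EOF

cat "$BASE/domains.txt"
```

The script only re-issues certificates that are missing or close to expiry, so running the cron entry monthly is safe.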

Tweaking the NGINX config was easy. I wondered how good it was, so I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and got a "C" rating: no forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and the like. I went googling for all kinds of NGINX configs and eventually found a website with "best known practices", https://cipherli.st/. However, even that only got me to a "B" rating, as it still includes Diffie-Hellman ciphers that ssltest caps at "B", so I disabled those too. I ended up with this gibberish:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish because, IMHO, I shouldn't need to specify any of the above... Anyway, I got my A+ rating.

However, security is only as strong as its weakest link. I'm still serving things over HTTP; maybe I should disable that. And I have yet to check how "good" the TLS on my znc is, or whether I need to further harden my sshd configuration.

This has filled a big gap in my infrastructure. However a few things remain served over HTTP only.

http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would like to be served over HTTPS.

http://libnih.la is hosted by GitHub's service, which I would also like to be served over HTTPS.

I do not want to manage those services, or deal with load, spammers, DDoS attacks, etc. But I am happy to sign CSRs with Let's Encrypt and deploy the certificates to those companies, or to allow them to obtain certificates from Let's Encrypt on my behalf. I use gandi.net as my domain name provider, which offers an RPC API to manage domains and their zone files, so I could, for example, generate an API token for those companies to respond to a dns-01 challenge from Let's Encrypt.
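Handing a certificate to a hosting provider without giving up the private key boils down to generating a CSR yourself. A minimal openssl sketch; the file names and hostname are just examples:

```shell
WORK=$(mktemp -d)

# generate a private key that never leaves your own machine
openssl genrsa -out "$WORK/blog.key" 2048 2>/dev/null

# create a certificate signing request for the hostname the provider serves
openssl req -new -key "$WORK/blog.key" \
    -subj "/CN=blog.surgut.co.uk" -out "$WORK/blog.csr"

# inspect the request before sending it off for signing
openssl req -in "$WORK/blog.csr" -noout -subject
```

Only the .csr file needs to be submitted; the signed certificate that comes back is public, so the hosting company can deploy it without ever holding the key.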

One step at a time I guess.

The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.

06 February, 2016 11:30PM by Dimitri John Ledkov (noreply@blogger.com)

Thomas Bechtold: Installing Debian Stretch on a Cubox-i

I have a Cubox-i and these are my notes to install Debian with the standard u-boot and linux kernel from the Debian archive.

Some requirements on the host:

apt-get install qemu-user-static debootstrap

Assuming the SD-Card is available as /dev/sdb :

# define our target device (mmc card) and the directory we use
export TARGETDEV=/dev/sdb
export MNTDIR=/mnt/tmp

# clean some blocks
dd if=/dev/zero of=$TARGETDEV bs=1M count=4

# create a single partition and ext4 filesystem
echo "n

"|fdisk $TARGETDEV
mkfs.ext4 -L rootfs "$TARGETDEV"1

mkdir -p $MNTDIR
# mount the fresh filesystem so everything below ends up on the SD-Card
mount "${TARGETDEV}1" $MNTDIR
mkdir -p $MNTDIR/etc/{default,flash-kernel}
echo "SolidRun Cubox-i Dual/Quad" >> $MNTDIR/etc/flash-kernel/machine
echo 'LINUX_KERNEL_CMDLINE="root=/dev/mmcblk0p1 rootfstype=ext4 ro rootwait console=ttymxc0,115200 console=tty1"' >> $MNTDIR/etc/default/flash-kernel
echo '/dev/mmcblk0p1 / ext4 defaults,noatime 0 0' >> $MNTDIR/etc/fstab

# get and install packages via debootstrap
qemu-debootstrap --foreign  --include=ntp,ntpdate,less,u-boot,u-boot-tools,flash-kernel,linux-image-armmp,kmod,openssh-server,firmware-linux-free,bash-completion,dialog,fake-hwclock,locales,vim --arch=armhf stretch $MNTDIR http://ftp.de.debian.org/debian/

# copy u-boot files to SD-Card (and it's 69, not 42. See cuboxi README from u-boot source tree)
dd if=$MNTDIR/usr/lib/u-boot/mx6cuboxi/SPL of=$TARGETDEV bs=1K seek=1
dd if=$MNTDIR/usr/lib/u-boot/mx6cuboxi/u-boot.img of=$TARGETDEV bs=1K seek=69

# set root password
chroot $MNTDIR passwd root
# serial console
echo 'T0:23:respawn:/sbin/getty -L ttymxc0 115200 vt100' >> $MNTDIR/etc/inittab

# hostname
echo "cubox" >> $MNTDIR/etc/hostname

# network eth0
cat <<eof >> $MNTDIR/etc/network/interfaces.d/eth0
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
eof

# loopback
cat <<eof >> $MNTDIR/etc/network/interfaces.d/lo
auto lo
iface lo inet loopback
eof

That's it. Insert the SD-Card, connect with PuTTY or minicom, and you should see the system boot and be able to log in.
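The two dd offsets above are easy to get wrong: on the i.MX6 the SPL goes at a 1 KiB offset and u-boot.img at 69 KiB. The seek arithmetic can be sanity-checked against a scratch file instead of a real device:

```shell
# demonstrate the bs/seek arithmetic with an image file instead of /dev/sdb
IMG=$(mktemp)
truncate -s 1M "$IMG"

# conv=notrunc keeps the rest of the image intact, as on a real device
printf 'SPL' | dd of="$IMG" bs=1K seek=1 conv=notrunc 2>/dev/null
printf 'UBO' | dd of="$IMG" bs=1K seek=69 conv=notrunc 2>/dev/null

# the markers land at byte offsets 1024 and 69*1024 = 70656
dd if="$IMG" bs=1 skip=1024 count=3 2>/dev/null && echo
dd if="$IMG" bs=1 skip=70656 count=3 2>/dev/null && echo
```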

06 February, 2016 11:24PM

Sujeevan Vijayakumaran: UbuCon Summit and SCALE 14x in Pasadena

From January 21st to 24th, the 14th Southern California Linux Expo took place in Pasadena. For many years a UbuCon has been part of it; this year was the first time the newly created "UbuCon Summit" took place.

People involved in the international Ubuntu community have probably heard about the UbuCon Summit. It is a new attempt by Ubuntu's Community Team and the community to bring everyone back together after the end of the Ubuntu Developer Summits, which were abandoned in 2012 because they were too expensive. Personally, I never had a chance to visit an Ubuntu Developer Summit, mainly because I wasn't much involved in the international community back then, so I jumped at the chance to attend the UbuCon Summit.

At this point I want to thank everybody who donated to the Ubuntu Community Donations programme. Without it I wouldn't have had the chance to go to the UbuCon Summit. Thanks everyone!

Day 0

After a long (and delayed) flight, I arrived in Pasadena. Day "0" was the day before the UbuCon Summit officially started, with a first meet and greet at a wine bar. It was my first time meeting a couple of people whom I had only known online before, so I was really happy to finally meet Nathan Haines, Richard Gaskin, Michael Hall, José Rey, Elizabeth K. Joseph and all the others I forgot to mention.

Day 1

At 10 o'clock in the morning the UbuCon Summit officially started. Before the keynote by Mark Shuttleworth, Nathan Haines and Richard Gaskin, the main organisers of the event, welcomed everybody. The ballroom was already full at that point; there weren't many free seats left. Mark mainly focused on Snappy, containers and the Internet of Things in his keynote; interestingly, the phone only got a small mention at the end, when he briefly touched on the topic of convergence. After his talk, a few lightning talks followed. Sergio Schvezov talked about building snap packages with snapcraft, followed by Jorge Castro on gaming on Ubuntu. Didier Roche presented the tool "Ubuntu Make" and was followed by the new Kubuntu lead, Scarlett Clark. She was really shy, so hers was just a couple of minutes long, covering the current tasks of the Kubuntu project. After that came the group photo, which is not yet public. (Where is it, Nathan? ;))

After lunch, the actual talks started. Nathan Haines opened with "The Future of Ubuntu" while, at the same time, Sergio Schvezov and Canonical's product manager Manik Taneja talked about Snappy. I attended the Snappy talk, where I didn't learn much new, because I was already familiar with the basics. After that came my own talk, about the project Labdoo.org, which collects old and used laptops in industrialised countries that are no longer used or have been replaced by new hardware. This was actually my very first talk in English; as a non-native speaker it was a hard step for me. Anyhow, only 15-20 people attended, and I was already done after 30 minutes because I was too nervous and spoke too fast, so I ended up forgetting half of the stuff… Besides my not-so-great performance, Jono Bacon's talk was on at the same time, which is where everyone went.

After these talks, three more followed in two tracks. I didn't follow them completely, which gave me more time to talk to so many other cool people.

In the evening there was the Canonical-sponsored social event. I had another priority for that time, so I went to the talk "FLOSS Reflections" by Jon 'maddog' Hall, Jono Bacon and Keila Banks. It was followed by Bryan Lunduke's amusing "Linux Sucks" talk, which you shouldn't miss on YouTube. Finally, I joined the social event at the Brazilian bar afterwards. It was totally worth it! :)

Day 2

The second day was mainly the unconference day, but before that the second SCALE keynote took place: Cory Doctorow spoke on "No Matter Who's Winning the War on General Purpose Computing, You're Losing". One interesting slot followed the keynote: the "Ubuntu Leadership Panel" discussed many things, including positive and negative aspects of the last year. The panel included Daniel Holbach as part of the Community Council, David Planella as the Community Team Manager, Olli Ries as the Director of Engineering, Mark Shuttleworth as the Ubuntu founder, Elizabeth K. Joseph as a former Community Council member, Nathan Haines as part of the LoCo Council and José Rey as a UbuCon LA organiser.

One of the sponsors of the UbuCon Summit was Dell. Barton George, founder of "Project Sputnik" - the Ubuntu-powered Dell XPS 13 Developer Edition - talked about the project in a lightning talk. After that the raffle took place, where I sadly didn't win, boo! A cool thing about Barton George was that he was very open and easy to talk to; he even approached people (like me) and was interested in what we do in the community. The second lightning talk was by Michael Hall, who demoed Ubuntu convergence by connecting his Nexus 4 to the projector. Alan Pope and Jorge Castro then gave the instructions for the unconference sessions. Many sessions were spontaneously added to the schedule, covering topics like "Snappy for Sys-Admins", "Snap Packaging", "In App Purchases" or "Attracting Non-Ubuntu App Developers". Sadly, attendance dropped radically: there were only about 30-40 people across all the unconference sessions together, and the bigger part of them were Canonical employees. The UbuCon Summit ended in the afternoon, after the unconference sessions.

During the unconference sessions the exhibit hall opened, where many companies and projects had their booths; of course there were also booths from Canonical and Ubuntu. They had various Ubuntu devices, like laptops, phones and a drone running Snappy Ubuntu - even though it wasn't really snappy, but rather slow…

In the afternoon it was time for Bad Voltage Live! It was really cool, and not only because I listen to their podcast regularly. The start was delayed because Jono Bacon's MacBook (running Mac OS and Keynote) had issues playing audio. That was funny, but they somehow managed to partly fix it. I recommend watching the recording on YouTube, especially the beginning with all the audio issues!

After the show there was another social event, the sponsored Bad Voltage after-party, again at the wine bar.

Day 3

The third day was the first SCALE-only day, and it had a huge schedule with up to ten simultaneous talks. Like the other days, it started with a keynote, this time again from Mark Shuttleworth, on "Free Software in the age of app stores". The talk was pretty similar to his UbuCon opening keynote; I would say only about 20% of it was new.

I visited a few talks on Saturday, mainly "Continuous Delivery of Infrastructure with Jenkins", "Building Awesome Communities with GitHub" by Jono Bacon and "Docker, Kubernetes, and Mesos: Compared". Sadly, my understanding of container technologies isn't that deep yet, so I didn't follow most of that talk, but apart from that the talks were pretty good.

The rest of the time I was mostly walking around the exhibit hall, sometimes standing and talking with people at the Ubuntu booth. There were a lot of people there and many were interested in convergence. One guy was so excited about the Nexus 4 with attached screen/keyboard/mouse that he said, "This is so awesome, I would kill for it!". It was kind of strange, but made for a good laugh. ;-)

At the end of the day the "Game Night" took place in a smaller part of the exhibit hall. The idea and the realisation were great! A lot of people were playing different kinds of games, like table tennis, pinball, table football, or even with Lego. I didn't see many people with their phones in their hands (except for taking photos), and almost nobody was using a laptop.

Day 4

The last day started with Sarah Sharp's keynote on "Improving Diversity with Maslow's Hierarchy of Needs". This keynote was in fact very interesting, and she mentioned many points I hadn't thought of. She started her talk by asking everyone to raise their hand if they were male, and then to keep it up only if they were also white - so I had to put my hand down again. All the remaining people had to say: "Improving diversity in open source communities is my responsibility." While I generally agree with that, I was a little confused that I didn't get to keep my hand up because I'm not white. Personally, I have never had (nor directly heard about) any issues in open source communities because I'm not white. That point aside, her talk really impressed many people; she even got standing ovations afterwards.

The next talk for me was "From Sys Admin to Netflix SRE" by two Netflix engineers. This was again a talk I couldn't follow completely, because I don't know much about the sysadmin side. The third and last talk for me was Dustin Kirkland's on "adapt", which was quite interesting.


Looking back at the event, I noticed that I skipped too many talk slots, even though there were many talks at the same time and I had to decide which one to go to. Compared to German Linux and open source conferences, I noticed that the (perceived) percentage of women attending was higher than in Germany. Another cool thing about SCALE is that they help young and new people speak at conferences; I had never before seen kids under the age of 15 talking in front of so many people. Sadly, the video recording and live-streaming weren't that great. All the talks are slowly being added to the YouTube channel; my talk is also available, with broken slides and bad audio quality.

I really enjoyed UbuCon and SCALE. It was my first Linux and open source conference outside Germany, and also my first international Ubuntu event. In the end I talked to so many people, and even skipped some talks because I was busy talking. I hope to have the chance to go to another UbuCon Summit next year, and hope to see you all there again! :-)

Beyond the UbuCon Summit, the next bigger UbuCon is UbuCon Europe, which takes place in Essen, Germany, from the 18th to the 20th of November 2016! As the organiser of that event, I hope to see you there too!

06 February, 2016 09:00PM

Matthew Helmke: Slashdot Effect

The Slashdot Effect isn't what it used to be (or maybe I'm not terribly interesting… possible). This blog was linked from the beginning of an article a couple of days ago. On Thursday, this blog had 178 views. On January 26, 2009, we had 7,120 views (the highest number recorded since I switched to WordPress and my stats were reset), mostly because StumbleUpon listed this post. Before that, back in 2008, we had more than 20,000 visitors in one day when I posted this.

06 February, 2016 04:43PM

Costales: One Year with Ubuntu Phone!

It was exactly one year ago today that I got an Ubuntu Phone :)) That first pre-release for insiders, where I met some wonderful companions, marked the worldwide launch of the first device running Ubuntu Touch.

The presentation in London one year ago

It's impossible not to look back, digging through the press archives, trying to remember and compare what Ubuntu was like before and after that presentation.

Taking photos of the phone with Fernando Lanero
BQ's commitment was clear: it later released the E5 (and, next month, the first tablet with convergence). It also sells its own cases, runs support forums, and for the first time BQ is selling a phone worldwide.

BQ E4.5

Meizu, with a great phone in the MX4, hasn't released any more models, although there have been rumours pointing to an MX5 release. Here's hoping :D


And after a year, what are Ubuntu Phone's weapons? In my opinion, mainly convergence, privacy and software freedom.

Privacy :) Yeah!

Yes, the word said a thousand times, almost to the point of being a curse: WhatsApp. No, that application still doesn't exist for the phone, and it is possibly the platform's only dead weight.

Nothing more to add
And yes :) We can also play games

There aren't millions and millions of applications like on Android or iOS (do we really need 300 different applications to do the same thing?), but the ones that exist are free and of very high quality. And what matters (to me) is having exactly that: a completely open operating system and enough applications to use my phone day to day.

Because anyone attracted to an Ubuntu Phone is a user looking for a device governed by free software that respects their privacy. That must be our starting point. And that is where Ubuntu delivers in spades. Ubuntu has its niche, and it really isn't a small one.

I assure you that having a phone governed by a real GNU/Linux, a genuine Ubuntu in your pocket, is priceless.

The brain of the beast :)

Add a mouse + keyboard + monitor and you'll have an Ubuntu desktop. Photo by Marius Quabeck

The other big asset is convergence, where the competition still hasn't got its act together: with the exception of Windows Phone, nobody offers what Canonical will imminently offer.
Ubuntu has been weaving its web with firm, determined steps, and now it is time to reap the rewards.
The new era, in which your phone is the CPU of your desktop, has arrived. The same Ubuntu for phone, tablet and desktop. The ecosystem is complete :))

Connect a mouse + keyboard and you'll have a mini PC, perfect for travelling

I don't want to finish without thanking everyone who has made Ubuntu Phone what it is today :) Thank you!

Photos: by Marius Quabeck, Fernando Lanero, David Castañón and myself.

06 February, 2016 02:08PM by Marcos Costales (noreply@blogger.com)

ArcheOS: Digital archaeological drawing in the field with QGIS

I should have written this post long ago, but time is always short... The topic is digital archaeological (vector) drawing in the field.

During the 2015 CAA conference, held in Siena (Italy), I participated, among others, in session 9A (Towards a Theory of Practice in Applied Digital Field Methods), moderated by +nicolò dell'unto (Lund University) and James Stuart Taylor. After my speech I was asked whether we (Arc-Team), as a professional archaeological company, are really able to perform digital documentation in real time during an ordinary excavation. I answered that, at least in Italy, this point is very important for a professional company, and that in normal conditions (and also during most emergency excavations) we complete the digital archaeological documentation directly in the field. The reason is simple and becomes more evident every year: funding for cultural heritage keeps shrinking, and has done for at least the last decade. For this reason, while on the one hand we have to try to counter this trend, on the other we have to adapt our methodology to the current reality, which means using the economic resources of the excavation itself to also produce the related documentation (without counting on a post-excavation budget).
The old video below (2014) shows how we manage digital archaeological drawing in the field with QGIS.

The vector layers can be related to a georeferenced photomosaic (two-dimensional photomapping) or to a georeferenced orthophoto (coming from 3D operations based on SfM/MVSR techniques). Of course orthophotos are the best solution, but currently the 3D workflow on standard hardware is pretty slow. This is why, for almost all the palimpsestic documentation, we still work with both systems, 2D photomapping and 3D SfM; depending on the timetable we have in the field, we choose the post-processing operations.
Within QGIS it is possible to draw vector layers in different ways. Based on our experience (as you can see in the video), the two best solutions are:

1. use the Freehand Editing plugin (if you want something really similar to the old traditional methodology, with pencil and paper)

2. use the standard vector drawing tools (if you want to avoid overly complex shapes, like polygons with too many nodes)

IMHO, the Freehand Editing plugin is a perfect solution for field work, so I am planning to add it to qgis-archeos-plugin for ArcheOS Hypatia.

06 February, 2016 01:16PM by Luca Bezzi (noreply@blogger.com)

Ubuntu developers

Joe Liau: People = People

Technology made *for* people
Trapped in technology? (source)

“Ubuntu is about people.”
“Ubuntu is for human beings.”

We have heard these phrases as good reminders of "why" we are making Ubuntu. However, there is a growing sense of disconnect from the definition of "what" we are doing for the people. The "what" has to come back to the "why", so we need to clarify and simplify what we are doing.

Ubuntu = Ubuntu (oo-boon-too) — A free operating system inspired by an African philosophy that says that we all are one.

Ubuntu = People. When we are people-focused, then we are making Ubuntu. Anyone can make a product that people use. Anyone can create convergence of people’s devices. But, Ubuntu brings it all back to the people, and for the people. We don’t get trapped in the technology.

People = People. Ubuntu is about people. But, everyone is unique. We are not all technology-focused, and we don’t all have the freedom to enjoy technology without advanced knowledge. When we create Ubuntu we think of the humans before the technology. When we come together to celebrate Ubuntu, we celebrate the humans who are involved in the project. Our events and attention focus on the people and not just software. This means that we establish environments that allow and encourage people to be people. We don’t get Ubuntu by simply having people there. We get Ubuntu by acknowledging that those people are human beings who are part of the bigger picture. The things that we create are great, but Ubuntu is about people, so it always comes full circle, back to the people.


06 February, 2016 05:11AM

February 05, 2016

Daniel Pocock: Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchairs clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it, the day before the DiEM25 project launches in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

05 February, 2016 10:07PM

Randall Ross: Have We Converged Yet?

Apologies for the long period with no updates. I'll be bringing back this blog with a fresh look and more exciting, original topics soon. I wanted to get this article out without further delay, though, because it captures an important and timely idea that has been missed by the tech news sites... again.

Convergence is not just about a unified computing experience across all your devices. Although that's an important goal, convergence is more about the point in time where your philosophy that technology should respect people converges with that of a group or company that believes the same.

Recently, my friend Wayne (a long-time Ubuntu Vancouverite) shared his thoughts on Ubuntu's convergence announcement.

Here's a teaser from Wayne's blog:

    "... it became even more apparent to me that the ‘battle for the operating system’ will eventually be won by Ubuntu in numbers (it is already won in principle)"

    "You see, Ubuntu cares about you, because it’s built by people who care about things other than shareholders’ dividends."

Please read Wayne's full article here: http://wayneoutthere.com/race-or-marathon-to-convergence/ It's a quick read and will make you say "Hmmm..."

Like Wayne, I hope you will reject those in the tech industry that insist on keeping you focused on what's unimportant. It's *never* about widget this, or kernel that.

It's about the agenda that is behind the technology.

The friendly folks who make Ubuntu are charting a course in computing that respects people. The Ubuntu Tablet is another way to deliver that goal. That's the real news.

Image "Happy Boys" by https://www.flickr.com/photos/deepblue66/ cc-by-nc-sa

05 February, 2016 08:43PM

Thomas Ward: NGINX PPA Cleanup

The NGINX PPAs have had some cleanup done to them today.

Previously, the PPAs kept the 'older' package versions for now-EOL releases (this included keeping ancient versions for Maverick, Natty, Oneiric, Quantal, Raring, Saucy, and Utopic). This was decided in order to prevent people from seeing 404 errors on PPA checks. We also included a long list of "Final Version" items for each Ubuntu release, stating there would be no more updates for that release, while keeping the ancient packages in place for installation.

Looking back, this was a bad idea for two reasons. First, it meant people on older releases could still use the PPA for that release, so versions of NGINX with known security holes could still be installed. Second, it implied that we still ‘support’ the use of older Ubuntu releases in the PPAs, which carries the security connotation that we are OK with people running no-longer-updated releases, which in turn have their own security holes.

So, today, in an effort to discourage the use of ancient Ubuntu versions which get no security updates or support anymore, I’ve made changes to the way that the PPAs will operate going forward: Unless a release recently went End of Life, versions of the nginx package in the PPAs for older Ubuntu releases are no longer going to be kept, and will be deleted a week after the version goes End of Life.

Therefore, as of today, I have deleted all the packages in the NGINX PPAs (both Stable and Mainline, in both staging and release PPAs) for the following releases of Ubuntu:

  • Maverick (10.10)
  • Natty (11.04)
  • Oneiric (11.10)
  • Quantal (12.10)
  • Raring (13.04)
  • Saucy (13.10)
  • Utopic (14.10)

People still using ancient versions of NGINX or Ubuntu are strongly recommended to upgrade to get continued support and security/bug fixes.

05 February, 2016 05:20PM

Stuart Langridge: Android apps and sensitive permissions

In the last two days, I’ve installed two Android apps (names redacted because it’s not their fault!) which, on install, have popped up a custom notification saying that the app “requests Sensitive Permissions”.

UPDATE: this is not these apps’ fault. It is ES File Explorer’s fault. Uninstall ES File Explorer. And everything below applies to the ES File Explorer people.

Tapping this notification pops up a thing named “Apps Analyze” which pretends to be analysing the stuff on your phone and then shows you a bunch of irrelevant information about your phone and weather and Facebook info, which have nothing whatsoever to do with the app you installed.

Let me be clear. This is bullshit. This is nothing more than malware. I wanted to dim my screen, or buy a sandwich. I did not want to have my phone “analysed”; I did not want “sensitive permissions”. I don’t think this thing needs permissions at all; at the very best it’s a completely unwanted bundled thing, like Oracle bundling adware with their Java installer. At worst, it’s some sort of unpleasant malware which harvests data from my phone and ships it off somewhere. I don’t know what it does; it’s certainly bloatware at the very least; there’s a Reddit thread about it.

I don’t know where this is coming from; since it’s shown up in two separate apps, it’s presumably some sort of third-party component, and presumably the authors of it pay app developers to include it. I do know where it’s coming from; it’s from ES File Explorer. If you are an Android app developer and you are using this thing, fucking pack it in. This is a hysterical betrayal of your users’ trust. I know it’s hard work to monetise software that you write. I know it’s tempting to scrape the barrel like this. But if you are using this, you are a terrible person and you should sit down and have a bloody word with yourself. Stop it. You’re pissing in the waterhole and ruining things for everyone. Do you really want to be part of this race to the bottom?

It’s possible that this is an official Android thing, since it’s also showing up in Google Sheets and so on. If so, Android people, what the hell are you thinking of?

05 February, 2016 12:52PM

hackergotchi for SolydXK


Localized Japanese and Dutch versions available

Balloon has made the Japanese localized ISOs available for download.

The Japanese SolydX and SolydK are available in 32-bit and 64-bit downloads here:
or from Balloon’s site:

Thanks Balloon, for your outstanding work!

The Dutch ISOs are available in 64-bit only:

05 February, 2016 10:32AM by Schoelje


Available now: online library services

Photo: Eva Jünger / Münchner Stadtbibliothek

Register and pay online

Online registration is now available to everyone who lives or works in the Munich region. You can conveniently become a customer from home and pay fees online. Credit card and Giropay are available as payment methods.

The introduction of the e-payment function is part of the e-government and open-government strategy of the City of Munich. In this context, the Münchner Stadtbibliothek is an important pilot project for the city and a driving force in meeting society's expectations for digital services.

Online services of the Münchner Stadtbibliothek

The online user account at the Münchner Stadtbibliothek provides access to the library's online services that require registration. eBooks can be reserved and borrowed via the Onleihe, and digital magazines and encyclopedias can be searched.

To borrow books, films, CDs, and sheet music at a city library branch, a library card can be issued on site upon presentation of a national ID card (alternatively: a passport plus proof of address, e.g. a registration certificate).

Register here: www.muenchner-stadtbibliothek.de/opac

Registered online, reading paperless: Onleihe München

The Onleihe München is a service for customers of the Münchner Stadtbibliothek. eMedia, such as eBooks, ePaper, and eVideos, are available around the clock for download or, if already on loan, for reservation.

The Onleihe München can be used with an online user account or with a valid library card and the corresponding password.

More information at www.muenchner-stadtbibliothek.de

Munzinger and Library PressDisplay

The Munzinger online database is also available: it offers major reference works such as Duden and Brockhaus in a convenient, mobile-friendly form, particularly useful for school students. It also includes biography, literature, and geography databases, the SZ archive with all regional editions, and the Naxos audiobook library with access to 4,400 mostly English-language audiobooks.

Munzinger also gives library customers access to the international newspaper portal Library PressDisplay, with more than 4,000 newspapers and magazines from all over the world. Library PressDisplay offers current newspapers and magazines from 100 countries in over 50 languages online, often before the print edition appears. The archive goes back up to 90 days.

Use the Münchner Stadtbibliothek from home, on the go, or in one of our local libraries!

Press release, Münchner Stadtbibliothek, February 2016

The post "Ab sofort: Bibliotheksnutzung online" first appeared on the Münchner IT-Blog.

05 February, 2016 08:45AM by Elke Wildraut

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu 15.04 (Vivid Vervet) End of Life reached on February 4 2016

This is a follow-up to the End of Life warning sent last month to confirm that as of today (February 4, 2016), Ubuntu 15.04 is no longer supported. No more package updates will be accepted to 15.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 15.04 (Vivid Vervet) release almost 9 months ago, on April 23, 2015. As a non-LTS release, 15.04 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 15.04 will reach end of life on Thursday, February 4th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 15.04.

The supported upgrade path from Ubuntu 15.04 is via Ubuntu 15.10. Instructions and caveats for the upgrade may be found at:


Ubuntu 15.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:


Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions, with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-security-announce mailing list on Fri Feb 5 03:54:55 UTC 2016 by Adam Conrad, on behalf of the Ubuntu Release Team

05 February, 2016 04:22AM

February 04, 2016

Dustin Kirkland: How many people in the world use Ubuntu? More than anyone actually knows!

People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post that it seems he's been itching to post for months.

Why the negativity?!? Are you sure? Did you count all of them?

No one has.

How many people in the world use Ubuntu?

Actually, no one can count all of the Ubuntu users in the world!

Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

Let's look at some facts...
  • Docker users have launched Ubuntu images over 35.5 million times.
  • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
  • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and bare metal in 2015 alone.
    • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
    • And that's Ubuntu in private clouds like OpenStack.
    • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
  • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
    • That's 67,000 new Ubuntu cloud instances launched per day.
    • That's 2,800 new Ubuntu cloud instances launched every hour.
    • That's 46 new Ubuntu cloud instances launched every minute.
    • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
  • And then there are Ubuntu phones from Meizu.
  • And more Ubuntu phones from BQ.
  • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
  • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
  • Oh, and the Tesla entertainment system?  All electric Ubuntu.
  • Google's self-driving cars?  They're self-driven by Ubuntu.
  • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
  • Snappy Ubuntu downloads and updates for Raspberry Pi's and Beagle Bone Blacks -- the response has been tremendous.  Download numbers are astounding.
  • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
  • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
  • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
  • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
  • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
  • Tianhe-2, the world's largest supercomputer.  Merely 80,000 Xeons, 1.4 PB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
  • Ever watch a movie on Netflix?  You were served by Ubuntu.
  • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
  • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
  • Do you use Instagram?  Say cheese!
  • Listen to Spotify?  Music to my ears...
  • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
  • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
  • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
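The per-day, per-hour, and per-minute figures quoted above follow directly from the 2 million instances launched in November 2015. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the November 2015 launch-rate figures.
monthly = 2_000_000            # new Ubuntu cloud instances in November 2015
per_day = monthly / 30         # ~66,667, quoted as ~67,000
per_hour = per_day / 24        # ~2,778, quoted as ~2,800
per_minute = per_hour / 60     # ~46
per_second = per_minute / 60   # ~0.77, i.e. nearly one per second
print(round(per_day), round(per_hour), round(per_minute), round(per_second, 2))
```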
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.

Because of who we all are.


04 February, 2016 08:08PM by Dustin Kirkland (noreply@blogger.com)

hackergotchi for Xanadu


2016 release calendar

Several months after the release of version 0.8.0 of Xanadu GNU/Linux, we bring you the release calendar for this year, during which two major versions are planned, finally arriving at the long-awaited version … Read more

04 February, 2016 05:19PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: A new look for tablet

Today we launched a new and redesigned tablet section on ubuntu.com that introduces all the cool features of the upcoming BQ Aquaris M10 Ubuntu Edition tablet.

Breaking out of the box

In this redesign, we have broken out of the box, removing the container that previously held the content of the pages. This makes each page feel more spacious, giving the text and the images plenty of room to shine.

This is something we’ve wanted to do for a while across the entire site, so we thought that having the beautiful, large tablet photos to work with gave us a good excuse to try out this new approach.


The overview page of the tablet section of ubuntu.com, before (left) and after


For most of the section, we’ve used existing patterns from our design framework, but the removal of the container box allowed us to play with how the images behave across different screen sizes. You will notice that if you look at the tablet pages on a medium to small screen, some of the images will be cropped by the edge of the viewport, but if you see the same image in a large screen, you can see it in its entirety.


From the top: the same row on a large, medium and small screen


How we did it

This project was a concerted effort across the design, marketing, and product management teams.

To understand the key goals for this redesign, we collected the requirements and messaging from the key stakeholders of the project. We then translated all this information into wireframes that guide the reader through what Ubuntu Tablet is. These went through a few rounds of testing and iteration with both users and stakeholders. Finally, we worked with a copywriter to refine the words of each section of the tablet pages.


Some of the wireframes


To design the pages, we started with exploring the flow of each page in large and small screens in flat mockups, which were quickly built into a fully functioning prototype that we could keep experimenting and testing on.


Some of the flat mockups created for the redesign


This design process, where we start with flat mockups and move swiftly into a real prototype, is how we design and develop most of our projects, and it is made easier by the existence of a flexible framework and design patterns, that we use (and sometimes break!) as needed.


Testing the new tablet section on real devices


To showcase the beautiful tablet screen designs on the new BQ tablet, we coordinated with professional photographers to deliver the stunning images of the real device that you can enjoy along every new page of the section.


One of the many beautiful device photos used across the new tablet section of ubuntu.com


Many people were involved in this project, making it possible to deliver a redesign that looks great and was completed on time, which is always a good thing :)

In the future

In the near future, we want to remove the container box from the other sections of ubuntu.com, although you may see this change being done gradually, section by section, rather than all in one go. We will also be looking at redesigning our navigation, so lots to look forward to.

Now go experience tablet for yourself and let us know what you think!

04 February, 2016 04:15PM

Canonical Design Team: Embeddable cards for Juju

Juju is a cloud orchestration tool with a lot of unique terminology. This is not so much of a problem when discussing or explaining terms or features within the site or the GUI, but, when it comes to external sources, the context is sometimes lost and everything can start to get a little confusing.

So a project was started to create embeddable widgets of information to not only give context to blog posts mentioning features of Juju, but also to help user adoption by providing direct access to the information on jujucharms.com.

This project was started by Anthony Dillon, one of the developers, to create embeddable information cards for three topics in particular: charms, bundles and user profiles. These cards would function similarly to embedded YouTube videos, or embedding a song from Soundcloud on your own site, as seen below:



Multiple breakpoints for the cards were established (small: 300px and below; medium: 301px to 625px; large: 626px and up) so that they work responsively in a breadth of different situations and complement the user’s content referring to a charm, bundle or a user profile without any additional effort from the user.
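The breakpoints described above amount to a simple width-to-layout mapping. As a sketch (a hypothetical helper, not the actual jujucharms.com code):

```python
def card_size(width_px):
    """Pick the embeddable-card layout for a given viewport width,
    following the small/medium/large breakpoints described above."""
    if width_px <= 300:
        return "small"
    if width_px <= 625:
        return "medium"
    return "large"
```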

We started the process by determining what information we would want to include within the card and then refining that information as we went through the different breakpoints. Here are some of the initial ideas that we put together:

charm  bundle  profile

We wrote down all the information that could be related to each type of card and then discussed how that might carry down to smaller card sizes, removing the unnecessary information as we went. For the profile cards, we felt there was not enough information to display a card above the 625px breakpoint, so we limited that card to the medium size.

Just enter the bundle or the charm name and the card will be generated for you to copy the code snippet to embed into your own content.


You can create your own here: http://www.jujugui.org/community/cards

Below are some examples of the responsive cards at different widths:


04 February, 2016 02:36PM

David Henningsson: 13 ways to PulseAudio

All roads lead to Rome, but PulseAudio is not far behind! In fact, how the PulseAudio client library determines how to try to connect to the PulseAudio server has no less than 13 different steps. Here they are, in priority order:

1) As an application developer, you can specify a server string in your call to pa_context_connect. If you do that, that’s the server string used, nothing else.

2) If the PULSE_SERVER environment variable is set, that’s the server string used, and nothing else.

3) Next, it goes to X to check if there is an x11 property named PULSE_SERVER. If there is, that’s the server string, nothing else. (There is also a PulseAudio module called module-x11-publish that sets this property. It is loaded by the start-pulseaudio-x11 script.)

4) It also checks client.conf, if such a file is found, for the default-server key. If that’s present, that’s the server string.

So, if none of the four methods above gives any result, several items will be merged and tried in order.

First up is trying to connect to a user-level PulseAudio, which means finding the right path where the UNIX socket exists. That in turn has several steps, in priority order:

5) If the PULSE_RUNTIME_PATH environment variable is set, that’s the path.

6) Otherwise, if the XDG_RUNTIME_DIR environment variable is set, the path is the “pulse” subdirectory below the directory specified in XDG_RUNTIME_DIR.

7) If not, and the “.pulse” directory exists in the current user’s home directory, that’s the path. (This is for historical reasons – a few years ago PulseAudio switched from “.pulse” to using XDG compliant directories, but ignoring “.pulse” would throw away some settings on upgrade.)

8) Failing that, if XDG_CONFIG_HOME environment variable is set, the path is the “pulse” subdirectory to the directory specified in XDG_CONFIG_HOME.

9) Still no path? Then fall back to using the “.config/pulse” subdirectory below the current user’s home directory.
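In rough Python pseudocode, the path lookup in steps 5-9 looks something like this (a sketch of the priority order only, not the actual C library code):

```python
import os

def pulse_runtime_path(home, env):
    """Sketch of how the client library picks the user-level
    PulseAudio runtime directory (steps 5-9 above)."""
    if "PULSE_RUNTIME_PATH" in env:                        # step 5
        return env["PULSE_RUNTIME_PATH"]
    if "XDG_RUNTIME_DIR" in env:                           # step 6
        return os.path.join(env["XDG_RUNTIME_DIR"], "pulse")
    if os.path.isdir(os.path.join(home, ".pulse")):        # step 7
        return os.path.join(home, ".pulse")
    if "XDG_CONFIG_HOME" in env:                           # step 8
        return os.path.join(env["XDG_CONFIG_HOME"], "pulse")
    return os.path.join(home, ".config", "pulse")          # step 9
```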

Okay, so maybe we can connect to the UNIX socket inside that user-level PulseAudio path. But if it does not work, there are still a few more things to try:

10) Using a path of a system-level PulseAudio server. This directory is /var/run/pulse on Ubuntu (and probably most other distributions), or /usr/local/var/run/pulse in case you compiled PulseAudio from source yourself.

11) If client.conf has the key “auto-connect-localhost” set, the library also tries connecting to tcp4:…

12) …and tcp6:[::1], too. Of course we cannot leave IPv6-only systems behind.

13) As the last straw of hope, the library checks client.conf for the key “auto-connect-display”. If it’s set, it checks the DISPLAY environment variable, and if it finds a hostname (i.e., something before the “:”), then that host will be tried too.

To summarise: first the client library checks for a server string in steps 1-4; if there is none, it builds a server string out of one item from steps 5-9, plus up to four more items from steps 10-13.

And that’s all. If you ever want to customize how you connect to a PulseAudio server, you have a smorgasbord of options to choose from!

04 February, 2016 12:51PM

Colin King: Intel Platform Quality of Service and Cache Allocation Technology

One issue when running parallel processes is contention for shared resources such as the Last Level Cache (aka LLC or L3 cache).  For example, a server may be running a set of Virtual Machines with processes that are memory and cache intensive, hence producing a large amount of cache activity. This can impact the other VMs and is known as the "Noisy Neighbour" problem.

Fortunately the next generation Intel processors allow one to monitor and also fine tune cache allocation using Intel Cache Monitoring Technology (CMT) and Cache Allocation Technology (CAT).

Intel kindly loaned me a 12 thread development machine with CMT and CAT support to experiment with this technology using the Intel pqos tool.   For my experiment, I installed Ubuntu Xenial Server on the machine. I then installed KVM and a VM instance of Ubuntu Xenial Server.   I then loaded the instance using stress-ng running a memory bandwidth stressor:

 stress-ng --stream 1 -v --stream-l3-size 16M  
…which allocates 16MB in 4 buffers and performs various reads, computations and writes on these, hence causing a "noisy neighbour".

Using pqos,  one can monitor and see the cache/memory activity:
sudo apt-get install intel-cmt-cat
sudo modprobe msr
sudo pqos -r
TIME 2016-02-04 10:25:06
0 0.59 168259k 9144.0 12195.0 0.0
1 1.33 107k 0.0 3.3 0.0
2 0.20 2k 0.0 0.0 0.0
3 0.70 104k 0.0 2.0 0.0
4 0.86 23k 0.0 0.7 0.0
5 0.38 42k 24.0 1.5 0.0
6 0.12 2k 0.0 0.0 0.0
7 0.24 48k 0.0 3.0 0.0
8 0.61 26k 0.0 1.6 0.0
9 0.37 11k 144.0 0.9 0.0
10 0.48 1k 0.0 0.0 0.0
11 0.45 2k 0.0 0.0 0.0
Now to run a stress-ng stream stressor on the host and see the performance while the noisy neighbour is also running:
stress-ng --stream 4 --stream-l3-size 2M --perf --metrics-brief -t 60
stress-ng: info: [2195] dispatching hogs: 4 stream
stress-ng: info: [2196] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
stress-ng: info: [2196] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
stress-ng: info: [2196] stress-ng-stream: Using L3 CPU cache size of 2048K
stress-ng: info: [2196] stress-ng-stream: memory rate: 1842.22 MB/sec, 736.89 Mflop/sec (instance 0)
stress-ng: info: [2198] stress-ng-stream: memory rate: 1847.88 MB/sec, 739.15 Mflop/sec (instance 2)
stress-ng: info: [2199] stress-ng-stream: memory rate: 1833.89 MB/sec, 733.56 Mflop/sec (instance 3)
stress-ng: info: [2197] stress-ng-stream: memory rate: 1847.16 MB/sec, 738.86 Mflop/sec (instance 1)
stress-ng: info: [2195] successful run completed in 60.01s (1 min, 0.01 secs)
stress-ng: info: [2195] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [2195] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [2195] stream 22101 60.01 239.93 0.04 368.31 92.10
stress-ng: info: [2195] stream:
stress-ng: info: [2195] 547,520,600,744 CPU Cycles 9.12 B/sec
stress-ng: info: [2195] 69,959,954,760 Instructions 1.17 B/sec (0.128 instr. per cycle)
stress-ng: info: [2195] 11,066,905,620 Cache References 0.18 B/sec
stress-ng: info: [2195] 11,065,068,064 Cache Misses 0.18 B/sec (99.98%)
stress-ng: info: [2195] 8,759,154,716 Branch Instructions 0.15 B/sec
stress-ng: info: [2195] 2,205,904 Branch Misses 36.76 K/sec ( 0.03%)
stress-ng: info: [2195] 23,856,890,232 Bus Cycles 0.40 B/sec
stress-ng: info: [2195] 477,143,689,444 Total Cycles 7.95 B/sec
stress-ng: info: [2195] 36 Page Faults Minor 0.60 sec
stress-ng: info: [2195] 0 Page Faults Major 0.00 sec
stress-ng: info: [2195] 96 Context Switches 1.60 sec
stress-ng: info: [2195] 0 CPU Migrations 0.00 sec
stress-ng: info: [2195] 0 Alignment Faults 0.00 sec
…so that's about 1842 MB/sec memory rate and 736 Mflop/sec per CPU across 4 CPUs.  And pqos shows the cache/memory activity as:
sudo pqos -r
TIME 2016-02-04 10:35:27
0 0.14 43060k 1104.0 2487.9 0.0
1 0.12 3981523k 2616.0 2893.8 0.0
2 0.26 320k 48.0 18.0 0.0
3 0.12 3980489k 1800.0 2572.2 0.0
4 0.12 3979094k 1728.0 2870.3 0.0
5 0.12 3970996k 2112.0 2734.5 0.0
6 0.04 20k 0.0 0.3 0.0
7 0.04 29k 0.0 1.9 0.0
8 0.09 143k 0.0 5.9 0.0
9 0.15 0k 0.0 0.0 0.0
10 0.07 2k 0.0 0.0 0.0
11 0.13 0k 0.0 0.0 0.0
Using pqos again, we can find out how much LLC cache the processor has:
sudo pqos -v
NOTE: Mixed use of MSR and kernel interfaces to manage
CAT or CMT & MBM may lead to unexpected behavior.
INFO: Monitoring capability detected
INFO: CPUID.0x7.0: CAT supported
INFO: CAT details: CDP support=0, CDP on=0, #COS=16, #ways=12, ways contention bit-mask 0xc00
INFO: LLC cache size 9437184 bytes, 12 ways
INFO: LLC cache way size 786432 bytes
INFO: L3CA capability detected
INFO: Detected PID API (perf) support for LLC Occupancy
INFO: Detected PID API (perf) support for Instructions/Cycle
INFO: Detected PID API (perf) support for LLC Misses
ERROR: IPC and/or LLC miss performance counters already in use!
Use -r option to start monitoring anyway.
Monitoring start error on core(s) 5, status 6
So this CPU has 12 cache "ways", each of 786432 bytes (768K). One or more "Class of Service" (COS) types can be defined that can use one or more of these ways. One uses a bitmap, with each bit representing a way, to indicate how the ways are to be used by a COS. For example, to use all 12 ways on my example machine, the bitmap is 0xfff (111111111111). A way can be exclusively mapped to a COS, shared, or not used at all. Note that the ways in the bitmap must be contiguously allocated, so a mask such as 0xf3f (111100111111) is invalid and cannot be used.
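The contiguity rule can be checked with a little bit-twiddling. Here is a sketch (a hypothetical helper, not part of the pqos tool): after shifting out trailing zeros, a contiguous run of 1s is one less than a power of two, so `n & (n + 1) == 0` holds.

```python
def valid_cat_mask(mask):
    """Return True if a CAT capacity bitmask is valid, i.e. its set
    bits form a single contiguous run and at least one bit is set."""
    if mask <= 0:
        return False
    # Drop trailing zeros so the run of 1s starts at bit 0.
    while mask & 1 == 0:
        mask >>= 1
    # A contiguous run of 1s is one less than a power of two.
    return mask & (mask + 1) == 0

# 0xfff (all 12 ways) and 0xffe (ways 1-11) are contiguous;
# 0xf3f has a hole and would be rejected.
```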

In my experiment, I want to create 2 COS types. The first COS will have just 1 cache way assigned to it; CPU 0 will be bound to this COS, and the VM instance will be pinned to CPU 0. The second COS will have the other 11 cache ways assigned to it, and all the other CPUs can use this COS.

So, create COS #1 with just 1 way of cache, and bind CPU 0 to this COS, and pin the VM to CPU 0:
sudo pqos -e llc:1=0x0001
sudo pqos -a llc:1=0
sudo taskset -apc 0 $(pidof qemu-system-x86_64)
And create COS #2, with 11 ways of cache and bind CPUs 1-11 to this COS:
sudo pqos -e "llc:2=0x0ffe"
sudo pqos -a "llc:2=1-11"
And let's see the new configuration:
sudo pqos  -s
NOTE: Mixed use of MSR and kernel interfaces to manage
CAT or CMT & MBM may lead to unexpected behavior.
L3CA COS definitions for Socket 0:
L3CA COS0 => MASK 0xfff
L3CA COS1 => MASK 0x1
L3CA COS2 => MASK 0xffe
L3CA COS3 => MASK 0xfff
L3CA COS4 => MASK 0xfff
L3CA COS5 => MASK 0xfff
L3CA COS6 => MASK 0xfff
L3CA COS7 => MASK 0xfff
L3CA COS8 => MASK 0xfff
L3CA COS9 => MASK 0xfff
L3CA COS10 => MASK 0xfff
L3CA COS11 => MASK 0xfff
L3CA COS12 => MASK 0xfff
L3CA COS13 => MASK 0xfff
L3CA COS14 => MASK 0xfff
L3CA COS15 => MASK 0xfff
Core information for socket 0:
Core 0 => COS1, RMID0
Core 1 => COS2, RMID0
Core 2 => COS2, RMID0
Core 3 => COS2, RMID0
Core 4 => COS2, RMID0
Core 5 => COS2, RMID0
Core 6 => COS2, RMID0
Core 7 => COS2, RMID0
Core 8 => COS2, RMID0
Core 9 => COS2, RMID0
Core 10 => COS2, RMID0
Core 11 => COS2, RMID0
…showing Core 0 bound to COS1, and Cores 1-11 bound to COS2, with COS1 having 1 cache way and COS2 the remaining 11 cache ways.
Now re-run the stream stressor and see if the VM has less impact on the L3 cache:
stress-ng --stream 4 --stream-l3-size 1M --perf --metrics-brief -t 60
stress-ng: info: [2232] dispatching hogs: 4 stream
stress-ng: info: [2233] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
stress-ng: info: [2233] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
stress-ng: info: [2233] stress-ng-stream: Using L3 CPU cache size of 1024K
stress-ng: info: [2235] stress-ng-stream: memory rate: 2616.90 MB/sec, 1046.76 Mflop/sec (instance 2)
stress-ng: info: [2233] stress-ng-stream: memory rate: 2562.97 MB/sec, 1025.19 Mflop/sec (instance 0)
stress-ng: info: [2234] stress-ng-stream: memory rate: 2541.10 MB/sec, 1016.44 Mflop/sec (instance 1)
stress-ng: info: [2236] stress-ng-stream: memory rate: 2652.02 MB/sec, 1060.81 Mflop/sec (instance 3)
stress-ng: info: [2232] successful run completed in 60.00s (1 min, 0.00 secs)
stress-ng: info: [2232] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [2232] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [2232] stream 62223 60.00 239.97 0.00 1037.01 259.29
stress-ng: info: [2232] stream:
stress-ng: info: [2232] 547,364,185,528 CPU Cycles 9.12 B/sec
stress-ng: info: [2232] 97,037,047,444 Instructions 1.62 B/sec (0.177 instr. per cycle)
stress-ng: info: [2232] 14,396,274,512 Cache References 0.24 B/sec
stress-ng: info: [2232] 14,390,808,440 Cache Misses 0.24 B/sec (99.96%)
stress-ng: info: [2232] 12,144,372,800 Branch Instructions 0.20 B/sec
stress-ng: info: [2232] 1,732,264 Branch Misses 28.87 K/sec ( 0.01%)
stress-ng: info: [2232] 23,856,388,872 Bus Cycles 0.40 B/sec
stress-ng: info: [2232] 477,136,188,248 Total Cycles 7.95 B/sec
stress-ng: info: [2232] 44 Page Faults Minor 0.73 sec
stress-ng: info: [2232] 0 Page Faults Major 0.00 sec
stress-ng: info: [2232] 72 Context Switches 1.20 sec
stress-ng: info: [2232] 0 CPU Migrations 0.00 sec
stress-ng: info: [2232] 0 Alignment Faults 0.00 sec
Now with the noisy neighbour VM constrained to use just 1 way of L3 cache, the stream stressor on the host can achieve about 2592 MB/sec and about 1030 Mflop/sec per CPU across 4 CPUs.

This is a relatively simple example.  With the ability to monitor cache and memory bandwidth activity, one can carefully tune a system to make best use of the limited L3 cache resource and maximise throughput where needed.

There are many applications where Intel CMT/CAT can be useful, for example fine tuning containers or VM instances, or pinning user space networking buffers to cache ways in DPDK for improved throughput.

04 February, 2016 12:03PM by Colin Ian King (noreply@blogger.com)

hackergotchi for Tails


Tails report for August, 2015

We're sorry it took us so long to publish this report. We'll publish fresher news soon :)

The 16th of August was Tails' 6th birthday. Well, Tails existed before, but it was the birthday of the first public release. Actually, it was named amnesia then; that was before the fusion with Incognito.

Never mind the details. Let's celebrate!


The following changes were introduced in Tails 1.5:

New features

  • Disable access to the local network in the Tor Browser. You should now use the Unsafe Browser to access the local network.

Upgrades and changes

  • Install Tor Browser 5.0 (based on Firefox 38esr).
  • Install a 32-bit GRUB EFI boot loader. Tails should now start on some tablets with Intel Bay Trail processors among others.
  • Let the user know when Tails Installer has rejected a device because it is too small.

Fixed problems

  • Our AppArmor setup has been audited and improved in various ways which should harden the system.
  • The network should now be properly disabled when MAC address spoofing fails.

Documentation and website

User experience


  • We asked our mirrors to disable HTTP ETag to better support resumed downloads and documented how to do that.

  • Despite the Tor bug, our pool of HTTP mirrors is getting small. We need mirrors again and have stopped saying that our pool is full.

  • Our test suite covers 191 scenarios, 6 more than in July.


  • We finally signed a contract with OTF, running from February 2015 to July 2016, and sent our first report.

  • We created ourselves a Flattr account.


  • We have Tails stickers again! We'll share them during upcoming events, you can also make your own.

  • Alan attended GUADEC, the GNOME conference in Gothenburg, Sweden on August 7 – 9 and connected us better with the GNOME community.

  • A talk about Tails took place during DebConf15 in Heidelberg, Germany, on August 15th.

  • DrWhax did a lightning talk about Tails at CCCamp on August 13 - 17 in Zehdenick, Germany.

On-going discussions

  • Alan submitted for review a new version of Tor Monitor (to replace Vidalia) and Sascha Steinbiss proposed to package it for Debian.

  • We drafted a script to run a Mumble server from Tails, verified that the Mumble client in Tails Jessie works well, and started using it for internal meetings.

Press and testimonials


At the end of the month:

All website PO files

  • de: 18% (1265) strings translated, 0% strings fuzzy, 17% words translated
  • fr: 46% (3223) strings translated, 2% strings fuzzy, 43% words translated
  • pt: 26% (1842) strings translated, 3% strings fuzzy, 24% words translated

Total original words: 79407

Core PO files

  • de: 59% (794) strings translated, 1% strings fuzzy, 66% words translated
  • fr: 91% (1219) strings translated, 3% strings fuzzy, 92% words translated
  • pt: 82% (1102) strings translated, 9% strings fuzzy, 85% words translated

Total original words: 14404


  • Tails has been started more than 469,870 times this month. This makes 15,157 boots a day on average.

  • 31,870 downloads of the OpenPGP signature of Tails ISO from our website.

  • 127 bug reports were received through WhisperBack.

-- Report by BitingBird for Tails folks

04 February, 2016 11:53AM

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Australians stuck abroad and alleged sex crimes

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Inquisition hunted witches in the middle ages and beyond.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The Royal Commission into child sex abuse by priests has heard serious allegations claiming the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

04 February, 2016 10:30AM


Through staff participation: from intranet to employee portal

The intranet of the City of Munich has not changed much in more than ten years. The city is therefore now planning to set up a completely new employee portal.

Collaboration instead of pure information

More attractive, better structured, more informative and more modern: the city is responding to these and similar wishes from its staff with the "Redesign Intranet" project. A new employee portal is also an answer to new forms of digital information gathering and collaboration. The goal is to create simpler communication and faster information channels, easier usability, and central knowledge management.

The employees' wishes take centre stage

To find out how staff envision their intranet, the city is breaking completely new ground: instead of a classic survey, there is a moderated staff participation process. All employees can thus shape their new employee portal interactively and directly. They are invited to contribute their own ideas and to comment on and discuss proposals. The project has the personal backing of Lord Mayor Dieter Reiter.

Staff participation in the "Redesign Intranet" project


To implement the participation process, the city decided to work with the Berlin-based company Zebralog, which has wide-ranging experience with staff participation, including at the municipal level. The web application is built on the open-source solution Drupal. After registering, employees create a profile and can then submit their own ideas, comment on other people's suggestions, and propose a logo and a name for their new portal. They can read up on the project in detail, watch statements about it in a media library, follow current announcements from the project management, and subscribe to a newsletter.

What happens next

The staff participation has been running since 27 January. "How do you want to communicate with each other within Munich's city administration in the future? How do you envision a shared flow of information? Which tools do you need for your daily work?" Staff have been answering questions like these, and after only a short time lively discussions had already sprung up on the platform, along with plenty of good ideas on what city-wide collaboration could look like in the future.

The participation will run for six weeks. At the end there will be a huge pool of suggestions, which will then feed into a concept. The platform will stay online after the participation ends, so that staff can follow how their proposals are implemented and how, step by step, a modern employee portal for the City of Munich emerges from them.


A contribution by Bettina Link, deputy project lead of "Redesign Intranet"

The post "Durch Mitarbeiterbeteiligung vom Intranet zum Mitarbeiterportal" first appeared on the Münchner IT-Blog.

04 February, 2016 09:35AM by Stefan Döring

hackergotchi for SolydXK


New Clothes for the emperor

It has been over three years since we started, and even a distribution would like to have something new to wear once in a while. With the input of the community and friends, I've updated the look of our logos. Theming has been changed in line with the new logos, and we've also been working hard on the theming for the Enthusiast's Editions. Our main site, forum, Facebook, Twitter and Google+ accounts also reflect these changes.

I especially want to thank Grizzler, who has given me the support I needed and for making those marvelous Enthusiast’s Editions.

The localized versions will follow later and I will post the release of those ISOs separately.

Here are some screenshots:
SolydX Desktop
SolydK Desktop

You might also notice that there’s something different with the Live Installer. I’ve recently completed the conversion from Python2.7/Gtk to Python3/Gtk+. You will find some minor changes in the GUI but under the hood it’s a totally different beast. I’d like to invite everybody to help me improve the code: https://github.com/SolydXK/live-installer-3

We hope you are going to enjoy these new editions and that you like the new clothes of the emperor.

Go, get the ISOs here: http://solydxk.com

04 February, 2016 09:27AM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: Welcome Back Poster

My office door is on the second floor in front of the main staircase in my building. I work with my door open so that my colleagues and my students know when I'm in. The only time I consider deviating from this policy is the first week of the quarter, when I'm faced with a stream of students, usually lost on their way to class, whom, embarrassingly, I am usually unable to help.

I made this poster so that these conversations can, in a way, continue even when I am not in the office.



04 February, 2016 06:25AM

February 03, 2016

Philip Ballew: Scale and Ubucon

Just a little over a week ago I had the privilege of both attending and helping out at the Southern California Linux Expo. As someone who semi-frequently travels to far-away lands for Linux-related software events, it is nice to be able to visit a place so close to home and see the impact of a conference like this first hand.

The first portion of the conference involved me assisting with and attending UbuCon at SCALE. What was different about this UbuCon was that it was a cooperative effort of both the Ubuntu community and Canonical. All of the talks were amazing. One was about building a Juju charm; another was about getting started in free software as a career. I saw a keynote by Mark Shuttleworth and a talk by a member of my local community, Nathan Haines; both are pictured right below.
(Mark Shuttleworth giving an opening Keynote to the conference)

(Nathan Haines talking at UbuCon)

Once the Ubuntu mini-conference ended, I was able to work with my favorite part of SCALE, The Next Generation, the part of SCALE I work on. Here, we have children from all over the country and beyond come and speak about the amazing things they are doing with free software. This year we had a child speak about statistics with R who made me feel like I should know more about statistics!

Below is a picture from a local high school that came out to present on some of the work they have been doing.

Needless to say, it was a great time and I cannot wait until next year!

03 February, 2016 11:30PM

hackergotchi for SparkyLinux


Enlightenment 0.20.5


There is a quick (very quick) update of Enlightenment 0.20.5 ready in Sparky repository now.

If you have an older 0.20.x version installed, simply make system upgrade:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make fresh installations, do:
sudo apt-get update
sudo apt-get install enlightenment


03 February, 2016 09:35PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Online Summit: 3-5 May 2016

[T]he next Ubuntu Online Summit is going to be from 3rd – 5th May 2016, which is two weeks after the 16.04 release.

Summit and related pages will be updated in due time.

Originally posted to the community-announce mailing list on Wed Feb 3 10:07:47 UTC 2016 by Daniel Holbach

03 February, 2016 10:42AM

February 02, 2016

hackergotchi for SparkyLinux


Enlightenment 0.20.4


There is an update of Enlightenment 0.20.4 ready in Sparky repository now.
EFL and friends have been updated to the latest version, 1.17.0, too.

If you have an older 0.20.3 version installed, simply make system upgrade:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make fresh installations, do:
sudo apt-get update
sudo apt-get install enlightenment

There is one more change in our repos – the ‘e’ packages are now available as source packages too.
If you would like to build them yourself, add the Sparky source repository and refresh the package list:
sudo apt-get update
Then install Enlightenment from source in the following order:
1. efl
2. evas-generic-loaders
3. emotion-generic-players
4. elementary
5. python-efl
6. enlightenment

Other “e” packages will be rebuilt soon to provide source packages too.

If you find any problems building the packages from source, simply let me know via the contact form or post on our forums. The most probable cause of a broken build from source would be (I hope not) missing dependencies.


02 February, 2016 11:50PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Rhonda D'Vine: Moby

Today is one of these moods. And sometimes one needs certain artists/music to foster it. Music is powerful. There are certain bands I know that I have to stay away from when feeling down to not get too deep into it. Knowing that already helps a lot. The following is an artist that is not completely in that area, but he got powerful songs and powerful messages nevertheless; and there was this situation today that one of his songs came to my mind. That's the reason why I present you today Moby. These are the songs:

  • Why Does My Heart Feel So Bad?: The song for certain moods. And lovely at that, not dragging me too much down. Hope you like the song too. :)
  • Extreme Ways: The ending tune from the movie The Bourne Ultimatum, and I fell immediately in love with the song. I used it for a while as morning alarm, a good start into the day.
  • Disco Lies: If you consider the video disturbing you might be shutting your eyes from what animals are facing on a daily basis.

Hope you like the selection; and like always: enjoy!

/music | permanent link | Comments: 2 | Flattr this

02 February, 2016 11:08PM

hackergotchi for Lihuen


Release of Lihuen 6

16:57 02 feb 2016 (ART)

On Friday 18 December, Lihuen GNU/Linux 6 was released, the new update of the distribution from the Facultad de Informática of the UNLP, based on the latest stable version of Debian, called Jessie.

The main characteristic of Lihuen 6 is that it ships only with the LXDE desktop environment, unlike the previous version, which also included Cinnamon. This environment was chosen because it is among the lightest, achieving better performance on machines with limited resources than the Cinnamon desktop used in the previous version.

On the other hand, as with Lihuen 5, the new version was built for the i386 and amd64 architectures. One of the novelties is that the live system and the installer now coexist in a single image, so the system can be tried out before deciding to install it.

In addition, to mark the tenth anniversary of Lihuen's birth, a command called "diezaños" was added which, when run in a terminal, shows memorable quotes from the various developers who were part of the team over those 10 years.

As in all its editions, Lihuen 6 includes a large selection of educational applications, combined with a simple and intuitive environment, aiming for the comfort of all its users: from the most frequent ones to those in the computer rooms of schools and faculties, as well as in libraries and social institutions, where we have the pleasure of contributing the system. These contributions come through coordination with several outreach projects of the Facultad de Informática, such as the "E-basura" programme, which installs Lihuen on the computers it restores and donates to social institutions; "Expandiendo la Comunidad de Software Libre en las escuelas"; "Programando con Robots, Juegos y Software Libre"; and "El barrio va a la Universidad".

02 February, 2016 07:16PM

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Trimming the fat from the Ubuntu online tour

Maybe, like me, you've seen more of the inside of your gym in January than you had for the six months previous. New year, new diet, new me... or something like that.

A big creeping problem in recent years is that websites have been on an all out binge, and not just over the winter holidays — big videos, big images, fancy fonts, third-party libraries — they just can’t get enough of ’em.

Average page weights increased by 15% in 2014, and although I haven't yet seen any similar research for 2015, I'm willing to bet that trend did not reverse.

Last week I was tasked with making some performance optimisations to the Ubuntu online tour.

This legacy codebase stretches all the way back to 2012, and as such was not benefitting from some of the modern tools we now have at our disposal as web developers.

We have been maintaining our largest codebases such as ubuntu.com and canonical.com to ensure they are as performant as they can be but this Ubuntu tour repository slipped through the cracks somewhat.

We have users all over the world and many of them don’t enjoy the luxury of fat internet pipes that we enjoy in our London office. Time to trim the fat…

At first look, I noted that loading the site required 235 HTTP requests to download 2.7MB of data. Chunky Charlie!


Network waterfall screenshot


Delving into the codebase, I immediately spotted some big areas ripe for improvement:

  • The CSS files were not being concatenated nor were they minified.
  • The Javascript was also being loaded in separate files, also un-minified.
  • The image assets were uncompressed.
  • The HTML was un-minified.

Beyond that – I ran the site URL through Google’s PageSpeed Insights and also discovered;

  • Browser caching was not being leveraged, as static assets did not have any Expires headers specified.
  • There were quite a few CSS and JavaScript dependencies blocking rendering of the page.
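Missing Expires headers are typically a quick server-side fix. As a hedged illustration only (assuming an Apache server with mod_expires enabled; the types and lifetimes here are invented, not what we actually deployed):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    # Static assets rarely change between releases, so cache them aggressively
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType image/jpeg             "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```

With headers like these in place, repeat visitors fetch the heavy assets from their browser cache instead of re-downloading them.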

As you see, the site was only scoring a lowly 46/100, not great.


Google Page Speed Insights screenshot


For jobs such as this, my first weapon of choice is the task runner Gulp. It's quick and easy to drop Gulp on top of any existing site and use some of its wide array of plugins to optimise source assets for performance.

For this job I used gulp-concat, gulp-htmlmin, gulp-imagemin, gulp-minify-css, gulp-rename, gulp-uglify, critical (with gulp) and gulp-rev.

Explaining how to use each of them is beyond the scope of this article but you can view my Gulpfile.js and accompanying package.json file to see what I did.
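A trimmed-down sketch of such a build configuration (illustrative only; the paths and task names here are assumptions, and the real file is linked above):

```javascript
// Hypothetical, minimal gulpfile.js sketch (gulp 3.x style, as used in 2016)
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var minifyCss = require('gulp-minify-css');
var imagemin = require('gulp-imagemin');

// Concatenate and minify all stylesheets into one file
gulp.task('css', function () {
  return gulp.src('src/css/*.css')
    .pipe(concat('styles.min.css'))
    .pipe(minifyCss())
    .pipe(gulp.dest('css'));
});

// Same idea for the scripts (order matters -- list dependencies like jQuery first)
gulp.task('js', function () {
  return gulp.src(['src/js/jquery.js', 'src/js/*.js'])
    .pipe(concat('scripts.min.js'))
    .pipe(uglify())
    .pipe(gulp.dest('js'));
});

// Losslessly compress image assets
gulp.task('images', function () {
  return gulp.src('src/img/*')
    .pipe(imagemin())
    .pipe(gulp.dest('img'));
});

gulp.task('default', ['css', 'js', 'images']);
```

Note how the JavaScript glob explicitly lists jQuery first, which is exactly the ordering concern discussed below.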

When retro-optimising a site, you might find you have to make certain compromises such as placing “src” folders inside folders you are optimising to store the original documents, then output the optimised versions into the original folder to ensure everything is backwards compatible and you haven’t broken any relative links. You should also be careful when globbing Javascript files as they may need to be loaded in a certain order to prevent race conditions. This is also true when concatenating and including Javascript libraries such as jQuery.

In an ideal world, you would not deploy any files from the repository you have compiled locally. They should be ignored by version control and compiled on the fly by running your task runner on the server using a continuous integration engine such as Jenkins or Travis CI. This is much cleaner and will prevent merge conflicts when multiple developers are working on the same codebase.

So — when we have all of the above configured and then run it over our legacy codebase, how much weight did it shave?


Network Waterfall - After


Good news! Now to load the site, we only need 166 HTTP requests (-29%) to download 2.2MB (-18%) of data. Slim(mer) Jim for the win!
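Those percentages follow straight from the before/after numbers:

```python
# Before/after figures quoted in this post
requests_before, requests_after = 235, 166
mb_before, mb_after = 2.7, 2.2

req_saving = (requests_before - requests_after) / requests_before * 100
mb_saving = (mb_before - mb_after) / mb_before * 100

print(f"requests: -{req_saving:.1f}%")  # -29.4%
print(f"payload:  -{mb_saving:.1f}%")   # -18.5%
```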

This should mean our users with slower connections will have a much improved experience.

When we run the leaner site now deployed through Google Pagespeed Insights – we now get a much healthier score also.


Google Pagespeed - After


This was a valuable exercise for our team and reminded us we not only have a responsibility to keep all our new and upcoming work performant but we should also address any legacy sites still currently in use wherever possible.

A leaner web is a faster web and I’m sure that’s something we can all get behind.


02 February, 2016 02:57PM

hackergotchi for HandyLinux


A week to change everything

Today I'm allowing myself to write my first article on the blog, in order to take a little stock. First article... taking stock... Yes, that may seem odd, but with HandyLinux nothing is impossible. Let that be known.

"Numbers! Give me numbers!"

arpinux in his new role.

Thanks to the deployment of Piwik on the HandyLinux server, we can estimate the number of downloads of the new version. These figures are indicative only and in no way represent the actual number of installations. Indeed, only downloads from the official servers and peer-to-peer downloads (torrents) are counted.

Still, as of this Sunday 31 January 2016, more than 2000 people had downloaded our good old HandyLinux 2.3.

Thuban (the blonde one) and Starsheep looking at the numbers (as for the apple, don't worry, it's a toy).

What should be remembered from all this (because, well, numbers...) is that we owe you a huge THANK YOU for your trust and your support. Thanks again.

And what next?

"What on earth have you been doing?!" - arpinux, the day after handing over.

When developing HandyLinux, it can be very easy to break the whole system and introduce instabilities with a few changes to the source code. And you will understand that we absolutely want to avoid that. In everyone's interest.

So the next version will certainly come with fewer new features than the last one did. We need to find our bearings in the code. arpinux is guiding us a great deal, don't worry, but we are not going to launch into major building work.

That is why development of the next HandyLinux version focuses on simplification, clean-up, and the stability of the code and the interface. We will quietly work on improving what already exists, even if little of this will be visible to users. Sometimes it does you good to pause for a while, too.

That said, you are never safe from new features...

PS: All the images in this article are under the CC0 licence and are therefore completely free of rights.
HandyLinux - the Debian distribution without the headache...

02 February, 2016 01:00PM by Starsheep

hackergotchi for Ubuntu developers

Ubuntu developers

Nathan Haines: Ubuntu Free Culture Showcase submissions are now open again!

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.04 LTS users.

Not only will the chosen content be featured on the next set of pressed Ubuntu discs shared worldwide over the next two years, but it will also serve the joint purposes of providing a perfect test for new users trying Ubuntu's live session or new installations, and of celebrating the fantastic talents of artists who embrace free content licenses.

While we hope to see contributions from the video, audio, and photographic realms, I also want to thank the artists who have provided wallpapers for Ubuntu release after release. Ubuntu 15.10 shipped with wallpapers from the following contributors:

I'm looking forward to seeing the next round of entrants and a difficult time picking final choices to ship with Ubuntu 16.04 LTS.

For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

02 February, 2016 11:33AM

hackergotchi for Ubuntu


Vacant Developer Membership Board seats: Call for nominations

I regret to inform you that Iain Lane has expressed his wish to resign from the Developer Membership Board after more than five years of service. The board and the Ubuntu developer community would like to thank Laney for all the hard work he has put into the Developer Membership Board. We are especially grateful for the plethora of applicants he has mentored and sponsored uploads for. He was last re-elected a year ago; however, he is cutting his tenure short and has requested that his seat be included in the upcoming election. A formal resignation letter was posted to the Technical Board mailing list.

In response to Iain's move, the remaining original board member, Stéphane Graber, is also cutting his term short and vacating his seat for the upcoming election. We would like to thank Stéphane Graber for his long 6 years of service on the board!

In addition to the above, we will soon have three vacant Developer Membership Board seats due to regular term expiration. Brian Murray, Micah Gersten, and Dimitri John Ledkov will reach the end of their terms on 2016-03-09.

This telegraphic message is a call for nominations.

The DMB is responsible for reviewing and approving new Ubuntu developers, meeting for about an hour once a fortnight. Candidates should be Ubuntu developers themselves, and should be well qualified to evaluate prospective Ubuntu developers and decide when to entrust them with developer privileges or to grant them Ubuntu membership status.

The new members will be chosen using Condorcet voting. Members of the ubuntu-dev team in Launchpad will be eligible to vote. To ensure that you receive a ballot in the initial mail, please add a visible email address to your Launchpad profile (although there will be an opportunity to receive a ballot after the vote has started if you do not wish to do this).
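For the curious: a Condorcet winner is the candidate who beats every other candidate in head-to-head comparisons across all ballots. A toy sketch of that rule (hypothetical candidates and ballots; not the actual ballot software used for DMB elections):

```python
def condorcet_winner(ballots):
    """Return the candidate ranked above every rival on a majority of
    ballots, or None if no such candidate exists (a Condorcet cycle).
    Each ballot is a complete ranking, best first."""
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b head-to-head if more voters rank a above b
        a_above = sum(1 for ballot in ballots
                      if ballot.index(a) < ballot.index(b))
        return a_above > len(ballots) - a_above

    for c in candidates:
        if all(beats(c, rival) for rival in candidates if rival != c):
            return c
    return None

# Three hypothetical voters ranking three hypothetical candidates:
ballots = [["Ann", "Bob", "Cat"],
           ["Bob", "Ann", "Cat"],
           ["Ann", "Cat", "Bob"]]
print(condorcet_winner(ballots))  # Ann (beats Bob 2-1 and Cat 3-0)
```

Unlike a simple plurality vote, this method cannot elect a candidate whom a majority would have rejected in any pairwise match-up.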

The term of the new board member will be 2 years. Providing at least six valid nominations are received, voting will commence on Monday 15th February 2016 and will last for 14 days, ending on Monday 29th February 2016. The DMB will confirm the appointments in its next meeting thereafter.

Please send GPG-signed nominations to developer-membership-board at lists.ubuntu.com (which is a private mailing list accessible only by DMB members) by midnight UTC on Monday 15th February 2016.

If nominating a developer other than yourself, please confirm that the nominee is happy to sit on the board before emailing the DMB.

Please consider writing a short statement on your wiki page if nominated so that others get a better idea of who they are voting for. If you include a link to this in your nomination mail or a followup, the DMB will share it when the call for votes begins.

Originally posted to the ubuntu-devel-announce mailing list on Mon Feb 1 15:31:11 UTC 2016 by Dimitri John Ledkov

02 February, 2016 07:46AM by lyz

Ubuntu Weekly Newsletter Issue 452

Welcome to the Ubuntu Weekly Newsletter. This is issue #452 for the week January 25 – 31, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Paul White
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

02 February, 2016 04:19AM by lyz

February 01, 2016

hackergotchi for ArcheOS


Arc-Team 2015 Iran-Mission: The Video

In June 2015 we had the pleasure to prove our abilities for the first time in the Middle East:

On behalf of the University of Innsbruck - Department of Near Eastern Archaeology, in collaboration with the University of Sistan and Baluchestan, and at the invitation of Iran's Research Institute for Cultural Heritage and Tourism, we documented one of the most important sites of the Persian heartland:

The Palace of Ardashir Pāpakan (in Persian: دژ اردشير پاپکان‎‎ Dezh-e Ardashir Pāpakān), also known as the Atash-kadeh (آتشکده).

Ardashir, also known as "the Unifier" (180–242 AD), was the founder of the Sasanian Empire. He ruled Estakhr from 206 and then Pars Province from 222, and finally became "King of Kings of the Sasanian Empire" in 224 with the overthrow of the Parthian Empire, ruling the Sasanian Empire until his death in 242. The dynasty ruled for four centuries, until it was overthrown by the Rashidun Caliphate in 651. (Wikipedia)

The building is located two kilometers (1.2 miles) north of the ancient city of Gor.

The palace complex includes a pond, fed by a natural spring.

The structure's dimensions are 116 m by 54 m. The three domes are almost 18 m high; the southeastern one is partially collapsed. The structure was built of local rocks and mortar, with plasterwork on the inside.

After positioning the site with DGPS, we documented the building room by room, applying structure-from-motion technology powered by free and open source applications.

In ten days we took about 30,000 pictures from the ground and from our drone, covering the whole building and two Sasanid rock reliefs: the Investiture Relief of Ardashir I and the Equestrian Relief, showing Ardashir's fight against the Parthian king Artabanus V in 224.

Our three-minute clip sketches the 2015 mission of "Digital Archaeological Documentation of Iranian Monuments" through all its phases.

We would particularly like to thank:
Sandra Heinsch-Kuntner
Rouhollah Shirazi
Walter Kuntner
Sasan Darvish Zadeh

01 February, 2016 09:34PM by Rupert Gietl (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: My Free Software Activities in January 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I did not ask for any paid hours this month and won’t be requesting paid hours for the next 5 months as I have a big project to handle with a deadline in June. That said I still did a few LTS related tasks:

  • I uploaded a new version of debian-security-support (2016.01.07) to officialize that virtualbox-ose is no longer supported in Squeeze and that redmine was not really supportable ever since we dropped support for rails.
  • Made a summary of the discussion about what to support in wheezy and started a new round of discussions with some open questions. I invited contributors to try to pickup one topic, study it and bring the discussion to some conclusion.
  • I wrote a blog post to recruit new paid contributors. Brian May, Markus Koschany and Damyan Ivanov applied and will do their first paid hours over February.

Distro Tracker

Due to many nights spent on playing Splatoon (I’m at level 33, rank B+, anyone else playing it?), I did not do much work on Distro Tracker.

After having received the bug report #809211, I investigated the reasons why SQLite was no longer working satisfactorily in Django 1.9 and I opened the upstream ticket 26063 and I had a long discussion with two upstream developers to find out the best fix. The next point release (1.9.2) will fix that annoying regression.

I also merged a couple of contributions: two patches from Christophe Siraut (one adding descriptions to keywords, cf. #754413; one making it more obvious that the chevrons in action items are actionable to show more data), and a patch from Balasankar C in #810226 fixing a bad URL in an action item.

I fixed a small bug in the “unsubscribe” command of the mail bot, it was not properly recognizing source packages.

I updated the task notifying of new upstream versions to use the data generated by UDD (instead of the data generated by Christoph Berg’s mole-based implementation which was suffering from a few bugs). 

Debian Packaging

Testing experimental sbuild. While following the work of Johannes Schauer on sbuild, I installed the version from experimental to support his work and give him some feedback. In the process I uncovered #810248.

Python sponsorship. I reviewed and uploaded many packages for Daniel Stender who keeps doing great work maintaining prospector and all its recursive dependencies: pylint-common, python-requirements-detector, sphinx-argparse, pylint-django, prospector. He also prepared an upload of python-bcrypt which I requested last month for Django.

Django packaging. I uploaded Django 1.8.8 to jessie-backports.
My stable update for Django 1.7.11 was not handled before the release of Debian 8.3, even though it was filed more than 1.5 months earlier.

Misc stuff. My stable update for debian-handbook was accepted fairly shortly after my last monthly report (thank you Adam!) so I uploaded the package once acked by a release manager. I also sponsored a backports upload of zim prepared by Joerg Desch.

Kali related work

Kernel work. The switch to Linux 4.3 in Kali resulted in a few bug reports that I investigated with the help of #debian-kernel and where I reported my findings back so that the Debian kernel could also benefit from the fixes I uploaded to Kali: first we included a patch for a regression in the vmwgfx video driver used by VMWare virtual machines (which broke the gdm login screen), then we fixed the input-modules udeb to fix support of some Logitech keyboards in debian-installer (see #796096).

Misc work. I made a non-maintainer upload of python-maxminddb to fix #805689, which had caused it to be removed from stretch and which we needed fixed in Kali. I also had to NMU libmaxminddb since it was no longer available on armel, and we actually support armel in Kali. During that NMU, it occurred to me that dh-exec could offer an "optional install" feature, that is, installing a file when it exists but not failing when it doesn't. I filed this as #811064 and it stirred up quite some debate.


See you next month for a new summary of my activities.


01 February, 2016 07:31PM

Xanadu developers

¿Qué es Git? (What is Git?)

Seen on xkcd, licensed under CC BY-NC 2.5. Filed under: Geekstuff. Tagged: git

01 February, 2016 03:17PM by sinfallas

Ubuntu developers

Ubuntu App Developer Blog: UI Toolkit for OTA9

Hello folks, it’s been a while since the last update came from our busy toolkit ants. As OTA9 came out recently, it is time for a refresher from our side to show you the latest and greatest cocktail of features our barmen have prepared. Besides the bugfixes we’ve provided, here is a list of the big changes we’ve introduced in OTA9. Enjoy!


One of the most awaited components is the PageHeader. This now makes it possible to have a detached header component, which can then be used in a Page, a Rectangle, an Item, wherever you wish. It is composed of a base, plain Header component, which does not have any layout, but handles the default behavior like showing and hiding the header and dealing with auto-hiding when an attached Flickable is moved. Part of that API was introduced in OTA8, but because it wasn’t yet polished enough, we decided not to announce it then and to provide more distilled functionality now.

The PageHeader then adds the navigation and the trailing actions through the - hopefully - well known ActionBar component.


Yes, it’s back. Voldemort is back! But this time it is back as a detached component :) The API is pretty similar to PageHeader’s (it contains a leading and a trailing ActionBar), and you can place it wherever you wish. The only restriction so far is that its layout supports horizontal orientation only.

Facelifted Scrollbar

Yes, finally we got a loaned headcount to help us out in creating a nice facelift for the Scrollbar. The design follows the same principles we have for the upcoming 16.04 desktop, with the scroll handler residing inside the bar and two pointers to drive page up/down scrolling.

This guy also convinced us that we need a Scrollview, like in QtQuick Controls v1, so we can handle “buddy” scrollbars: the situation when horizontal and vertical scrollbars are needed at the same time and their overlap must be dealt with. So, we have that one too :) And let's name the barman: Andrea Bernabei aka faenil is the one!

The unified BottomEdge experience

Finally we got a complete design pattern ready for the bottom edge behavior, so it was about time to get a component around the pattern. It can be placed within any component, and its content can be staged, meaning it can be changed while the content is dragged. The content is always loaded asynchronously for now; we will add support for forcing synchronous loading in upcoming releases.

Focus handling in CheckBox, Switch, Button and ActionBar

Starting now, pressing Tab and Shift+Tab on a keyboard will show a focus ring on components that support it. CheckBox, Switch, Button and ActionBar have this right now, others will follow soon.

Action mnemonics

As we are heading towards the implementation of contextual menus, we are preparing a few features as prerequisite work for the menus. One of these is adding mnemonic handling to Action.

So far there was only one way to define shortcuts for an Action: through the shortcut property. This can now also be achieved by specifying the mnemonic in the text property of the Action using the ‘&’ character. This will then be converted into a shortcut and, if there is a hardware keyboard attached, the mnemonic will be underlined.

01 February, 2016 10:21AM by Zsombor Egri (zsombor.egri@canonical.com)

Joel Leclerc: Injecting code into running process with linux-inject

I was about to title this “Injecting code, for fun and profit”, until I realized that this may give a different sense than I originally intended… :P

I won’t cover the reasons behind doing such, because I’m pretty sure that if you landed on this article, you would already have a pretty good sense of why you want to do this …. for fun, profit, or both ;)

Anyway, after trying various programs and reading on how to do it manually (not easy!), I came across linux-inject, a program that injects a .so into a running application, similar to how LD_PRELOAD works, except that it can be done while a program is running… and it also doesn’t actually replace any functions either (but see the P.S. at the bottom of this post for a way to do that). In other words, maybe ignore the LD_PRELOAD simile :P

The documentation of it (and a few other programs I tried) was pretty lacking though. And for good reason, the developers probably expect that most users who would be using these kinds of programs wouldn’t be newbies in this field, and would know exactly what to do. Sadly, however, I am not part of this target audience :P It took me a rather long time to figure out what to do, so in hopes that it may help someone else, I’m writing this post! :D

Let’s start by quickly cloning and building it:

git clone https://github.com/gaffe23/linux-inject.git
cd linux-inject

Once that’s done, let’s try the sample example bundled in with the program. Open another terminal (so that you have two free ones), cd to the directory you cloned linux-inject to (e.g. cd ~/workspace/linux-inject), and run ./sample-target.

Back in the first terminal, run sudo ./inject -n sample-target sample-library.so

What this does is that it injects the library sample-library.so to a process by the -name of sample-target. If instead, you want to choose your victim target by their PID, simply use the -p option instead of -n.

But … this might or might not work. Since Linux 3.4, there’s a security module named Yama that can disable ptrace-based code injections (or code injections period, I doubt there is any other way). To allow this to work, you’ll have to run either one of these commands (I prefer the second, for security reasons):

echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope # Allows any process to inject code into any other process started by the same user. Root can access all processes
echo 2 | sudo tee /proc/sys/kernel/yama/ptrace_scope # Only allows root to inject code

Try it again, and you will hopefully see “I just got loaded” in-between the “sleeping…” messages.

Before I get to the part about writing your own code to inject, I have to warn you: Some applications (such as VLC) will segfault if you inject code into them (via linux-inject, I don’t know about other programs, this is the first injection program that I managed to get working, period :P). Make sure that you are okay with the possibility of the program crashing when you inject the code.

With that (possibly ominous) warning out of the way, let’s get to writing some code!

#include <stdio.h>

__attribute__((constructor))
void hello() {
    puts("Hello world!");
}

If you know C, most of this should be pretty easy to understand. The part that confused me was __attribute__((constructor)). All this does is tell the compiler to run this function as soon as the library is loaded. In other words, this is the function that will be run when the code is injected. As you may imagine, the name of the function (in this case, hello) can be whatever you wish.

Compiling is pretty straightforward, nothing out of the ordinary required:

gcc -shared -fPIC -o libhello.so hello.c

Assuming that sample-target is running, let’s try it!

sudo ./inject -n sample-target libhello.so

Amongst the wall of “sleeping…”, you should see “Hello world!” pop up!

There’s a problem with this though: the code interrupts the program flow. If you try looping puts("Hello world!");, it will continually print “Hello world!” (as expected), but the main program will not resume until the injected library has finished running. In other words, you will not see “sleeping…” pop up.

The answer is to run it in a separate thread! So if you change the code to this …

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void* thread(void* a) {
    while (1) {
        puts("Hello world!");
        sleep(1); /* don't spin the CPU */
    }
    return NULL;
}

__attribute__((constructor))
void hello() {
    pthread_t t;
    pthread_create(&t, NULL, thread, NULL);
}

… it should work, right? Not if you inject it into sample-target. sample-target is not linked to libpthread, and therefore any function that uses pthread functions will simply not work. Of course, if you link it to libpthread (by adding -lpthread to the linking arguments), it will work fine.

However, let’s keep it as-is, and instead, use a function that linux-inject depends on: __libc_dlopen_mode(). Why not dlopen()? dlopen() requires the program to be linked to libdl, while __libc_dlopen_mode() is included in the standard C library! (glibc’s version of it, anyways)

Here’s the code:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <dlfcn.h>

/* Forward declare these functions */
void* __libc_dlopen_mode(const char*, int);
void* __libc_dlsym(void*, const char*);
int   __libc_dlclose(void*);

void* thread(void* a) {
    while (1) {
        puts("Hello world!");
        sleep(1);
    }
    return NULL;
}

__attribute__((constructor))
void hello() {
    /* Note libpthread.so.0. For some reason,
       using the symbolic link (libpthread.so) will not work */
    void* pthread_lib = __libc_dlopen_mode("libpthread.so.0", RTLD_LAZY);
    pthread_t t;
    int (*pthread_lib_create)(pthread_t*, const pthread_attr_t*,
                              void* (*)(void*), void*);

    *(void**)(&pthread_lib_create) = __libc_dlsym(pthread_lib, "pthread_create");
    pthread_lib_create(&t, NULL, thread, NULL);
}


If you haven’t used the dl* functions before, this code probably looks absolutely crazy. I would try to explain it, but the man pages are quite readable, and do a way better job of explaining than I could ever hope to try.

And on that note, you should (hopefully) be well off to injecting your own code into other processes!

If anything doesn’t make sense, or you need help, or just even to give a thank you (they are really appreciated!!), feel more than free to leave a comment or send me an email! :D And if you enjoy using linux-inject, make sure to thank the author of it as well!!

P.S. What if you want to change a function inside the host process? This tutorial was getting a little long, so instead, I’ll leave you with this: http://www.ars-informatica.com/Root/Code/2010_04_18/LinuxPTrace.aspx and specifically http://www.ars-informatica.com/Root/Code/2010_04_18/Examples/linkerex.c . I’ll try to make a tutorial on this later if someone wants :)

01 February, 2016 05:44AM

Joe Liau: The Start of Charm: Another Guide for New Juju Charmers

let’s start charming

This is the basic blueprint of my system for juju charming. I found that this was the quickest and least problematic setup for myself.

Below is the guide to working with this setup. Most of this will apply for working with a local server as well.

While I was working on my first juju charm, I found that the documentation was quite helpful, but I also ran into some recurring issues. As a result, I curated a lot of content, and created notes, which are now in the form of a supplementary guide for those heading down a similar path.


  • Install and setup juju on a single, local desktop system for creating and testing charms
  • Give you basic terminal commands for working with juju
  • Give some tips for troubleshooting the juju environment

Follow the guide on the LEFT, and refer to the RIGHT when necessary.

Terminal commands in this guide will look like this.

Juju guide Troubleshooting

sudo add-apt-repository ppa:juju/stable
sudo apt-get update


Install juju for local environment use:

sudo apt-get install juju-core juju-local
juju generate-config
juju switch local


juju bootstrap

The juju environment should be ready for working in now.


This guide will assume that you are working from the home directory, so please setup in home:

cd ~
mkdir -p charms/trusty
(swap “trusty” for “precise” if necessary)

You can now put charms that you are working on into ~/charms/trusty and deploy them via the local repository method (see below). Each charm will have its own unique directory that should match the charm name.

Install charm-tools for creating new charms, or testing existing ones:

sudo apt-get install charm-tools

Bootstrap Errors:
If you get any errors during bootstrap, then the environment is probably already bootstrapped. You may need to restart the juju db and agent services. This might happen if you reboot the computer (you will notice that the juju commands just hang).

sudo service juju-db-$USER-local start
sudo service juju-agent-$USER-local start

Wait a few minutes for the agent to reconnect.

Destroying environment:

If the whole environment becomes messy or faulty, you can start over.

juju destroy-environment local
You will probably have to enter the superuser password, and then re-bootstrap.

In the worst case you might have to purge the juju installation and start again:
sudo apt-get purge juju*
sudo rm -rf ~/.juju

Some other errors might require a juju package upgrade.


juju status

This will give you the details of what your current juju environment is doing.
Pay attention to public-address (IP), and current state of your charm. Don’t interact with it until it is “started”.

juju debug-log

A running log of whatever juju is doing. It will show you where charms are at, if there is an error, and when hooks are “completed.”
(You must CTRL C to get out of it.)

Status Checks:
It is important to be patient when checking on the status of charms. Some issues are resolved by waiting. You can check juju status periodically to see changes.
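Checking the status periodically can be scripted. Here is a small sketch (poll_until is a made-up helper name, and the juju command and pattern in the comment are placeholders you would substitute for your own deployment):

```shell
# Run a command repeatedly until its output matches a pattern,
# e.g.: poll_until "juju status" "agent-state: started" 20
poll_until() {
    cmd="$1"; pattern="$2"; tries="${3:-10}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # $cmd is deliberately unquoted so it word-splits into command + args
        if $cmd | grep -q "$pattern"; then
            echo "ready"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "timed out"
    return 1
}

# Demonstration with a stand-in command:
poll_until "echo agent-state: started" "started"
```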

DEPLOYING SERVICES (i.e. “installing” charms)

a) You can deploy any “recommended” charm with:

juju deploy charmName

e.g. juju deploy juju-gui

You can deploy multiple charms without waiting for the previous one to finish.
Just don’t add relations until they are BOTH “started.”

b) If you want to deploy a charm that you are working on locally (one-line command):

juju deploy --repository=/home/$USER/charms/ local:trusty/charmname
e.g. juju deploy --repository=/home/$USER/charms/ local:trusty/diaspora

Replace “trusty” with “precise” if necessary.

c) You can also deploy from personal trunks that haven’t yet been recommended:

juju deploy cs:~launchpadUserId/trusty/charmname

e.g. juju deploy cs:~joe/trusty/ethercalc-6

d) Deploying from the GUI (see GUI section below)

Destroy services (i.e. charm installations):

Maybe you installed the wrong one, or it “failed” to install or configure.
(You should probably destroy relations first.)

juju destroy-service serviceName
e.g. juju destroy-service suitecrm

“Un-Dead” Services ( Can’t Destroy )

Sometimes things are “dying” forever, but don’t actually die because they are in an “error state.”

If a charm/relation is in an “error” state, it will hang indefinitely at each error. You can’t even destroy it.

You can “resolve” the errors until all the hooks have gone through the cycle at which point the thing may die.

juju resolved serviceName/0

e.g. juju resolved suitecrm/0

if you have more than one of the same service, the /# will indicate which one.

*Be sure to spell “resolved” correctly. I never get it right the first time :(

ssh into a service (remember a service is running inside its own “machine” by default)

If you want to go into the virtual machine that your service is running on to fix/break things more:

juju ssh serviceName/0

e.g. juju ssh suitecrm/0

No username or password needed. You have root access with “sudo”. Keep track of where you are (purple circle vs orange circle). “exit” to return to the local terminal


You can link/relate charms to each other if compatible. Commonly a database and another service.

juju add-relation charmName1 charmName2

WAIT again. Check the status to see the “x-relation-changed” hook running, etc.

Can’t add relation:

Check the charm’s readme to see if a special syntax is required for the relation.

Generally I like to wait for both services/charms to be in a ready state before adding a relation between them.

Destroy relations:

relation-changed hook failed, or the charms don’t like each other anymore:

juju destroy-relation charm1 charm2

e.g. juju destroy-relation suitecrm mysql

SAMPLE WORKFLOW: Deploying Services
Let’s see if that suitecrm charm is working for you.

Deploy database:

juju deploy mysql

Deploy suitecrm:

juju deploy suitecrm

Run juju status or juju debug-log to see when BOTH charms are done.

Just because it has a public-address does not mean that it’s ready to be used.

Add relation:

juju add-relation suitecrm mysql

Check status…… WAIT!

While you’re waiting… why not check out the readme document.

Access the service:

juju status and get the public-address of suitecrm, then visit in your browser. You should see the login page.

User: Admin
Pass: thisisaTEST!


This will give you graphical way of working with juju. It is quite magical, but requires manual installation if not using juju-quickstart.

juju deploy juju-gui

WAIT. Do a “juju status” to see what stage the deployment is in.

Run this in your terminal again to get the admin password

cat ~/.juju/environments/local.jenv | grep password

Once started, copy the public-address. Usually 10.0.#.### and visit that in a browser. It will likely complain about an insecure connection. For our purposes you can add the exception.

Login with: admin and the password from the above command.

The GUI is the simplest way to deploy and manage services. It does not provide much debugging information at this time. Most of the usage is pretty self-explanatory.

When you deploy a service it will show the icon with colours to indicate its status (these have been varying lately):

Yellow = wait
Red = Stop. error…

You do have to “commit” the changes to the canvas.

your first charm (source)

Hopefully this guide has provided you with an acceptable environment for working on your charm(s). Further documentation exists for starting a charm, but I also recommend finding an existing charm that is similar to the one that you want to create so that you can model the structure. More on this process can be seen in an earlier post.

Remember that you are not alone in this project: juju add-relation me community 😉

01 February, 2016 05:34AM

Blankon developers

Mahyuddin Idram Ahmad: Tarsius Gnome Themes

Tarsius is a light and clean theme for GTK/Gnome Shell 3.18, similar in look to the Yosemite themes.


$ mkdir ~/.themes
$ git clone https://github.com/dotovr/Tarsius.git ~/.themes/Tarsius

01 February, 2016 04:52AM by Mahyuddin Idram Ahmad (noreply@blogger.com)

Ubuntu developers

Jono Bacon: The Hybrid Desktop

OK, folks, I want to share a random idea that cropped up after a long conversation with Langridge a few weeks back. This is merely food for thought and designed to trigger some discussion.

Today my computing experience is comprised of Ubuntu and Mac OS X. On Ubuntu I am still playing with GNOME Shell and on Mac I am using the standard desktop experience.

I like both. Both have benefits and disadvantages. My Mac has beautiful hardware and anything I plug into it just works out the box (or has drivers). While I spend most of my life in Chrome and Atom, I use some apps that are not available on Ubuntu (e.g. Bluejeans and Evernote clients). I also find multimedia is just easier and more reliable on my Mac.

My heart will always be with Linux though. I love how slick and simple Shell is and I depend on the huge developer toolchain available to me in Ubuntu. I like how customizable my desktop is and that I can be part of a community that makes the software I use. There is something hugely fulfilling about hanging out with the people who make the tools you use.

So, I have two platforms and use the best of both. The problem is, they feel like two different boxes of things sat on the same shelf. I want to jumble the contents of those boxes together and spread them across the very same shelf.

The Idea

So, imagine this (this is total fantasy, I have no idea if this would be technically feasible.)

You want the very best computing experience, so you first go out and buy a Mac. They have arguably the nicest overall hardware combo (looks, usability, battery etc) out there.

You then download a distribution from the Internet. This is shipped as a .dmg and you install it. It then proceeds to install a bunch of software on your computer. This includes things such as:

  • GNOME Shell
  • All the GNOME 3 apps
  • Various command line tools commonly used on Linux
  • An ability to install Linux packages (e.g. Debian packages, RPMs, snaps) natively

When you fire up the distribution, GNOME Shell appears (or Unity, KDE, Elementary etc) and it is running natively on the Mac, full screen like you would see on Linux. For all intents and purposes it looks and feels like a Linux box, but it is running on top of Mac OS X. This means hardware issues (particularly hardware that needs specific drivers) go away.

Because Shell is native, it integrates with the Mac side of the fence. All the Mac applications can be browsed and started from Shell. Nautilus shows your Mac filesystem.

If you want to install more software you can use something such as apt-get, snappy, or another service. Everything is pulled in and available natively.

Of course, there will be some integration points where this may not work (e.g. alt-tab might not be able to display Shell apps as well as Mac apps), but importantly you can use your favorite Linux desktop as your main desktop yet still use your favorite Mac apps and features.

I think this could bring a number of benefits:

  • It would open up a huge userbase as a potential audience. Switching to Linux is a big deal for most people. Why not bring the goodness to the Mac userbase?
  • It could be a great opportunity for smaller desktops to differentiate (e.g. Elementary).
  • It could be a great way to introduce people to open source in a more accessible way (it doesn’t require a new OS).
  • It could potentially bring lots of new developers to projects such as GNOME, Unity, KDE, or Elementary.
  • It could significantly increase the level of testing, translations and other supplemental services due to more people being able to play with it.

Of course, from a purely Free Software perspective it could be seen as a step back. Then again, with Darwin being open source and the desktop and apps you install in the distribution being open source, it would be a mostly free platform. It wouldn’t be free in the eyes of the FSF, but then again, neither is Ubuntu. 😉

So, again, just wanted to throw the idea out there to spur some discussion. I think it could be a great project to see. It wouldn’t replace any of the existing Linux distros, but I think it could bring an influx of additional folks over to the open source desktops.

So, two questions for you all to respond to:

  1. What do you think? Could it be an interesting project?
  2. If so, technically how do you think this could be accomplished?

01 February, 2016 03:17AM

Sean Davis: Catfish 1.3.4 Released (New PPA)

With a slew of updates and a new build system, Catfish 1.3.4 is now available! This update fixes a number of bugs, adds initial support for PolicyKit, and introduces a new PPA for Ubuntu users. What’s New? New Features Initial PolicyKit integration for requesting administrative rights to update the search database. Bug Fixes Fixes for […]

01 February, 2016 02:41AM

Svetlana Belkin: Ubuntu Membership News: New Page Wiki Added

Today, I added a new wiki page in the Ubuntu Membership board area called Best Practices. This page will hold guides on how to apply, what to expect, etc., from those who applied in the past. Right now, it only has the blog post that I wrote about the lessons I learned when I applied around this time last year.

Hopefully this page can help the new applicants.

P.S. Thank you wxl for this idea.

01 February, 2016 12:41AM

January 31, 2016

Mythbuntu: Mythbuntu theme survey

The Mythbuntu team would like some feedback on our current MythTV theme. We would appreciate it if you filled out this survey (no personal data is collected) whether you use the theme or not.

31 January, 2016 06:27PM by Thomas Mashos (thomas@mashos.com)

Jonathan Riddell: FOSDEM Photos

KDE people getting to know our Gnome friends. The Gnome chap gave me a big hug just after, so it must have gone well, whatever they were talking about.

Ruphy on WikiToLearn, one of the more stylish speakers of the day

Rasterman gave a talk on Enlightenment and how it’s being ported to Wayland for use in Tizen projects and more. Turns out Rasterman is a real person called Carsten, good speaker too.

Hallway track

Paul holds court to discuss Project Kobra. No, I’ve no idea.

Stephen Kelly on his CMake addon CMakeDaemon which lets IDEs understand CMake files for code completion and highlighting goodness.

It’s the KDE neon launch party, what a happy bunch.


31 January, 2016 05:45PM

Colin King: Pagemon improvements

Over the past month I've been finding the odd moments [1] to add some small improvements and fix a few bugs in pagemon (a tool to monitor process memory).  The original code went from a sketchy proof-of-concept prototype to a somewhat more usable tool in a few weeks, so my main concern recently was to clean up the code and make it more efficient.

With the use of tools such as valgrind's cachegrind and perf I was able to work on some of the code hot-spots [2] and reduce it from ~50-60% CPU down to 5-9% CPU utilisation on my laptop, so it's definitely more machine friendly now.  In addition I've added the following small features:
  • Now one can specify the name of a process to monitor as well as the PID.  This also allows one to run pagemon on itself(!), which is a bit meta.
  • Perf events showing Page Faults and Kernel Page Allocates and Frees, toggled on/off with the 'p' key.
  • Improved and snappier clean up and exit when a monitored process exits.
  • Far more efficient page map reading and rendering.
  • Out of Memory (OOM) scores added to VM statistics window.
  • Process activity (busy, sleeping, etc) to VM statistics window.
  • Zoom mode min/max with '[' (min) and ']' (max) keys.
  • Close pop-up windows with key 'c'.
  • Improved handling of rapid map expansion and shrinking.
  • Jump to end of map using 'End' key.
  • Improve the man page.
I've tried to keep the tool small and focused, and I don't want feature bloat to make it unwieldy and overly complex.  "Do one job, and do it well" is the philosophy behind pagemon. At just 1500 lines of C, it is as complex as I want it to be for now.

Version 0.01.08 should be hitting the Ubuntu 16.04 Xenial Xerus archive in the next 24 hours or so.  I also have the latest version in my PPA (ppa:colin-king/pagemon) built for Trusty, Vivid, Wily and Xenial.

Pagemon is useful for spotting unexpected memory activity, and it is just interesting to watch the behaviour of memory-hungry processes such as web browsers and virtual machines.

[1] Mainly very late at night when I can't sleep (but that's another story...).  The git log says it all.
[2] Reading in /proc/$PID/maps and efficiently reading per page data from /proc/$PID/pagemap

31 January, 2016 05:09PM by Colin Ian King (noreply@blogger.com)

Mirko Pizii: Test Android Auto – New Opel Astra [ENG]

Well yes, we don't only review and test products like computers; today we are also looking at the car industry, testing the new Opel Astra Elective, which supports Android Auto.

First of all, before I start, I want to thank my friend Davide, who works in an Opel workshop, for all the patience and availability he gave me. Many thanks, because he let me write this review.
Of course, I tested all this with his smartphone (a OnePlus Two) and mine (a Nexus 5). The screenshots were made by him :)

Details about new Opel Astra Elective

Unfortunately, in this first “episode” I will focus on the use of Android Auto, but if you want more details about the car itself, you can go to the Opel website: http://www.opel.com/

What is Android Auto?

I don’t quite know how to put it… Android Auto is everything and nothing…

What I want to say, informally, is that Android Auto is what we see on our car’s display.
It looks like an ordinary operating system where, by default, different separate applications are installed, which you might have to update to fix bugs, perhaps with a trip to your trusted mechanic.

No, you don’t have to do any of those things. It’s only a sort of “screen mirroring”, or better: you see on your car’s display what you would otherwise see on your smartphone’s screen. You may ask, “What about apps?”
No problem with apps: they run directly on your smartphone! It sounds strange, no?

So ok, you might not understand what I’m saying and of course, what you can do with Android Auto.
To give an example, you can think about using modern technology system (including functionalities of your smarphone) into every car that have Android Auto receiver.
After installing official application via Play Store, you will be able to open, through display, apps which gives support such as Spotify, WhatsApp, standard functionalities like messages, calls, Google Maps, Google Play Music and many more.
Of course to go on, you should connect your smartphone to your car using the USB cables.

Exploring Android Auto

If we disable automatic entry into Android Auto when we connect the phone, or we exit this mode without disconnecting the USB cable, the “Transmission” icon becomes the Android Auto one.

1_Android_auto_opelastra(Click image to enlarge)

After connecting, a short, fast tutorial explains how to use Android Auto.
Of course, if you haven’t installed the “Android Auto” application, you will be asked to do so through the Play Store.

The first thing that appears after connecting is this:

2_Android_auto_opelastra(Click image to enlarge)

Here is the small tutorial where the app explains the first steps. On this screen, Android Auto says we can press a button (available on the steering wheel) to activate voice commands (which rely on Google Now’s functionality).

3_Android_auto_opelastra(Click image to enlarge)

Don’t like the button, or has it stopped working because we broke it? No problem: we can also trigger voice commands by pressing the microphone icon at the top right.

4_Android_auto_opelastra(Click image to enlarge)

And here the app shows us the menu, which acts as a launcher. It gives us access to various applications: navigation, phone, music, etc.
By default, Google Maps is used for navigation, Google Play Music for our music, and our usual default app for everything else.

Of course we can use different messaging apps like WhatsApp, Telegram and so on, but only if they are compatible with Android Auto. (More will be soon.)


(Click image to enlarge)

When everything is loaded (and the warnings accepted), we arrive at the first screen, which is what we see in Google Now; so what we see on our phone, we now see on the car’s display.

In my case, I see that I can start navigation to drive to Modena, or I can choose the last address I entered into Google Maps. Traffic conditions and an estimate of how long the trip would take are also displayed.

7_Android_auto_opelastra(Click image to enlarge)

Don’t like Google Play Music, or have no internet connection? No problem: we can use Opel’s standard radio system. Here’s how it looks.
Note that we can also see the weather forecast: that’s surely Google Now!

8_Android_auto_opelastra(Click image to enlarge)

Touching the “Phone” icon shows our address book. If we want, we can also dial a phone number and start a call.
Naturally, we can use voice commands when available, by pressing the button on the steering wheel or on the display.

9_Android_auto_opelastra(Click image to enlarge)

Starting navigation, as mentioned, we get the 3D view of Google Maps; here too we can enter an address by talking (through voice commands) or type it on the touchscreen.

10_Android_auto_opelastra(Click image to enlarge)

We were talking about choosing a different application instead of the default one.
When we press a button in the menu and more than one app is available, we will first be asked to select which one we want to use.
For example, pressing the music icon, we will be asked whether to open Google Play Music or Spotify.

11_Android_auto_opelastra(Click image to enlarge)

Yes, Spotify is available (we can see it in the selection), and we choose it!
This is convergence between devices: by installing a single application, we get compatibility with both the phone and Android Auto.
Spotify here, of course, looks very similar to how it appears on our phone.


(Click image to enlarge)

WhatsApp is also available with Android Auto, and you can see how a small notification appears when we receive a message.
Touching the notification (on the display), the system reads the message to us via TTS (Text-To-Speech), and we can of course reply using the voice system.

We can also send a message by saying “Send Message” followed by the name of a contact.

14_Android_auto_opelastra(Click image to enlarge)

Received a message and need to reply without pressing keys on the touchscreen? If available, we can answer with a quick canned reply like “I’m driving, sorry!”, as WhatsApp offers.

15_Android_auto_opelastra(Click image to enlarge)

Don’t like Android Auto, or don’t want to use it anymore? You’re free to exit (even with the smartphone still connected).

16_Android_auto_opelastra(Click image to enlarge)


In my honest opinion, it’s really intriguing and nice. If Google Maps’ maps were kept as up to date as TomTom’s, it would be just about perfect.
And the possibility of developing your own applications for Android Auto, or of using existing ones for music, messages, and so on, is something really nice.

As for me, I like it and it deserves every success. Congratulations to Google!

Of course, if you’re interested in developing apps, want more details, or want to see who supports Android Auto, you can visit this website: https://www.android.com/intl/en/auto/.

Note: if you find something incorrect in what I said, or in my less-than-perfect English, please write in the comments. Thank you!

The article Test Android Auto – New Opel Astra [ENG] appeared first on Mirko Pizii | Grab The Penguin.

31 January, 2016 04:18PM

Stuart Langridge: What I Did On My Holidays, or, Pouring Out A Forty

Before we begin, thank you. Thank you, all, thank you, thank you…
Dr Hook, The Millionaire

Bad Voltage on stage

This has been a busy few weeks. Culminating in me becoming forty years of age, of which more later.

I went to the US to see Jono and Erica. Watched The Martian on the plane on the way out. It is a very excellent film indeed, and if you have not seen it, go and see it. And I got to hang out in Walnut Creek for a few days; I can recommend Sasa, Ike’s Sandwiches1, and Library on Main2 if you find yourself in town. And see my friends, of course. Dr3 Matthew Garrett introduced me to Longitude, the best cocktail bar in Oakland. And I scored a new laptop.

Aside: amusing story about the laptop. A few months ago I mentioned to Jono that my dad’s phone (a Moto G) was dying, and he said, hey, I’ve got a Samsung Galaxy S5 you can have to give to him if you want. Cool, said I, and handed over fifty euros for it4. Sadly, on returning to the UK, I discovered that the phone was locked to its US T-Mobile SIM and so my dad couldn’t use it. So this trip saw its return to the US so Jono could get it unlocked. He rings up T-Mobile USA, and the conversation went something like this5:

Jono: I would like you to give me the unlock code for this here Galaxy S5 that I bought from you
Helpful T-Mobile USA person: No, sir, we can’t do that
J: This is ridiculous. You phone operators are all terrible and try to lock in your customers. I know that phones are unlockable, and you’re just keeping this secret in a further attempt to deny me my rights over my purchased hardware. I can’t believe you’d keep lying about this; give me the unlock code, which I’m technical enough to know exists.
TM: No, sir, we’re not refusing to unlock the phone because we’re oppressing your hardware rights. We’re refusing to unlock the phone because you haven’t finished paying for it yet.
J: Really?
TM: Yup. You still have twelve months to go on the contract.
J: … oh. (turns to me) Do you want that laptop you borrowed instead?

So, result. Every one’s a winner. And it meant I had a machine that didn’t have to be plugged in all the time; my poor Dell M1330 had finally given up the ghost. Nice one, Jono.6

The real purpose for being in the US7 was to travel to SCaLE 14x. Did a couple of talks about Ubuntu phone stuff at the colocated “Ubucon”; one on adding analytics and advertising to Ubuntu phone apps (SCaLE video8) and one with Alan Pope about Marvin, our cloud testing service for phone apps (SCaLE video) (footnote ditto). I was one of the panelists on The Weakest Geek, a “quiz show” where as far as I can tell the rules are that quizmaster extraordinaire Gareth Greenaway asks various people increasingly hard questions about tech and sci-fi, and then Ruth Suehle wins. And there was the main purpose for my travel to SCaLE: Bad Voltage Live, our third live show and second at SCaLE. Video will be out tomorrow. It was a fun show to do, although rather dogged by problems with the AV. Still, we soldiered on, and at the after party a number of people said that they thought that our struggles with the audio and a Mac9 added to the comedy, so that’s OK. I’d like to say a special thank you to Linode for flying me out to LA10, the other sponsors for making the show possible, and to Tara for putting up with the dodgy software I wrote to run the Family Feud/Family Fortunes scoreboard for the show.

And the SCaLE team gave us some jerseys with our names on!

The four Bad Voltage presenters, being presented with branded SCaLE jerseys with names on the back

I feel a bit guilty about that. You see, we love SCaLE. The team try really hard to support Bad Voltage, and in return we use them shamelessly. We pressed Ilan into service as our audience fluffer, and made him wear lederhosen11, but we showed our gratitude with a decent bottle of bourbon in return for all his hard work. And we pressed Gareth into service as our Family Fortunes quizmaster, and made him wear a spangly gold quizmaster suit, and then showed our gratitude by getting him… a pink Hello Kitty stepstool so he can be even taller, and a pink Hello Kitty hat with movable ears. Sorry, Gareth. We love ya, buddy. And thank you both for everything.

But the real event for me was a bit at the end of the live show. You see, as of yesterday, as I post this, I am forty years old.

Forty. Cool, eh?

That makes all of this the 2016 iteration of the famous once-a-year birthday post (now in its 12th great year!), but my birthday this year was rather special. It appears to be what Cristian referred to as a “birthday week”, and I am perfectly happy with that. It started during the live show, where Jono had obviously done a whole bunch of behind the scenes hassling of lots of people to have them wish me happy birthday on video. I managed to resist the urge to actually cry on stage, but… not by much. Then a whole bunch of us went out on the Saturday night and drank yards of ale and then hit some sort of all-night pie shop12. The day before my birthday13 a whole bunch of us went out to Rub Smokehouse14 and then had celebratory beers15. Niamh and my parents and I went to Amantia for tapas16 and Niamh and I are going to Gordon Ramsay’s maze for sushi. I have a gorgeous new watch (a Roamer Ceraline Saphira, which I have not stopped constantly looking at since the moment it was strapped to my wrist, nor have I stopped telling people the time when they don’t want to know it). A copy of Watchmen which is signed by Dave Gibbons!17 A little model of Ron Weasley!18 A potato!

So I would like to say thank you. To all the people on Facebook, because once a year I get a million emails of people wishing me many happy returns. To Sam and Andrew. To the people on my birthday video: Rob McQueen, Matthew Walster, Jorge Castro, Rikki Endsley19, Ted Haeger, Adam Sweet20, Ron Wellsted, Bill21, Jono and Erica’s parents, Tarus Balog22, Ronnie Trommer, Jessi Hustace and the OpenNMS team, Erica Bacon23, Michael Hall24, Christian Heilmann, Cristian Parrino25, Alan Pope, Bruce Lawson, and Niamh. To the Bad Voltage team: Jono, Jeremy, and Bryan. To the Saturday night partiers26: Jono, Jeremy, Tara, Ilan Maru, Hannah Anderson, Ian Santopietro, popey, mhall, Pete and Amber Graner. To the Birmingham crew: Dan, Ebz, Kev, Matt Somerville27, Matt Machell, Charles, and Rich. To Mike, who is skiing. To Andy and Tom, who I’m seeing next weekend.28 To Jono, for everything, including a rather lovely blog post. To mum and dad. And to Niamh.

I have the best friends. I really do.

Rather enjoying being forty.

It’s 11.27, by the way.

my gorgeous new watch showing the time

  1. stupid sandwich names, great actual sandwiches
  2. used to be Eleve
  3. this is important
  4. technically, I paid for half his lederhosen instead
  5. a certain amount of artistic licence is, I admit, taken here
  6. more on the ‘nice one Jono’ front later, too
  7. sandwiches and cocktails aside
  8. with rather dodgy sound; more on the AV issues later
  9. followed by brutal berating from aforementioned Dr Garrett during Wrong in 60 Seconds
  10. never did get to go to JPL in Pasadena. Or Buffalo Wild Wings
  11. we find lederhosen way funnier than I think we should
  12. where I may have left my hat
  13. we skip over here a week of me suffering from the most brutal jet lag I have ever experienced. It was not a pretty sight.
  14. not actually recommended, unless you’re a professional stodge appreciator
  15. and Jura whisky. And Sambuca. And some other things
  16. also not actually recommended, it turns out. Go to La Tasca instead.
  17. nice one Charles!
  18. nice one Dan and Ebz!
  19. who, I am told, got stuck in that cemetery and couldn’t get out; cheers, Rikki
  20. I am now officially “the big ginger web lothario”
  21. sorry we haven’t managed to get together for beers, pal. It will happen. Promise. Plus, you’ve only got six months to go now…
  22. who is fifty! congrats!
  23. the “nearly crying on stage” thing? that was you, Erica
  24. I promise to look at the JS scopes stuff once I’m running 16.04
  25. I like the “decade of wisdom” thing. That sounds a lot like me, that
  26. hope you all enjoyed the gorilla joke
  27. who fixed traintimes.org.uk as a birthday present!
  28. as the birthday week turns into a birthday fortnight

31 January, 2016 03:30PM

Linux Padawan: Master Spotlight: Walter Lapchynski

Meet Walter Lapchynski, aka wxl! How did you first get started using Linux? What distros, software or resources did you use while learning? I started rolling my own kernels in Slackware on an old ThinkPad when there was really only one page on the Internet *briefly* dealing with the subject of running Linux on laptops. […]

31 January, 2016 02:52PM

Costales: My Ubuntu Phone is my 'mini' PC for traveling

I usually don't code on my travels :P so I don't need to carry my laptop.
But if I can, I like to keep a journal on my blog: upload a few pictures and share thoughts, mainly with myself.

The issue with a phone is that I type over ~450 keystrokes per minute on a real keyboard, and I really hate typing on a phone screen with just one finger.
So I bought a Bluetooth keyboard (€7.70) and mouse (€7.90). They arrived this week and I discovered a whole new Ubuntu Phone :O

Yes, I had watched videos and seen pictures of convergence on the web, but when I experienced it for myself on my BQ E4.5, everything changed :O :O

Terminal maximized in the background; Twitter & Music are the foreground windows. The mouse cursor is forcing the Unity launcher to show.

From now on, I'll travel with the keyboard, the mouse and the phone. My phone is now a real, small portable PC :))

31 January, 2016 10:34AM by Marcos Costales (noreply@blogger.com)

Aurélien Gâteau: An intro to git gui

I have been using git for years now, and I think I can say I know the tool quite well, yet I do all my commits with git gui. This often surprises my coworkers because a) it looks a bit ugly and b) it's a graphical application! The horror!

This is what it looks like:

git gui screenshot

Yes, it's indeed a bit ugly, thanks to its use of Tcl/Tk, just like its best-known sibling, gitk.

On the left side you can see two lists: the top list contains all your unstaged changes, and the bottom list contains all your staged changes (i.e. files which have been added with git add or removed with git rm).

The right side contains a large view showing the changes in the currently selected file, and at the bottom a text area where you can enter your commit message, as well as a few widgets to trigger various actions.

How does one use it? Easy: to stage a file for commit, click on the file's icon in the top-left list: the file disappears from the top list and appears in the bottom one. If you click on the file's name instead, it gets selected, and you can see its changes in the main area.

Why use git gui instead of the command line?

For a few reasons: first it provides an easy way to review your commits before they get in. I have often caught a debug line I forgot to remove or some added trailing spaces while going through my changes this way.

Second, and most importantly, it is much easier to do partial commits with git gui. Partial commits, if you are not familiar with them, are the ability to commit only parts of a file. This (slightly controversial) feature is useful for cleaning up a commit or for breaking a set of unrelated changes into separate commits. Often necessary when I land back on Earth after a frenzied coding session. It's also useful for splitting commits when doing an interactive rebase.

The command-line way to do so is git add -p, but that is really tedious because it shows one hunk at a time: you don't get a global view of all the changes. With git gui you can simply scroll through the diff, right-click on a change and select "Stage Hunk For Commit". If you change your mind, select the file in the Staged list, right-click the staged hunk and select "Unstage Hunk From Commit".

It's even better when you want to make finer-grained commits and stage only some lines: with git add -p you have to edit diffs. That is really inefficient and very error-prone. This is where git gui really shines: select the lines you want to commit (either additions or removals), right-click and select "Stage Lines For Commit". Done.

In this little animation I create two commits from my current changes:

Creating partial commits

It works the other way as well: stage the whole file or a few hunks, then right-click on that debug line or that extra blank line and select "Unstage Line From Commit".

Here I remove a debug line after staging all changes:

Removing a debug line

"But it's a graphical application, it can't be as fast as the command line!"

It turns out that, at least for me, git gui is fast enough. It starts up instantly and has a set of shortcuts which make it possible to do many operations without using the mouse. Here is the list of shortcuts I use most often:

  • Ctrl+T/Ctrl+U: Stage/unstage selected file
  • Ctrl+I: Stage all files (asks if you want to add new files if there are any)
  • Ctrl+J: Revert changes
  • Ctrl+Enter: Commit
  • Ctrl+P: Push

What about other frontends?

I must confess I haven't tried many other frontends. I played a bit with git cola a few years ago but I did not feel as productive as with git gui. There are probably nicer alternatives out there, but one of the main advantages of git gui is that it is an official part of Git, so it is available wherever Git is available. I have used git gui on Windows and Mac OS X: it works just like on Linux.

31 January, 2016 06:54AM

January 30, 2016

Elizabeth K. Joseph: Ubuntu at SCALE14x

I spent a long weekend in Pasadena from January 21-24th to participate in the 14th Annual Southern California Linux Expo (SCALE14x). As I mentioned previously, a major part of my attendance was focused on the Ubuntu-related activities. Wednesday evening I joined a whole crowd of my Ubuntu friends at a pre-UbuCon meet-and-greet at a wine bar (all ages were welcome) near the venue.

It was at this meet-and-greet where I first got to see several folks I hadn’t seen since the last Ubuntu Developer Summit (UDS) back in Copenhagen in 2012. Others I had seen recently at other open source conferences and still more I was meeting for the first time, amazing contributors to our community who I’d only had the opportunity to get to know online. It was at that event that the excitement and energy I used to get from UDS came rushing back to me. I knew this was going to be a great event.

The official start of this first UbuCon Summit began Thursday morning. I arrived bright and early to say hello to everyone, and finally got to meet Scarlett Clark of the Kubuntu development team. If you aren’t familiar with her blog and are interested in the latest updates to Kubuntu, I highly recommend it. She’s also one of the newly elected members of the Ubuntu Community Council.

Me and Scarlett Clark

After morning introductions, we filed into the ballroom where the keynote and plenaries would take place. It was the biggest ballroom of the conference venue! The SCALE crew really came through with support of this event, it was quite impressive. Plus, the room was quite full for the opening and Mark Shuttleworth’s keynote, particularly when you consider that it was a Thursday morning. Richard Gaskin and Nathan Haines, familiar names to anyone who has been to previous UbuCon events at SCALE, opened the conference with a welcome and details about how the event had grown this year. Logistics and other details were handled now too, and then they quickly went through how the event would work, with a keynote, series of plenaries and then split User and Developer tracks in the afternoon. They concluded by thanking sponsors and various volunteers and Canonical staff who made the UbuCon Summit a reality.

UbuCon Summit introduction by Richard Gaskin and Nathan Haines

The welcome, Mark’s keynote and the morning plenaries are available on YouTube, starting here and continuing here.

Mark’s keynote began by acknowledging the technical and preference diversity in our community, from desktop environments to devices. He then reflected upon his own history in Linux and open source, starting in university when he first installed Linux from a pile of floppies. It’s been an interesting progression to see where things were twenty years ago, and how many of the major tech headlines today are driven by Linux and Ubuntu, from advancements in cloud technology to self-driving cars. He continued by talking about success on a variety of platforms, from the tiny Raspberry Pi 2 to supercomputers and the cloud, Ubuntu has really made it.

With this success story, he leapt into the theme of the rest of his talk: “Great, let’s change.” He dove into the idea that today’s complex, multi-system infrastructure software is “too big for apt-get” as you consider relationships and dependencies between services. Juju is what he called “apt-get for the cloud/cluster” and explained how LXD, the next evolution of LXC running as a daemon, gives developers the ability to run a series of containers to test deployments of some of these complex systems. This means that just like the developers and systems engineers of the 90s and 00s were able to use open source software to deploy demonstrations of standalone software on our laptops, containers allow the students of today to deploy complex systems locally.

He then talked about Snappy, the new software packaging tooling. His premise was that even a six month release cycle is too long as many people are continuously delivering software from sources like GitHub. Many places have a solid foundation of packages we rely upon and then a handful of newer tools that can be packaged quickly in Snappy rather than going through the traditional Debian Packaging route, which is considerably more complicated. It was interesting to listen to this; as a former Debian package maintainer myself, I always wanted to believe that we could teach everyone to do software packaging. However, seeing these efforts play out in the community’s work with app developers, it became clear that between their reluctance and the backlog felt by the App Review Board, it really wasn’t working. Snappy moves us away from PyPI, PPAs and such into an easier, but still packaged and managed, way to handle software on our systems. It’ll be fascinating to see how this goes.

Mark Shuttleworth on Snappy

He concluded by talking about the popular Internet of Things (IoT) and how Ubuntu Core with Snappy is so important here. DJI, “the market leader in easy-to-fly drones and aerial photography systems,” now offers an Ubuntu-driven drone. The Open Source Robotics Institute uses Ubuntu. GE is designing smart kitchen appliances powered by Ubuntu, and many (all?) of the self-driving cars we know about use Ubuntu somewhere inside them. There was also a business model here: a company produces the hardware and the minimal feature set that comes with it, also sells a more advanced version, and industry-expert third parties build further upon it to sell industry-specific software.

After Mark’s talk there were a series of plenaries that took place in the same room.

First up was Sergio Schvezov who followed on Mark’s keynote nicely as he gave a demo of Snapcraft, the tool used to turn software into a .snap package for Ubuntu Core. Next up was Jorge Castro who gave a great talk about the state of Gaming on Ubuntu, which he said was “Not bad.” Having just had this discussion with my sister, the timing was great for me. On the day of his talk, there were 1,516 games on Steam that would natively run on Linux, a nice selection of which are modern games that are new and exciting across multiple platforms today. He acknowledged the pre-made Steam Boxes but also made the case for homebrewed Steam systems with graphics card recommendations, explaining that Intel did fine, AMD is still lagging behind high performance with their open source drivers and giving several models of NVidia cards today that do very well (from low to high quality, and cost: 750Ti, 950, 960, 970, 980, 980Ti). He also passed around a controller that works with Linux to the audience. He concluded by talking about some issues remaining with Linux Gaming, including regressions in drivers that cause degraded performance, the general performance gap when compared to some other gaming systems and the remaining stigma that there are “no games” on Linux, which talks like this are seeking to reverse. Plenaries continued with Didier Roche introducing Ubuntu Make, a project which makes creating a developer platform out of Ubuntu with several SDKs much easier so that developers reduce the bootstrapping time. His blog has a lot of great posts on the tooling. The last talk of the morning was by Scarlett Clark, who gave us a quick update on Kubuntu Development, explaining that the team had recently joined forces with KDE packagers in Debian to more effectively share resources in their work.
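For readers who didn't catch Sergio's Snapcraft demo: around this time a minimal snapcraft.yaml looked roughly like the sketch below, which running snapcraft would turn into a .snap. The name and metadata here are invented, and the schema has evolved across Snapcraft releases, so treat this as a flavour of the workflow rather than a reference.

```yaml
# Hypothetical example; all names and values are made up.
name: hello-xenial
version: "0.1"
summary: Tiny demonstration snap
description: A snap with a single do-nothing part, built with the nil plugin.

parts:
  hello:
    plugin: nil
```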

It was then time for group photo! Which included my xerus, and where I had a nice chat (and selfie!) with Carla Sella as we settled in for the picture.

Me and Carla Sella

In the afternoon I attended the User track, starting off with Nathan Haines on The Future of Ubuntu. In this talk he talked about what convergence of devices meant for Ubuntu and warded off concerns that the work on the phone was done in isolation and wouldn’t help the traditional (desktop, server) Ubuntu products. With Ubuntu Core and Snappy, he explained, all the work done on phones is being rolled back into progress made on the other systems, and even IoT devices, that will use them in the future. Following Nathan was the Ubuntu Redux talk by Jono Bacon. His talk could largely be divided into two parts: History of Ubuntu and how we got here, and 5 recommendations for the Ubuntu community. He had lots of great stories and photos, including one of a very young Mark, and moved right along to today with Unity 8 and the convergence story. His 5 recommendations were interesting, so I’ll repeat them here:

  1. Focus on core opportunities. Ubuntu can run anywhere, but should it? We have finite resources, focus efforts accordingly.
  2. Rethink what community in Ubuntu is. We didn’t always have Juju charmers and app developers, but they are now a major part of our community. Understand that our community has changed and adjust our vision as to where we can find new contributors.
  3. Get together more in person. The Ubuntu Online Summit works for technical work, but we’ve missed out on the human component. In person interactions are not just a “nice to have” in communities, they’re essential.
  4. Reduce ambiguity. In a trend that would continue in our leadership panel the next day, some folks (including Jono) argue that there is still ambiguity around Intellectual Property and licensing in the Ubuntu community (Mark disagrees).
  5. Understand people who are not us.

Nathan Haines on The Future of Ubuntu

The next presentation was my own, on Building a career with Ubuntu and FOSS where I drew upon examples in my own career and that of others I’ve worked with in the Ubuntu community to share recommendations for folks looking to contribute to Ubuntu and FOSS as a tool to develop skills and tools for their career. Slides here (PDF). David Planella on The Ubuntu phone and the road to convergence followed my talk. He walked audience members through the launch plan for the phone, going through the device launch with BQ for Ubuntu enthusiasts, the second phase for “innovators and early adopters” where they released the Meizu devices in Europe and China and went on to explain how they’re tackling phase three: general customer availability. He talked about the Ubuntu Phone Insiders group of 30 early access individuals who came from a diverse crowd to provide early feedback and share details (via blog posts, social media) to others. He then gave a tour of the phones themselves, including how scopes (“like mini search engines on your phone”) change how people interact with their device. He concluded with a note about the availability of the SDK for phones available at developer.ubuntu.com, and that they’re working to make it easy for developers to upload and distribute their applications.

Video from the User track can be found here. The Developer track was also happening, video for that can be found here. If you’re scanning through these to find a specific talk, note that each is 1 hour long.

Presentations for the first day concluded with a Q&A with Richard Gaskin and Nathan Haines back in the main ballroom. Then it was off to the Thursday evening drinks and appetizers at Porto Alegre Churrascaria! Once again, a great opportunity to catch up with friends old and new in the community. It was great running into Amber Graner and getting to talk about our respective paid roles these days, and even touched upon key things we worked on in the Ubuntu community that helped us get there.

The UbuCon Summit activities continued after a SCALE keynote with an Ubuntu Leadership panel which I participated in along with Oliver Ries, David Planella, Daniel Holbach, Michael Hall, Nathan Haines and José Antonio Rey with Jono Bacon as a moderator. Jono had prepared a great set of questions, exploring the strengths and weaknesses in our community, things we’re excited about and eager to work on and more. We also took questions from the audience. Video for this panel and the plenaries that followed, which I had to miss in order to give a talk elsewhere, are available here. The link takes you to 1hr 50min in, where the Leadership panel begins.

The afternoon took us off into unconference mode, which allowed us to direct our own conference setup. Due to the aforementioned talk I was giving elsewhere, I wasn’t able to participate in scheduling, but I did attend a couple of sessions in the afternoon. The first was proposed by Brendan Perrine; there we talked about strategies for keeping the Ubuntu documentation up to date, and also discussed the status of the Community Help wiki, which has been locked down due to spam for nearly a month(!). I then joined cm-t arudy to chat about an idea the French team is floating around to have people quickly share stories and photos about Ubuntu in some kind of community forum. The conversation was a bit tool-heavy, but everyone was also conscious of how it would need to be moderated. I hope I see something come of this, it sounds like a great project.

With the UbuCon Summit coming to a close, the booth was the next great task for the team. I couldn’t make time to participate this year, but the booth featured lots of great goodies and a fleet of contributors working the booth who were doing a fantastic job of talking to people as the crowds continued to flow through each day.

Huge thanks to everyone who spent months preparing for the UbuCon Summit and booth on the SCALE14x expo hall. It was a really amazing event that I was proud to be a part of. I’m already looking forward to the next one!

Finally, I took responsibility for the @ubuntu_us_ca Twitter account throughout the weekend. It was the first time I’ve done such a comprehensive live-tweeting of an event from a team/project account. I recommend a browse through the tweets if you’re interested in hearing more from other great people live-tweeting the event. It was a lot of fun, but also surprisingly exhausting!

More photos from my time at SCALE14x (including lots of Ubuntu ones!) here: https://www.flickr.com/photos/pleia2/albums/72157663821501532

30 January, 2016 11:40PM

Linux Padawan: Master Spotlight: Na3iL

Today we interview Na3iL   1) How did you first get started using Linux? What distros, software or resources did you use while learning? When I was 12 years old, I was very interested in learning about security & hacking stuff. Thus, after a while searching the web for a good OS that protects you while […]

30 January, 2016 11:05PM