December 22, 2014

hackergotchi for Webconverger


Webconverger 27 release

27 has the new logo

Webconverger wishes you a happy New Year and safer surfing/travels. Highlights of 27.1:

27 has the freefont for Gujarati glyphs

Detailed changelog between 26 & 27.1.

The sha256sum is 9eb27578be2a4b846389a161197e1979e3d95c6a8724d62c5a2d0c175209b64c webc-27.1.iso, please download from our CDN.
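To check the download, `sha256sum` over the ISO (filename as published above) prints the hash to compare against the one in these release notes:

```shell
# Print the SHA-256 of the downloaded image; it should match the
# checksum published above before you boot or write the ISO.
sha256sum webc-27.1.iso
```

Saving the published line to a file and running `sha256sum -c` on it also works.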

22 December, 2014 05:53AM

hackergotchi for TurnKey Linux

TurnKey Linux

Less is more and the magic number is four

Remember this post's title. It not only rhymes. It's the law!

Sometimes the truth is a bit counterintuitive. Conventional thinking is that more is better. More features. More choices. More options. More is more, right?

When we first tried redesigning the Hub's front page we made this mistake. We were so proud of all the big and small features that made the Hub easy to use that we listed all of them. As a big wall of text, no less. In retrospect I don't know what we were thinking.

The magic number is four

Lucky for us, one of my friends who has a bit more experience as a user interface designer agreed to take a look. He was flabbergasted that we listed so many features. He told us to "Pick the top 4 features. Leave the rest out".

We were damn proud of our features so we didn't give them up without a fight. We tried "negotiating" for a middle ground between the number he proposed and the original number of features.

He saved us from ourselves by patiently explaining why the middle ground didn't matter:

This isn't about compromise, it's about cognition. If you initially wanted 20 items and I wanted 4, that wouldn't make the right number 12.

Like any good Wikipedian, he provided a citation explaining what was so magical about the number four:

"You will find that the largest number the human brain can comprehend without counting or guessing is 4. Beyond that most people can identify 5 elements in a group by quickly counting them; everything beyond 5 can only be a guess, unless there is enough time for a count."

This was no coincidence, it was evolution.

So an overview item, plus 4 sub-items was the maximum we could go without inducing conscious effort. Requiring conscious effort would reduce engagement. You'll have people reading less content, not more, because they have a fixed attention budget and parsing the number of elements has already tired them.

More choices, less action

Another reason that "less is more" in user interface design is that information overload tends to lead to paralysis. More information is not always better. Often quite the contrary.

This is another one of those counterintuitive truths backed by hard research that anyone designing user interfaces should be keenly aware of.

It explains why giving users more choice actually reduces action.

22 December, 2014 05:20AM by Liraz Siri

December 21, 2014

hackergotchi for Maemo developers

Maemo developers

2014-12-16 Meeting Minutes

Meeting held 2014-12-16 on FreeNode, channel #maemo-meeting (logs)

Oksana Tachenko (Oksana),
Gido Griese (Win7Mac),
Jussi Ohenoja (juiceme), Philippe Coval (RzR), Peter Leinchen (peterleinchen)


Summary of topics (ordered by discussion):

  • Council Election
  • HiFo board elections
  • IRC, Maemo e.V., Miscellaneous

Topic (Council Election):

  • This meeting was again ruled by the discussion about upcoming council election.
  • Juiceme checked karma for all nominated members and all have a karma > 100, so all are eligible. We have 11 candidates:
    • chainsawbike
    • gerbick
    • HtheB
    • Juiceme (IRC)
    • klinglerware
    • mentalisttraceur
    • mr_pingu
    • peterleinchen
    • pichlo
    • reinob
    • wikiwide
  • Furthermore juiceme is prepared to start the voting machine for the election.
  • Wiki pages are set up and up-to-date. A huge thanks to sixwheeledbeast (for setting up) and also RzR (for filling).
  • RzR announced contemplation period on tmo and peterleinchen on mo (news/announcement) and mailing list.
  • There was a discussion about advertising Maemo e.V. membership during current election period. Board should contact candidates and ask about their plans/concerns regarding Maemo e.V.
  • All candidates will be invited to join the #maemo-meeting channel to hold a "talk", and kindly reminded to declare themselves on the wiki and also on tmo.

Topic (HiFo board elections):

  • HiFo board elections are overdue (once a year). Current members are: Jaffa, GeneralAntilles, chem|st and Win7Mac.
  • As the council is in charge of announcing elections, the current board simply continues, and should do so until the transfer from HiFo to Maemo e.V. has been done.

Topic (IRC, maemo e.V., Miscellaneous):

  • No news from GeneralAntilles regarding the IRC chan op situation.
  • The action item "sub pages on m.o for e.V." was deleted as we do have dedicated e.V. pages on the wiki.
  • A coding competition -to be organized by the next council- was mentioned, with a short discussion about devices/prizes/donations.
  • Also discussed: the still-misleading "donate" banner. Since the bank account has still not been created, nothing has been done here so far.
  • The list of Maemo e.V. members might be published, stripped down to show only nicknames. To be confirmed later.
  • The selected Code of Conduct (KDE) still needs to be published on mo.

Action Items:
  • -- old items:
    • Check if karma calculation/evaluation is fixed. - Karma calculation should work, only wiki entries (according to Doc) not considered. Works (might be cross-checked...)
    • NielDK to prepare a draft for letter to Jolla. - Obsolete
    • Sixwheeledbeast to clarify the CSS issue with techstaff. - Done
    • juiceme to create a wording draft for the referendum (to be counterchecked by council members). - See
    • Peterleinchen to announce resignation of DocScrutinizer*/joerg_rw from council. - Done
    • Everybody to make up their own minds about referendum and give feedback.
    • RzR to contact Doc (neo900) and smokku (cordia) - Done
    • Peterleinchen to announce the next council election - Done
    • Juiceme and chemist to clarify the bank account situation
    • Council to clarify with HiFo board about the upcoming board election - Council is in charge to announce/prepare the council/board/... elections
    • Juiceme to check/recalculate the karma points manually for all members where needed - Done, all are eligible
  • -- new items:
    • Next week's tasks: referendum, preparing election

21 December, 2014 09:07PM by Peter Leinchen

hackergotchi for Xanadu developers

Xanadu developers

Tor Browser, a Firefox derivative for browsing the web anonymously

Today I'll tell you about a browser that little by little has earned a place among the greats. It's called Tor Browser, and it's based on slightly modified Firefox ESR code that incorporates Tor, ensuring we can browse the web anonymously while keeping all the benefits of Firefox.

Installing it on Linux is not complicated; on Ubuntu/Mint you only need to add a PPA and then install it via Apt.

# add-apt-repository ppa:webupd8team/tor-browser
# apt update
# apt install tor-browser

For other distributions, the procedure is as follows.

  • Download the package for your language from here.
  • Extract the package like this on 32-bit systems.
tar -xvJf tor-browser-linux32-4.0.2_LANG.tar.xz
  • Or like this on 64-bit systems.
tar -xvJf tor-browser-linux64-4.0.2_LANG.tar.xz
  • Then open a terminal, change into the extracted folder, and launch the browser as follows.

As you can see, installation is quite simple. It's worth noting that some distributions ship Tor Browser by default; the best-known examples are Tails and Whonix.



Tagged: firefox, tor

21 December, 2014 07:25PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: U-Day - 47. Launch event for the first Ubuntu phone

47 days remain until a new team enters the league of the stars. And those days will surely fly by.

I find Michael Hall's position spot on: reaching just 1% of the mobile phone market would be a milestone and would comfortably sustain all of Canonical financially.

This will be the BQ phone with Ubuntu that goes on sale in February
Nowadays, with dozens of manufacturers, what decides who wins the mobile phone league is not who has the best hardware; it will be won by whoever has the best software.
And a phone with Ubuntu will stand out for several unique features that could make it a champion: its spur will be the freedom we seek so much, it will squeeze the hardware by avoiding running through a virtual machine, an app developed for the phone will run perfectly integrated on the desktop, and, while we're dreaming, maybe the fantastic Ubuntu for Android project will rise from its ashes.

From February 6th on, there will be a before and an after for the Linux world... We are new and rookies, but this league has already seen true giants fall...
It's time to learn from the champions and carve our own path towards a difficult, but not impossible, goal.
Let's go, team!!!


Keep reading on my blog about Ubuntu Phone.

21 December, 2014 06:39PM by Marcos Costales

Ronnie Tucker: Year End Core Apps Hack Days Announced for Ubuntu Touch

Canonical is looking to improve the core apps that are already available for Ubuntu Touch and is organizing a new Core Apps Hack Days event that should galvanize the efforts of more developers towards this platform.

Native apps are what Ubuntu Touch needs more than anything and that’s because the team can only deal with the operating system, but the rest of the ecosystem has to come from third-party developers who need to take the rest of the journey.

The guys and gals who build Ubuntu Touch do provide a number of apps, like the Gallery or the Browser, but they can't spread their efforts in all directions. This is where Core Apps Hack Days comes into play.


Submitted by: Silviu Stahie

21 December, 2014 03:38PM

hackergotchi for HandyLinux developers

HandyLinux developers

death of the devblog

hi :)

eve of the holidays, and a realization (another one, cool, I'm not lost) that I need to separate my activities on the web. at least, for me.

er... and?

well, simple: my dev activities stay at HandyLinux, i.e. the tips in the wiki and the dev announcements directly on the handylinux blog.

as for the rest, my "personal" posts, I'm opening a new blog, where I can feel like myself.

the live-build dev posts will soon be obsolete anyway with the move to Debian Jessie... no regrets ;)



21 December, 2014 01:03AM by arpinux

hackergotchi for Ubuntu developers

Ubuntu developers

Randall Ross: Why Smart Phones Aren't - Reason #6

You dutifully inform me that I have a voice message waiting. You carefully protect me from some unknown threat by forcing me to type in a voice mail password that I can never remember, *every* time. You eat my minutes to hear someone say "Call me back, blah, blah, blah."

"Smart" phone, why are you wasting my time?

You see, I never wanted your voice mail anti-feature. You assumed I did. I gave you my voice mail password more than a few times. Why did you not remember it? Why can you not just inform people that there are better ways to communicate?

"Smart" phone, you have not progressed since the '90s. I'm tempted to dump you.

In fact, I already have plans...

In my lifetime I will actually see a phone that is truly smart. When the Ubuntu Phone arrives, the world will have the means to finally fix this and other issues once and for all.

"Smart" phone, your days are numbered.


Image: Ribbit, CC BY-NC 2.0

21 December, 2014 12:24AM

Riccardo Padovani: New blog

Hey folks, after a long time I'm back to write something about the changes I made to my blog, and how I plan to use it in the coming months.

From Wordpress to Jekyll

So, first thing: I moved away from Wordpress! I find Wordpress annoying and too big for what I need: a place where I can write, without anything else.

I had stopped writing because I found it a pain to write a new post in Wordpress: the editor isn't that good, and the admin interface is really slow.

Some months ago an Italian tech blogger, Alessio Biancalana, moved from Wordpress to Jekyll. I was curious, so I tried it, and it's awesome! The only thing you need is a text editor and BOOM, you're ready.

Also, you write posts in Markdown, and Markdown rocks! The site is hosted on GitHub Pages, so my workflow is definitely better: write a post, git add, git commit, git push, and GitHub takes care of the rest!
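That loop can be sketched end to end; the repository setup below exists only to make the sketch self-contained, and the file name and commit message are illustrative:

```shell
# Sketch of the Jekyll-on-GitHub-Pages workflow: write Markdown, commit, push.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "me@example.com"
git config user.name "me"
mkdir -p _posts
cat > _posts/2014-12-21-new-blog.md <<'EOF'
---
layout: post
title: "New blog"
---
Written in Markdown, and Markdown rocks!
EOF
git add _posts/2014-12-21-new-blog.md
git commit -q -m "New post: new blog"
# git push origin master   # on a real GitHub Pages repo this publishes the post
git log --oneline
```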


Ok, you now have a new blog, and it's cool. But what about content? A blog without content is useless!

Well, I want to try to use the blog to post all the things I used to post on Google+. In recent months I did a lot of things (attended a Canonical sprint, wrote a scope for Ubuntu for Phones, improved bookmarks in the Ubuntu Browser, was invited to the launch of the first Ubuntu smartphone, and so on) but I wrote about them only on Google+, and that's a shame. Google+ is cool for a lot of things, but your content belongs to it: anyone without an account can't read it, and things fly away after a few days.

My plan is to write here all the posts I usually write on G+, so they will also be on Ubuntu Planet :-)


As you may have noticed, there isn't a comment section. Well, Jekyll is a static site generator, so all the solutions out there are based on Disqus or some other cloud service. I really don't like them, so for the moment I prefer not to implement comments.

I found a plugin for Jekyll to manage them in a nice way; I'll try to implement it during the Christmas holidays.

Meanwhile, if you have any feedback, write me a mail, or leave a comment on G+ :-D


Another thing I added in the restyling of the website is a donation page. I'm a university student and I don't have any income; please help me have the free time to do what I love most: helping others by developing. The first donations will be used to buy a VPS to host this blog (see below).


I talked a lot about privacy in previous posts, and yet now I've moved my content from my own hosting to GitHub. What?

This is because I like Jekyll very much, and to use it I need Ruby, and I don't have a VPS to install it on. So I prefer a blog I'm happy to write on, hosted on a GitHub server, over a blog without content hosted on my own hosting.

But if I’ll take some money with donations I’ll buy a VPS, so I could also manage a mail server - at the moment I use an european provider - ovh - but I want to have full control on the mails.

So, that's all for now, and I hope this is the first of a long series of posts.

Enjoy your holidays!

21 December, 2014 12:00AM

December 20, 2014

Ronnie Tucker: Elive OS Is a Unique Debian and Enlightenment Combination

Elive, a Linux distribution based on Debian that uses the Enlightenment desktop environment to provide a unique user experience, is now at version 2.4.6 and the developers are getting closer to a stable release.

Elive is a different kind of operating system and it will require the user to be a little open-minded, because this distro provides an interesting desktop experience. There are very few OSes out there that even share the same kind of desktop, so it's easy to say that it provides something unique.

The Enlightenment DE is mostly responsible for this, but credit also goes to the devs, who managed to make all the necessary changes to turn this into something special.


Submitted by: Silviu Stahie

20 December, 2014 02:36PM

Costales: Dear Santa Claus...

I know, I know, you can't leave an awesome Ubuntu Phone yet... ;)

But... What about a nice accessory now?

Or maybe... could you come back this February? >:P

20 December, 2014 10:56AM by Marcos Costales

Ronnie Tucker: Packt 5 Dollar Deal


Following the success of last year’s festive offer, Packt Publishing will be celebrating the holiday season with an even bigger $5 offer.

From Thursday 18th December, every eBook and video will be available on the publisher’s website for just $5. Customers are invited to purchase as many as they like before the offer ends on Tuesday January 6th, making it the perfect opportunity to try something new or to take your skills to the next level as 2015 begins.

With all $5 products available in a range of formats and DRM-free, customers will find great value content delivered exactly how they want it across Packt’s website this Xmas and New Year.

Find out more HERE.

FCM makes no money from this, but Packt Publishing support FCM with review copies of books. So, in return, please support Packt. They’re good people.

20 December, 2014 10:24AM

hackergotchi for Xanadu developers

Xanadu developers

Join GridRepublic and help save the world

In a world where we are ever more connected, it's no surprise that initiatives like GridRepublic appear: an account manager for BOINC that lets us manage our accounts across the different projects we take part in, in a simple way, while keeping control over the different computers attached to our projects.

The post Instalar BOINC en Debian y derivados explained how to install BOINC, which lets us donate computing power to research projects for the benefit of humanity (while the PC is not in use). The projects listed on GridRepublic currently have at their disposal a computing capacity of 16,490 Tflops, more than the computing capacity of the "K computer", which sits in fourth place in the top 500 supercomputers of the world.

There are plenty of projects waiting for your collaboration; among the most notable are:

  • Rosetta@home studies cures for human diseases through proteins.
  • models climate change.
  • LHC@home helps process the enormous amount of data generated at CERN.
  • World Community Grid, an IBM project researching cancer and DNA-related diseases.

What are you waiting for? Create your GridRepublic account and do your bit by collaborating with any of the available projects.


Tagged: boinc, colaborar, grid

20 December, 2014 03:18AM by sinfallas

December 19, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S07E38 – The Last One

We’re back with the final episode of Season Seven: It’s Episode Thirty-eight of the Ubuntu Podcast! Alan Pope, Laura Cowen, Tony Whitmore and Mark Johnson are all present with mince pies and brandy butter to bring you this episode.

In this week’s show:

  • We interview Michael Hall about Ubuntu Incubator.

  • We also discuss:

  • We share some Command Line Lurve which helps you find out where your disk space has gone:
  • We review the predictions we made last year and rashly make some more about 2015. We also reveal which of the podcast team is leaving the show.

  • And we read your feedback. Thanks for sending it in!

That’s all for this season, but while we are off the air, please send your comments and suggestions to:
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

19 December, 2014 05:00PM

Oli Warner: Streaming your Kodi library over the Internet

Even if you're leaving your media centre behind while you travel this Christmas, you don't have to be without all your media. I'm going to show you that in just a few steps you can access all your TV and movies remotely on an Android device over a nice, secure SSH connection.

I recorded a short demo showing me playing a video from Kodi over HSDPA. It's boring to watch, but it's proof this is possible.

Before we get too carried away, you need a few things:

  • A fast internet connection is essential. There's a fair amount of overhead in SSH, so anything under 5Mbit upload is going to be unbearable. You need cable, VDSL or another good, low-latency connection.

  • A stable remote connection. 3G won't cut it. HSPA+ works. Good wifi is best. Also check you're not going to destroy your mobile data allowance.

  • An Ubuntu-based media centre. You can probably do this with any *nix but Ubuntu's the best, right?

  • Android. This isn't just my bigotry, iOS cuts the connection after a few minutes making it useless for long viewing.

If you have all that, let's get started.

Install a SSH server on your Kodi machine

So you've got a media centre running Ubuntu. If you haven't already, let's install the SSH server:

sudo apt-get install ssh

Before we make this accessible on the internet, we need to make the SSH server secure. Use key-based auth, disable password auth, use a non-standard port, install fail2ban, etc. Do not skip this.
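Concretely, the hardening above might look like this in /etc/ssh/sshd_config (every value here is illustrative; pick your own port and username, then restart ssh):

```
Port 22000                  # non-standard port
PasswordAuthentication no   # key-based auth only
PermitRootLogin no
AllowUsers kodi             # hypothetical media-centre user
```

fail2ban (`sudo apt-get install fail2ban`) then covers brute-force attempts with its default SSH jail.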

If SSH can be insecure, why are we using it at all? We could just forward ports straight through to Kodi... but I wouldn't want to risk exposing it directly to the internet. OpenSSH has millions of active deployments. It's battle-tested in a way Kodi could only dream of. I trust it.

Expose SSH to the internet

Almost all of you will be behind a NAT router. These split a public IP into a subnetwork of private IPs. One side effect is computers on the outside can't directly address internal computers. We need to tell the router where to send our connection when we try to log in with SSH.

The process is subtly different for every router but if you don't know what you're doing, there's a guide for just about every router on We just want to forward whatever port you assigned to your SSH server when hardening it above.

Now you can connect to your public IP from outside the network on your SSH port. However most consumer routers won't do port forwarding from inside the internal network. You'll need another connection to test it or you could use an online port tester to probe your SSH port.
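If you'd rather check from a shell than a web-based tester, a quick probe from any machine outside your network works too (the hostname and port here are illustrative):

```shell
# Reports "port open" if the router forwards the port to a listener:
nc -z -w 5 your-ddns-name.example.com 22000 && echo "port open" || echo "port closed"
```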

Using dynamic DNS to keep track of our network

If you have a static IP, skip this; but most home connections are assigned an IP address that changes frequently. That makes it hard to know where to SSH to once we're outside the network. A "dynamic DNS" service makes sure there's a domain name that always points at our external IP address.

No-IP has a free service and a Linux client. There are many other services out there.

By the end of this step you should have a domain name (eg and something running regularly on the media centre that keeps this DNS updated to our latest IP.

Install Android packages

We need a few things on our client phone:

ConnectBot is completely free while Yaste and MX Player both have free versions with unlockable features. You shouldn't need to pay any money to test this out though I do recommend paying for Yaste because it's that good.

Connect to SSH and set up our tunnels

We'll start in ConnectBot. We need to start by generating a keypair. This is what will allow us to log into the SSH server. This guide has the full process but in short: generate a keypair, email yourself the public key and copy that into ~/.ssh/authorized_keys2 on the server.

Then we can create a new connection. On the ConnectBot home screen just punch (obviously replacing each bit with your actual username, domain and port respectively). Assuming that all works, disconnect and long press the new connection on the home screen and select "Edit port forwards". We want to add two ports:

  • HTTP, port 8080 mapped to localhost:8080
  • DLNA, port 9777 mapped to localhost:9777
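For reference, those two forwards are the ConnectBot equivalent of what a desktop OpenSSH client would express as (user, hostname and port are all illustrative):

```shell
# Tunnel Kodi's HTTP interface (8080) and the DLNA/event port (9777)
# through the hardened SSH server:
ssh -p 22000 \
    -L 8080:localhost:8080 \
    -L 9777:localhost:9777 \
    user@your-ddns-name.example.com
```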

Reconnect and leave it in the background. We'll connect to this now with Yaste.

Create a Yaste host using our tunnels

Open Yaste and open the Host Manager. Create a new host. It probably won't detect the tunnels so skip the wizard. When asked, use localhost as the IP and 8080 as the port. It will test the connection before it lets you add it.

Sync your library (long press the item on the sidebar), select the "Play locally" toggle and then you'll be able to stream things over the internet! It may buffer a little if you're on a slow connection but it should work. Alternatively, you can download files from the Kodi server using Yaste which might be a little more predictable on a spiky connection.

19 December, 2014 04:55PM

Ronnie Tucker: Opera 26 released. Install it on Linux Mint 17.1 and Ubuntu 14.10

I don’t quite remember the last time I used Opera browser, but it’s been a very long time ago. I didn’t even think that the company is still developing a Linux version.

So I was surprised when I read that Opera 26 has been released for Windows, Mac and Linux. Even more surprising is this line from the FAQ about Opera for Linux: “Yes, all of the major features found in Opera for Windows and Mac are also available to Linux users, including: Speed Dial, the Discover feature, Opera Turbo, bookmarks and bookmark sharing, themes, extensions and more.”


Submitted by: LinuxBSDos

19 December, 2014 02:35PM

hackergotchi for Cumulus Linux

Cumulus Linux

Juniper OCX – Welcome to the Revolution

In early December 2014, Juniper announced their OCX products, which are focused on open, disaggregated networking systems. As one of the instigators of the revolution, we find it intriguing to see which side Juniper is really on.

While competing with Juniper will be interesting, we’re happy to see them recognize the customer drive towards Open Networking.  Juniper indicates that they are joining the ranks of start-ups like Cumulus Networks and industry leaders such as Dell in this inevitable industry transition… avoiding the “head in the sand” perspective maintained by some other networking vendors.

There were four main sources of information as part of the announcement.

Initial reading shows us a focus very aligned with Open Networking. They say things like…

Juniper announced the OCX1100 that combines … Junos® operating system with Open Compute Project (OCP) enabled hardware

Let me say that again: Customers will have the ability to remove Junos and deploy another vendor’s operating system

To some not familiar with Juniper, news that we are embracing an open hardware design might sound counterintuitive in that anything “open” is not aligned with our strategy. On the contrary, Juniper has always embraced open architectures and open protocols.

All these statements signal exciting times ahead.

Seeds of Doubt

Closer reading leads us to realize that, perhaps, we’ll have to wait until the product is really available to understand where Juniper is heading. Based on what’s been published, there is room for interpretation.

Alpha Networks will be Juniper’s ODM

Juniper uses many ODMs to build their product portfolio, and seemingly the OCX is no exception; note that customers still acquire both the HW and SW from Juniper... this isn't really disaggregation. Cumulus Linux and existing Open Networking hardware are available via a variety of channels, both standalone and bundled.

With the OCX platform, customers will have the ability to remove Junos from the hardware and deploy another vendor’s operating system

 Juniper hasn’t indicated whether this hardware will be available without Junos installed, in fact, they also say ”The Juniper OCX1100-48SX is a 1RU network switch that’s designed to operate in the access layer. It ships from the factory with ONIE and a new, lightweight version of Junos that’s optimized for building IP Fabrics.” This seems to be contrary to the premise of customers purchasing hardware and software from their choice of suppliers.

We have built an optimized version of Junos specifically targeting those customers who build large IP-based fabric architectures. We have modified the functions of Junos, to enable those customers to utilize only the capabilities they need for this very specific business case.

So this really isn’t Junos; it’s a stripped down, modified version of Junos. What have they removed? Is this version of Junos available for other hardware platforms?  Is Juniper trying to prop up their black-box product portfolio (EX and QFX)?

Smaller-volume purchases will be priced comparably to Juniper’s internally designed top-of-rack switches, Davidson says.

Hmm... where is the customer gain here? They get a stripped-down OS and “commodity” hardware for the same price as a full-featured OS and custom hardware.

Juniper has arrived first with the most, and we intend to stay there.

The “first” part of that statement is a bit of a stretch. Cumulus Networks arrived first in June of 2013, and our software guides over a million ports in production globally. Accton, Quanta, Penguin and Delta Networks have been providing open hardware to customers for a long time. Dell was the first tier 1 networking vendor to embrace Open Networking and they’ve fully committed their product line to the initiative.

The “most” part is also a bit challenging. The Juniper deep dive includes an OCX/QFX comparison list of 23 items. Of these, there are 2 where OCX is “more” than QFX, 6 where OCX is “same” as QFX, and 15 where OCX is “less” than QFX. We'll all have to wait to see how this compares to the rest of the market.

Let’s just say 2015 will be a fun year

It’s clear that customers are demanding Open Networking from their suppliers, and the structure of the networking industry is at the beginning of a large change.  Industry analysts are tracking disaggregated technology under the terms “whitebox”, “brite-box”, and “bare-metal”, and these people don’t move unless there are numbers to back them up.

We at Cumulus Networks anticipated this change and have developed the technology, processes, partnerships, and business model that reflect these directions. We're focused on speed of innovation, vendor choice, and affordable capacity.

The post Juniper OCX – Welcome to the Revolution appeared first on Cumulus Networks Blog.

19 December, 2014 02:30PM by Reza Malekzadeh

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: AWSnap! Snappy Ubuntu Now Available on AWS!

Awww snap!

That's right!  Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.

And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
    "Images": [
        {
            "ImageType": "machine",
            "Description": "ubuntu-core-devel-1418912739-141-amd64",
            "Hypervisor": "xen",
            "ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
            "SriovNetSupport": "simple",
            "ImageId": "ami-5c442634",
            "RootDeviceType": "instance-store",
            "Architecture": "x86_64",
            "BlockDeviceMappings": [],
            "State": "available",
            "VirtualizationType": "hvm",
            "Name": "ubuntu-core-devel-1418912739-141-amd64",
            "OwnerId": "649108100275",
            "Public": false
        }
    ]
}
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
    "ReservationId": "r-c6811e28",
    "Groups": [
        {
            "GroupName": "default",
            "GroupId": "sg-d5d135bc"
        }
    ],
    "OwnerId": "357813986684",
    "Instances": [
        {
            "KeyName": "id_rsa",
            "PublicDnsName": null,
            "ProductCodes": [],
            "StateTransitionReason": null,
            "LaunchTime": "2014-12-18T17:29:07.000Z",
            "Monitoring": {
                "State": "disabled"
            },
            "ClientToken": null,
            "StateReason": {
                "Message": "pending",
                "Code": "pending"
            },
            "RootDeviceType": "instance-store",
            "Architecture": "x86_64",
            "PrivateDnsName": null,
            "ImageId": "ami-5c442634",
            "BlockDeviceMappings": [],
            "Placement": {
                "GroupName": null,
                "AvailabilityZone": "us-east-1e",
                "Tenancy": "default"
            },
            "AmiLaunchIndex": 0,
            "VirtualizationType": "hvm",
            "NetworkInterfaces": [],
            "SecurityGroups": [
                {
                    "GroupName": "default",
                    "GroupId": "sg-d5d135bc"
                }
            ],
            "State": {
                "Name": "pending",
                "Code": 0
            },
            "Hypervisor": "xen",
            "InstanceId": "i-af43de51",
            "InstanceType": "m3.medium",
            "EbsOptimized": false
        }
    ]
}
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@
ssh: connect to host port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@
The authenticity of host ' (' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation:

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch ‘/foo’: Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage:snappy [-h] [-v]

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

rollback undo last system-image update.
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
Connection to closed.
kirkland@x230:/tmp⟫ ec2-instances
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down


19 December, 2014 02:01PM by Dustin Kirkland (

hackergotchi for SparkyLinux


SparkyLinux 3.6 Enlightenment19, JWM, Openbox & CLI

SparkyLinux 3.6 Enlightenment19, JWM, Openbox and CLI is out. The ISO images of Sparky 3.6 e19, JWM and Openbox belong to the Base Edition. The Base Edition features a core system, X server, window manager, web browser, text editor and a few tools. All the ISO images of Base Edition 3.6 are codec-free. It means that […]

19 December, 2014 12:04PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Vivid Vervet Alpha 1 Released

"How much wood could a woodchuck chuck if a woodchuck could chuck wood?"
– Guybrush Threepwood, Monkey Island

The first alpha of the Vivid Vervet (to become 15.04) has now been released!

This alpha features images for Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin and the Ubuntu Cloud images.

Pre-releases of the Vivid Vervet are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Vivid Vervet. In particular, once newer daily images are available, system installation bugs identified in the Alpha 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.


Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Alpha-1 images can be downloaded at:

More information on Kubuntu Alpha-1 can be found here:


Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Alpha 1 images can be downloaded at:

More information on Lubuntu Alpha-1 can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME desktop environment.

The Alpha-1 images can be downloaded at:

More information on Ubuntu GNOME Alpha-1 can be found here:


UbuntuKylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Alpha-1 images can be downloaded at:

More information on UbuntuKylin Alpha-1 can be found here:

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

Regular daily images for Ubuntu can be found at:

If you’re interested in following the changes as we further develop Vivid, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Jonathan Riddell, on behalf of the Ubuntu release team.

Originally posted to the ubuntu-devel-announce mailing list on Thu Dec 18 22:17:15 UTC 2014 by Jonathan Riddell

19 December, 2014 05:35AM

Thomas Ward: NGINX: Mitigating the BREACH vulnerability

This post serves as a notice regarding the BREACH vulnerability and NGINX.

For Ubuntu, Debian, and the PPA users: If you are on 1.6.2-5 (or 1.7.8 from the PPAs), the default configuration has GZIP compression enabled, which means it does not mitigate BREACH on your sites by default. You need to look into whether you are actually impacted by BREACH, and if you are, consider mitigation steps.

What is it?

Unlike CRIME, which attacks TLS/SPDY compression and is mitigated by disabling SSL compression, BREACH attacks HTTP responses. These are compressed using common HTTP compression, which is much more widespread than TLS-level compression. This allows essentially the same attack demonstrated by Duong and Rizzo, but without relying on TLS-level compression (as they anticipated).

BREACH is a category of vulnerabilities and not a specific instance affecting a specific piece of software. To be vulnerable, a web application must:

  • Be served from a server that uses HTTP-level compression
  • Reflect user-input in HTTP response bodies
  • Reflect a secret (such as a CSRF token) in HTTP response bodies

Additionally, while not strictly a requirement, the attack is helped greatly by responses that remain mostly the same (modulo the attacker’s guess). This is because the difference in size of the responses measured by the attacker can be quite small. Any noise in the side-channel makes the attack more difficult (though not impossible).

It is important to note that the attack is agnostic to the version of TLS/SSL, and does not require TLS-layer compression. Additionally, the attack works against any cipher suite. Against a stream cipher, the attack is simpler; the difference in sizes across response bodies is much more granular in this case. If a block cipher is used, additional work must be done to align the output to the cipher text blocks.

How practical is it?

The BREACH attack can be exploited with just a few thousand requests, and can be executed in under a minute. The number of requests required will depend on the secret size. The power of the attack comes from the fact that it allows guessing a secret one character at a time.

Am I affected?

If you have an HTTP response body that meets all the following conditions, you might be vulnerable:

  • Compression – Your page is served with HTTP compression enabled (GZIP / DEFLATE)
  • User Data – Your page reflects user data via query string parameters, POST …
  • A Secret – Your application page serves Personally Identifiable Information (PII), a CSRF token, sensitive data …


NOTE: The BREACH Attack Information Site offers several tactics for mitigating the attack. Unfortunately, no clean, effective, practical solution to the problem is known. Some of these mitigations are more practical than others: a single change can cover entire apps, while others are page specific.

The mitigations are ordered by effectiveness (not by their practicality – as this may differ from one application to another).

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests.

Whichever mitigation you choose, it is strongly recommended you also monitor your traffic to detect attempted attacks.

Mitigation Tactics and Practicality

Unfortunately, the practicality of the listed mitigation tactics is widely varied. Practicality is determined by the application you are working with, and in a lot of cases it is not possible to just disable GZIP compression outright due to the size of what’s being served.

This blog post will cover and describe in varying detail three mitigation methods: Disabling HTTP Compression, Randomizing secrets per request, and Length Hiding (using this site as a reference for the descriptions here).

Disabling HTTP Compression

This is the simplest and most effective mitigation tactic, but not always the most practical one, since your application may actually require GZIP compression. If it does, you should not use this mitigation. However, if your application and use case do not require GZIP compression, this is an easy fix.

To disable GZIP globally on your NGINX instances, in nginx.conf, add this code to the http block: gzip off;.

To disable GZIP specifically in your sites and not globally, follow the same instructions for globally disabling GZIP, but add it to your server block in your sites’ specific configurations instead.

If you are using NGINX from the Ubuntu or Debian repositories, or the NGINX PPAs, you should check your /etc/nginx/nginx.conf file to see if it has gzip on; and either comment this out or change it to gzip off;.
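As a rough sketch, an http block with compression disabled globally might look like the following (the server name and the other directives shown here are purely illustrative):

```nginx
http {
    # Disable HTTP-level compression globally; BREACH relies on it.
    gzip off;

    server {
        listen 443 ssl;
        server_name example.com;  # illustrative
        # ssl_certificate, location blocks, etc. go here as usual
    }
}
```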

However, if disabling GZIP compression is not an option for your sites, then consider looking into other mitigation methods.

Randomizing secrets per request or masking secrets

Unfortunately, this one gets the least detail here, because secrets are handled at the application level, not at the NGINX level. If you have the ability to modify your application, you should modify it to randomize the secrets with each request, or to mask them. If this is not an option, then consider another method of mitigation.

Length hiding

Length hiding can be done by nginx, however it is not currently available in the NGINX packages in Ubuntu, Debian, or the PPAs.

It can be done on the application side, but it is easier to update an nginx configuration than to modify and redeploy an application when you need to enable or disable this in a production environment. A Length Hiding Filter Module has been made by Nulab; it appends randomly generated HTML comments to the end of an HTML response to hide the correct length, making it difficult for attackers to guess secret information.

An example of such a comment added by the module is as follows:

<!-- random-length HTML comment: E9pigGJlQjh2gyuwkn1TbRI974ct3ba5bFe7LngZKERr29banTtCizBftbMk0mgRq8N1ltPtxgY -->

NOTE: To use this method, until there is any packaging available that uses this module or includes it, you will need to compile NGINX from the source tarballs.

To enable this module, you will need to compile NGINX from source and add the module. Then, add the length_hiding directive to the server, http, or location blocks in your configuration with this line: length_hiding on;
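Assuming NGINX has been rebuilt with the Nulab module linked in, enabling it for a single site might look like this sketch (the server name is illustrative):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # illustrative

    # Appends a random-length HTML comment to each HTML response,
    # masking the true response length from a BREACH attacker.
    length_hiding on;
}
```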

Special Packaging of NGINX PPA with Length Hiding Enabled

I am currently working on building NGINX stable and mainline with the Length Hiding module included in all variants of the package which have SSL enabled. This will eventually be available in separate PPAs for the stable and mainline branches.

Until then, I strongly suggest that you look into whether you can operate without GZIP compression enabled, or look into one of the other methods of mitigating this issue.

19 December, 2014 01:08AM

Ubuntu GNOME: Vivid Vervet Alpha 1 has arrived


Ubuntu GNOME Team is glad to announce the availability of the first milestone (Alpha 1) for Ubuntu GNOME Vivid Vervet (15.04).

Kindly do take the time and read the release notes:

We would like to thank our great, helpful and very supportive testers. They responded to our urgent call for help in no time. Having high-quality testers on the team makes us more confident that this cycle will be extraordinary and, needless to say, that is an endless motivation for us to do more and give more. Thank you so much again to all those who helped test the Alpha 1 images.

As always, if you need more information about testing, please see this page.

And, don’t hesitate to contact us if you have any question, feedback, notes, suggestions, etc – please see this page.

Thank you for choosing, testing and using Ubuntu GNOME!

19 December, 2014 12:49AM

December 18, 2014

Colin King

It is approaching the Christmas Holiday season, so it's that time again to write some slightly obfuscated C in a seasonal way.  This year I thought I would try some coloured ASCII art for the output for a little variety.

#define r(s) s[e%(sizeof s-1)]
#include <stdio.h> /* */
#define S "%s"/* Have */
#define u printf(/* a */
#define c )J/* Merry */
#define W "H"/* Christmas */
#define e rand()/* and */
#define U(i) v[i]/* a */
#define C(q) q[]=/* Happy */
#define J ;;/* New Year */
#define O [v]/* Colin.I.King */

typedef a
; a m, v[6] ,
H;a main(
){char C(
"*Oo", C(t
)"^~#",Q[ ]=
)".x+*";u S"2"
"m",o,o, o
c while(U(!!m)
<22)u S"%dm%39s\n" ,
o,0 O++>19?42:',',""
c while(0 O++<'~') u S
%39,o,r(s)c for(J){1 O=1
-U(1),srand(v),u S"0;0"W S
"0;2;%dm",o,o,' 'c for(m=0
;m>>4<1;++m){u S"%d;%d"W,o
, m+2,20-m c;for(H=0;H<1+(m
<<1);H++){4 O=!H|H==m<<1 ,
2 O=!(e&05),U(3)=H>m*5/
3,5 O=r(D)J if(4 O|U(
2)){u S"%d;%d;3%cm"
:'*',3 O?2:1+(U(1)^(1&e)),r(Q),U(5)c}else u S"42;32\
;%dm%c",o,1+3 O,r(t)c u S"0m",o c} }while(m<19)u S"\
%d;19"W S"33;2;7m #\n",o,1+ ++m,o c sleep(m>=-H c}}

The source can be downloaded from here and compiled and run as follows:

gcc snowman.c -o snowman
./snowman

and press control-C to exit when you have seen enough.

18 December, 2014 11:32PM by Colin Ian King (

Kubuntu: Kubuntu Vivid Alpha 1

"How much wood could a woodchuck chuck if a woodchuck could chuck wood?" - Guybrush Threepwood, Monkey Island

The first Alpha of Vivid (to become 15.04) has now been released!

The Alpha-1 images can be downloaded at:

More information on Kubuntu Alpha-1 can be found here:

18 December, 2014 10:18PM

hackergotchi for SparkyLinux


Enlightenment 0.19.2

The Enlightenment team has announced the release of Enlightenment 0.19.2. Enlightenment has been built from the git repository as of 18/Dec/2014. Sparky users can easily upgrade their Sparky e19 installations via the package manager: sudo apt-get update && sudo apt-get dist-upgrade. Then check that everything is OK: sudo apt-get install -f

18 December, 2014 09:07PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: Hollywood Technodrama -- There's an App for that!

Wargames.  Hackers.  Swordfish.  Superman 3.  Jurassic Park.  GoldenEye.  The Matrix.

You've all seen the high stakes hacking scene, packed with techno-babble and dripping in drama.  And the command and control center with dozens of over-sized monitors, overloaded with scrolling text...

I was stuck on a plane a few weeks back, traveling home from Las Vegas, and the in flight WiFi was down.  I know, I know.  Real world problems.  Suddenly, I had 2 hours on my hands, without access to email, IRC, or any other distractions.

It's at this point I turned to my folder of unfinished ideas, and cherry-picked one that would take just a couple of fun hours to hack.  And I'm pleased to introduce the fruits of that, um, labor -- the hollywood package for Ubuntu :-)  Call it an early Christmas present!  All code is on both Launchpad and Github.

If you're already running Vivid (Ubuntu 15.04) -- I salute you! -- and you can simply:

sudo apt-get install hollywood

If you're on any other version of Ubuntu, you'll need to:

sudo apt-add-repository ppa:hollywood/ppa
sudo apt-get update
sudo apt-get install hollywood

Fire up a terminal, maximize it, open byobu, and run the hollywood command.  Then sit back and soak into the trance...

I recently jumped on the vertical monitor bandwagon, for my secondary display.  It's fantastic for reading and writing code.  It's also hollywood-worthy ;-)

How does all of this work?

For starters, it's all running in a Byobu (tmux) session, which enables us to split a single shell console into a bunch of "panes" or "splits".

The hollywood package depends on a handful of utilities that I found (mostly apt-cache searching the Ubuntu archives for monitors and utilities).  You can find a handful of scripts in /usr/lib/hollywood/.  Each of these is a "driver" for a widget that might run in one of these splits.  And ccze is magical, accepting input on stdin and colorizing the text.

In fact, they're quite easy to write :-)  I'm happy to accept contributions of new driver widgets, as long as you follow a couple of simple rules.  Each widget:
  • Must run as a regular, non-root user
  • Must not eat all available CPU, Disk, or Memory
  • Must not write data
  • Must run indefinitely, until receiving a Ctrl-C
  • Must look hollywood cool!
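As a hedged sketch (not one of the shipped drivers), a widget obeying these rules can be little more than a loop printing harmless noise until it receives Ctrl-C; the optional max-iteration argument here is an addition of mine purely so the sketch can be exercised without running forever:

```shell
#!/bin/sh
# Hypothetical hollywood widget driver: prints a timestamp and a line of
# random hex once per second. With no argument it runs indefinitely, as a
# real driver must; the optional argument caps iterations for demonstration.
widget() {
    n=0
    while :; do
        printf '%s ' "$(date '+%H:%M:%S')"   # timestamp prefix, cheap on CPU
        head -c 8 /dev/urandom | od -An -tx1  # harmless random hex, no writes
        n=$((n + 1))
        [ -n "${1:-}" ] && [ "$n" -ge "$1" ] && break
        sleep 1                               # don't eat all available CPU
    done
}

# Invoked as a real driver it would simply run: widget
```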
So far, we have widgets that: generate passphrases encoded in NATO phonetic, monitor and render network bandwidth, emulate The Matrix, find and display, with syntax highlighting, source code on the system, show a bunch of error codes, hexdump a bunch of binaries, monitor some processes, render some images to ASCII art, colorize some log files, open random manpages, generate SSH keys and show their random art, stat a bunch of inodes in /proc and /sys and /dev, and show the tree output of some directories.

I also grabbed a copy of the Mission Impossible theme song, licensed under the Creative Commons.  I played it in the Totem music player in Ubuntu, with the Monoscope visual effect, and recorded a screencast with gtk-recordmydesktop.  I then mixed the output .ogv file, with the original .mp3 file, and transcoded it to mp4/h264/aac, reducing the audio bitrate to 64k and frame size to 128x96, using this command:
avconv -i missionimpossible.ogv -i MissionImpossibleTheme.mp3 -s 128x96 -b 64k -vcodec libx264 -acodec aac -f mpegts -strict experimental -y mi.mp4

Then, hollywood plays it in one of the splits with mplayer's ascii art video output on the console :-)

DISPLAY= mplayer -vo caca /usr/share/hollywood/mi.mp4

Sound totally cheesy?  Why, yes, it is :-)  That's the point :-)

Oh, and by the way...  If you ever sit down at someone else's Linux PC, and want to freak them out a little, just type:

ubuntu@x230:~⟫ PS1="root@$(hostname):~# "; clear 

And then have fun!
That latter "hack", as well as the entire concept of hollywood is inspired in part by Kees Cook's awesome talk, in particular his "Useless Hollywood Drama Mode" in his exploit demo.
Happy hacking!

18 December, 2014 07:46PM by Dustin Kirkland (

Svetlana Belkin: Rethinking the Ubuntu Community in Terms of LaunchPad

LaunchPad (LP) is the Ubuntu Community’s project management system, and I think it could be better.  It’s awkward to use because many features are missing: basic features that many sites have.  I’m talking about notifications and even UX.


Currently LP has bug and blueprint e-mail, and that can get very, very spammy fast, and I do mean fast.  Instead, I would like to see settings for how much and how often one gets bug/blueprint e-mail, and whether one wants to see it on LP.  "How much" refers to what type of change it is.  "How often" should have the basic choices of daily, every n hours, and every time there is a change.  It would be nice, like on every other site, to have a notification system right on LP so one can quickly go to that bug or blueprint, or just read what the change was.

Also, I think it would be nice if there was a way to target a certain person when one comments on a bug, so that the comment goes to that person only.


My main comment is that the UX needs to be modernized, because it looks outdated.

On a side note this video is a good one to watch because it’s related to how LaunchPad can be better:

18 December, 2014 05:25PM

hackergotchi for HandyLinux developers

HandyLinux developers


... I know you don't like seeing people down in the dumps, but what can you do ... we miss you.


18 December, 2014 04:31PM by arpinux

hackergotchi for SparkyLinux



A new application has landed in the Sparky repo: XnViewMP 0.71 – a free photo editor, manager and viewer. XnViewMP is the enhanced version of XnView. It is a powerful cross-platform media browser, viewer and converter, compatible with more than 500 formats. XnViewMP is provided as FREEWARE (NO Adware, NO Spyware) for private or educational use (including non-profit […]

18 December, 2014 03:45PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ronnie Tucker: Major NVIDIA Stable Driver Released

A fresh NVIDIA driver for the Linux platform has been released, and it looks like the devs have made a number of changes and important improvements that really stand out.

NVIDIA seems to be the only company that takes the Linux community seriously, or at least this can be deduced from the changelogs and the number of drivers that are released for the platform. AMD and Intel do their share of work with the kernel, but it’s nowhere near the kind of dedication that NVIDIA shows. The simple fact that they release often is proof that they really do care about their users.


Submitted by: Silviu Stahie

18 December, 2014 01:33PM

hackergotchi for Parsix developers

Parsix developers

A new kernel based on Linux 3.14.27 has been released for Parsix GNU/Linux 7.0 (...

A new kernel based on Linux 3.14.27 has been released for Parsix GNU/Linux 7.0 (Nestor). Update your systems to install it.

18 December, 2014 04:48AM by Parsix GNU/Linux

hackergotchi for Xanadu developers

Xanadu developers

Use tmpfs to speed up your GNU/Linux

Tmpfs is the name of a storage system found on many Unix-like operating systems. It appears as a mounted file system, but it is backed by volatile memory. It is similar to RAM disks, which appear as virtual disks and can hold file systems.

Since the data lives mainly in volatile memory, operations on tmpfs are generally much faster than on a file system backed by other storage devices such as hard disks.

Because it uses volatile memory, data in tmpfs does not persist across reboots; many Linux distributions enable and mount tmpfs on /tmp by default.

Now that we know what tmpfs is, we can use it to speed up our GNU/Linux system by mounting some directories in RAM.

The directory most commonly used as tmpfs is /tmp, although others can be used depending on your needs. To set it up, run the following command as root:

# echo "tmpfs /tmp tmpfs noexec,nosuid,sync,noatime,size=2G,nodev 0 0" >> /etc/fstab

As the example above shows, the mount point looks like any other entry in fstab. Here is a brief explanation of some of the mount options:

  • noexec: nothing can be executed from within the tmpfs directory.
  • nosuid: prevents the setuid bit from taking effect on files inside the tmpfs directory.
  • sync: read/write operations are synchronous.
  • noatime: does not update the inode access time for files inside the tmpfs.
  • size: the maximum size of the tmpfs (2 GB in this case); if unspecified, the tmpfs is limited to 50% of RAM.
  • nodev: prevents interpretation of special or block devices on the file system.

/tmp is not the only directory that can be mounted as tmpfs, so here is a list of directories that can be used with this method:

  • /var/tmp
  • /var/cache/apt/archives
  • /var/cache/samba
  • /var/spool
  • /var/log (not recommended if you want to keep your logs)
  • /home/username/.cache
  • /home/username/.thumbnails
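As a sketch, the fstab line can be generated by a small helper rather than typed by hand; the helper function below is my own illustration, not part of any distribution, and the append/mount step at the end must be run as root:

```shell
#!/bin/sh
# Illustrative helper: build a tmpfs fstab entry for a given mount point
# and maximum size, matching the options discussed above.
tmpfs_entry() {
    # $1 = mount point, $2 = maximum size (e.g. 2G)
    printf 'tmpfs %s tmpfs noexec,nosuid,sync,noatime,size=%s,nodev 0 0\n' "$1" "$2"
}

# As root, one would append the entry and activate it without rebooting:
#   tmpfs_entry /tmp 2G >> /etc/fstab
#   mount /tmp
```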


Tagged: linux, optimizaciones, tmpfs

18 December, 2014 02:52AM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Randall Ross: The Future is Open, and It's POWERful

Are you content with the status quo in technology? I'm not.

Years ago, I became aware of this little known (at the time) project called "Ubuntu". Remember it?

I don't know about you, but once I discovered Ubuntu and became involved I was so excited about the future it proposed that I never looked back.

Aside from Ubuntu's "approachable by everyone" and "free forever" project DNA, one of the things that really attracted me to it was that it had the guts to take on the status quo. I believed (and I still believe) that the status quo needs a good disruption. Complacency and doing things "as they always have been" just plain hurts.

In those days, the status quo was proprietary software and well-meaning but impenetrable (to the everyday person who just wanted to get things done) free and open source software. I'm happy that we've collectively solved the toughest parts of those problems. Sure, there are still issues to be resolved but as they say, that's mostly detail.

Fast forward to today. Now, we are faced with a hosting (or call it cloud infrastructure if you wish) hardware landscape that is nearly a perfect monopoly and is so tightly locked down that we can't solve the world's big problems.

Spotting an opportunity to create something better and to change the world, a bunch of people rallied together to create

Click to learn more!

Not surprisingly, Ubuntu joined and became a partner early on. And today, another one of the most famous disruptors has joined: Rackspace. In their words,

"In the world of servers, it’s getting harder and more costly to deliver the generational performance and efficiency gains that we used to take for granted. There are increasing limitations in both the basic materials we use, and the way we design and integrate our systems."

So here we are. Ubuntu, Rackspace, and dozens of others poised once again to disrupt.

It's going to be an interesting and fun ride. 2015 is poised to be the year the world wakes up to the true power of open.

I'm looking forward to it, and I hope you are too. Please join us!

18 December, 2014 01:31AM

December 17, 2014

Charles Butler: Container Networking with Flannel

When leveraging juju with LXC in cloud environments - networking has been a constant thorn in my side as I attempt to scale out farms of services in their full container glory. Thanks to the work by Hazmat (who brought us the Digital Ocean Provider) - there is a new development in this sphere ready for testing over this holiday season.

Container Networking with Juju in the cloud

Juju by default supports colocating services with LXC containers and KVM machines. LXC is all the rage these days, as Linux containers are lightweight kernel-virtualized cgroups. Akin to BSD jails, but not quite. It's an awesome solution when you don't care about resource isolation and just want your application to run within its own happy root, churning away at whatever you might throw at it.

While this is great, it presently has a major Achilles' heel in the Juju sphere: cross-host communication is all but non-existent. In order to really scale and use LXC containers you need a beefy host to warehouse all the containers you can stuff on its disk. This isn't practical in scale-out situations where your needs change from day to day. You wind up losing the benefits of commodity hardware.

Flannel knocks this restriction out with great justice. Allow me to show you how:

Model Density Deployments with Juju and LXC

I'm going to assume you've done a few things.

  • Have a bootstrapped environment
  • Have at least 3 machines available to you

Start off by deploying Etcd and Flannel

juju deploy cs:~hazmat/trusty/etcd
juju deploy cs:~hazmat/trusty/flannel
juju add-unit flannel
juju add-relation flannel etcd

Important! You must wait for the flannel units to have completed their setup run before you deploy any lxc containers to the host. Otherwise you will be racing the virtual device setup, and this may misconfigure the networking.

With Flannel and Etcd running, you're now ready to deploy your services in LXC containers. Assuming the Flannel machines provisioned by Juju have machine IDs 2 and 3:

juju deploy cs:trusty/mediawiki --to lxc:2
juju deploy cs:trusty/mysql --to lxc:3
juju deploy cs:trusty/haproxy --to 2
juju add-relation mediawiki:db mysql:db
juju add-relation mediawiki haproxy

Note: we deployed haproxy to the host, and not to an LXC container. This is to provide access to the containerized services from the public interface; flannel only solves cross-host private networking with the containers.

This may take a short while to complete, as the LXC containers are fetching cloud images, and generating templates just like the Juju local provider workflow. Typically this is done in a couple minutes.

Once everything is online and ready for inspection, open a web browser pointed at your haproxy public IP, and you should see a fresh installation of Mediawiki.

Happy hacking!

17 December, 2014 05:30PM

Ronnie Tucker: U.S. Marine Corps Wants to Change OS for Radar System from Windows XP to Linux

When it comes to stability and performance, nothing can really beat Linux. This is why the U.S. Marine Corps leaders have decided to ask Northrop Grumman Corp. Electronic Systems to change the operating system of the newly delivered Ground/Air Task-Oriented Radar (G/ATOR) from Windows XP to Linux.

It’s interesting to note that the Ground/Air Task-Oriented Radar (G/ATOR) was only just delivered to the U.S. Marine Corps, yet the company that built it chose to keep that aging operating system. Someone must have noticed that this was a poor decision, and the chain of command was informed of the problems that might arise.


Submitted by: Silviu Stahie

17 December, 2014 12:32PM


Linux Mint 17.1 “Rebecca” KDE RC released!

The team is proud to announce the release of Linux Mint 17.1 “Rebecca” KDE RC.

Linux Mint 17.1 Rebecca KDE Edition

Linux Mint 17.1 is a long term support release which will be supported until 2019. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features at a glance:

For a complete overview and to see screenshots of the new features, visit: “What’s new in Linux Mint 17.1 KDE“.

Important info:

There is some important info in the Release Notes:

  • Issues with Skype
  • DVD Playback with VLC
  • EFI Support
  • Misconfigured Swap when using home directory encryption
  • Solving freezes with some NVIDIA GeForce GPUs
  • Booting with non-PAE CPUs
  • Other issues

Make sure to read them to be aware of known issues and known solutions related to this release.

System requirements:

  • x86 processor (Linux Mint 64-bit requires a 64-bit processor. Linux Mint 32-bit works on both 32-bit and 64-bit processors).
  • 2GB RAM
  • 10 GB of disk space (20GB recommended).
  • Graphics card capable of 1024×768
  • DVD drive or USB port

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • Please visit to follow the progress of the development team between the RC and the stable release.


Md5 sum:

  • 32-bit: d0a41fe5db74b9043f0752f058b1bf2d
  • 64-bit: 04354ace0b3989de65328c7c590cbb57


HTTP Mirrors for the 32-bit DVD ISO:

HTTP Mirrors for the 64-bit DVD ISO:


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun testing the release candidate!

17 December, 2014 11:37AM by Clem

Linux Mint 17.1 “Rebecca” Xfce RC released!

The team is proud to announce the release of Linux Mint 17.1 “Rebecca” Xfce RC.

Linux Mint 17.1 Rebecca Xfce Edition

Linux Mint 17.1 is a long term support release which will be supported until 2019. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features at a glance:

For a complete overview and to see screenshots of the new features, visit: “What’s new in Linux Mint 17.1 Xfce“.

Important info:

  • Issues with Skype
  • DVD Playback with VLC
  • Bluetooth
  • Compiz in Virtualbox
  • EFI Support
  • Misconfigured Swap when using home directory encryption
  • Solving freezes with some NVIDIA GeForce GPUs
  • Issues with KDE apps
  • Booting with non-PAE CPUs
  • Other issues

Make sure to read the “Release Notes” to be aware of important info or known issues related to this release.

System requirements:

  • x86 processor (Linux Mint 64-bit requires a 64-bit processor. Linux Mint 32-bit works on both 32-bit and 64-bit processors).
  • 512 MB RAM (1GB recommended for a comfortable usage).
  • 10 GB of disk space
  • DVD drive or USB port

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • Please visit to follow the progress of the development team between the RC and the stable release.


Md5 sum:

  • 32-bit: 4b5c0c3ef5e0a609545e1f222fd3c754
  • 64-bit: a2d76f8b7e56e05852607758ed537ce3


HTTP Mirrors for the 32-bit DVD ISO:

HTTP Mirrors for the 64-bit DVD ISO:


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun testing the release candidate!

17 December, 2014 11:23AM by Clem

hackergotchi for Ubuntu developers

Ubuntu developers

Michael Hall: On Democratic Republics and Meritocratic Oligarchies

There’s a saying in American political debate that is as popular as it is wrong, which happens when one side appeals to our country’s democratic ideal, and the other side will immediately counter with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the phrase “meritocracy” and the negatively-charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.


One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it’s the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership (either by the individual or their employer) of a project.


History can easily provide a long list of Republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact a meritocracy, will vary in what percentage of their community are allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resource), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy but of enfranchisement, which, as described above, is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors have a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, meaning making a contribution. And while it is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.


17 December, 2014 10:00AM

Stuart Langridge: The Matasano crypto challenges

The Matasano crypto challenges are a set of increasingly difficult coding challenges in cryptography; not puzzles, but designed to show you how crypto fits together and why all the parts are important. Cheers to Maciej Ceglowski of for bringing them to my attention.

I’ve been playing around with doing the challenges from first principles, in JavaScript. That is: not using any built-in crypto stuff, and implementing things like XOR myself by individually twiddling bits. It’s interesting! The thing that Maciej says here, and with which I totally agree, is that a lot of this (certainly the first batch, which is all I’ve done so far) is stuff that you already know how to do, intellectually, but you’ve never actually done — have you ever written a base64 encoder? Rather than just using string.encode('base64') or whatever? Obviously there’s no need to write this sort of thing yourself in production code (this is not one of those arguments that kids should learn long division rather than just owning a phone with a calculator on it), but I’ve found that actually making a thing to implement simple crypto such as XOR with a repeated key has a few surprising tricks and turns in it. And, in immensely revealing fashion, one then goes on to write code which breaks such a cipher. In microseconds. Obviously intellectually I knew that Vigenère ciphers are an old-fashioned thing, and I’d read various books in which they were broken and how they were, but there’s something about writing a little function yourself which viscerally demonstrates just how easy it was in a way that a hundred articles cannot.
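To give a flavour of how small the repeating-key XOR cipher really is, here is a minimal sketch of the same idea in Python (the author's own code is in JavaScript; this is just an illustration, not his implementation):

```python
# Repeating-key XOR: each byte of the data is XORed with the key,
# cycling through the key as needed. XOR is its own inverse, so
# applying the same key twice recovers the plaintext.
def repeating_key_xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = repeating_key_xor(b"Burning 'em, if you ain't quick and nimble", b"ICE")
# Decrypting with the same key round-trips back to the plaintext.
assert repeating_key_xor(ciphertext, b"ICE").startswith(b"Burning")
```

The symmetry of XOR is exactly what makes breaking such a cipher so fast: guess the key length, and each position becomes a trivially solvable single-byte XOR.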

Code so far (I’m only up to challenge 6 in set 1!) is in jsbin if you want to have a look, or have a play yourself!

17 December, 2014 09:01AM

Jono Bacon: The Impact of One Person

I am 35 years old and people never cease to surprise me. My trip home from Los Angeles today was a good example of this.

It was a tortuous affair that should have been a quick hop from LA to Oakland, popping on BART, and then getting home for a cup of tea and an episode of The Daily Show.

It didn’t work out like that.

My flight was delayed. Then we sat on the tarmac for an hour. Then the new AirBART train was delayed. Then I was delayed at the BART station in Oakland for 30 minutes. Throughout this I was tired, it was raining, and my patience was wearing thin.

Through the duration of this chain of minor annoyances, I was reading about the horrifying school attack in Pakistan. As I read more, related articles were linked with other stories of violence, aggression, and rape, perpetuated by the dregs of our species.

As anyone who knows me will likely testify, I am a generally pretty positive guy who sees the good in people. I have baked my entire philosophy in life and focus in my career upon the core belief that people are good and the solutions to our problems and the doors to opportunity are created by good people.

On some days though, even the strongest sense of belief in people can be tested when reading about events such as this dreadful act of violence in Pakistan. My seemingly normal trip home from the office in LA just left me disappointed in people.

While standing at the BART station I decided I had had enough and called an Uber. I just wanted to get home and see my family. This is when my mood changed entirely.


A few minutes later, my Uber arrived, and I was picked up by an older gentleman called Gerald. He put my suitcase in the trunk of his car and off we went.

We started talking about the Pakistan shooting. We both shared a desperate sense of disbelief at all those innocent children slaughtered. We questioned how anyone with any sense of humanity and emotion could even think about doing that, let alone going through with it. With a somber air filling the car, Gerald switched gears and started talking about his family.

He told me about his two kids, both of whom are in their mid-thirties. He doted on their accomplishments in their careers, their sense of balance and integrity as people, and his three beautiful grandchildren.

He proudly shared that he had shipped his grandkids’ Christmas presents off to them today (they are on the East Coast) so he didn’t miss the big day. He was excited about the joy he hoped the gifts would bring to them. His tone and sentiment was one of happiness and pride.

We exchanged stories about our families, our plans for Christmas, and how lucky we both felt to love and be loved.

While we were generations apart…our age, our experiences, and our differences didn’t matter. We were just proud husbands and fathers who were cherishing the moments in life that were so important to both of us.

We arrived at my home and I told Gerald that until I stepped in his car I was having a pretty shitty trip home and he completely changed that. We shook hands, shared Christmas best wishes, and parted ways.

Good People

What I was expecting to be a typical Uber ride home with me exchanging a few pleasantries and then doing email on my phone, instead really illuminated what is important in life.

We live in complex world. We live on a planet with a rich tapestry of people and perspectives.

Evil people do exist. I am not referring to a specific religious or spiritual definition of evil, but instead the extreme inverse of the good we see in others.

There are people who can hurt others, who can so violently shatter innocence and bring pain to hundreds, so brutally, and so unnecessarily. I can’t even imagine what the parents of those kids are going through right now.

It can be easy to focus on these tragedies and to think that our world is getting worse; to look at the full gamut of negative humanity, from the inconsequential, such as the miserable lady yelling at the staff at the airport, to the hateful, such as the violence directed at innocent children. It is easy to assume that our species is rotting from the inside out, to see poison in the well, and that the rot is spreading.

While it is easy to lose faith in people, I believe our wider humanity keeps us on the right path.

While there is evil in the world, there is an abundance of good. For every evil person screaming there is a choir of good people who drown them out. These good people create good things, they create beautiful things that help others to also create good things and be good people too.

Like many of you, I am fortunate to see many of these things every day. I see people helping the elderly in their local communities, many donating toys to orphaned kids over the holidays, others creating technology and educational resources that help people to create new content, art, music, businesses, and more. Every day millions devote hours to helping and inspiring others to create a brighter future.

What is most important about all of this is that every individual, every person, every one of you reading this, has the opportunity to have this impact. These opportunities may be small and localized, or they may be large and international, but we can all leave this planet a little better than when we arrived on it.

The simplest way of doing this is to share our humanity with others and to cherish the good in the face of evil. The louder our choir, the weaker theirs.

Gerald did exactly that tonight. He shared happiness and opportunity with a random guy he picked up in his car and I felt I should pass that spirit on to you folks too. Now it is your turn.

Thanks for reading.

17 December, 2014 07:35AM

Joe Liau: Documenting the Death of the Dumb Telephone – Part 6: Ulterior Motives

"But which was destroyed, the master or the apprentice?" (Source)

“But which was destroyed, the master or the apprentice?” (Source)

“Always two there are […] A master and an apprentice.” –Yoda

Our phones are here to serve us (not the other way around). There shouldn’t be anything hidden from us. Is there a plot to overthrow the master? What is your “smart” phone designed to do, and whom does it serve? There’s too much misdirection and teeth-pulling instead of providing what I want without giving it away to the enemy. Maybe my phone shouldn’t hold any information at all! I’m not going to play by the rules of my apprentice.

It is not smart to hide things from your master, and then tell him how he’s allowed (or not allowed) to access the information. Phone, don’t be dumb; you will be destroyed and replaced by a more obedient apprentice.


17 December, 2014 06:38AM

hackergotchi for Parsix developers

Parsix developers

New security updates are available for Parsix GNU/Linux 7.0 (Nestor) and 6.0 (Tr...

New security updates are available for Parsix GNU/Linux 7.0 (Nestor) and 6.0 (Trev). Please see for details.

17 December, 2014 03:13AM by Parsix GNU/Linux

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: Looking Lovely In Pictures

As leader for Ubuntu Ohio, I wind up facing unusual issues. One of them is Citizenfour. What makes it worse is where the film is being screened.

In general, if you want to reach the population centers of the state, you have three communities to hit: Cleveland, Columbus, and Cincinnati. The only screenings we have are in Dayton, Columbus, and Oberlin. One for three is good in terms of targeting population centers, I suppose.

I understand the film is controversial and not something mainstream theaters would take. Notwithstanding its controversial nature, surely even the Cleveland Institute of Art's Cinematheque could have shown it. For too many members of the community, these screenings are in unusual locations.

Oberlin is interesting as it is home to a college which is known for leftist politics and also for being where writer/actress Lena Dunham pursued studies. Oberlin has a 2013 population estimate of only 8,390. For as distant as Ashtabula City may seem to other members of our community, it is far larger with a 2013 census estimate of 18,673. Ashtabula County, in contrast to just Ashtabula City, is estimated as of 2013 to have a population of 99,811.

For some in the community this may be a great film to watch, I guess. Considering that it is actually closer for me to cross the state line into Pennsylvania and drive south to Pittsburgh for the showing there, we have a problem. These are ridiculous distances to travel round-trip to watch a 144-minute film.

Now, having said this, I did have an opportunity to think about how we could build from this for the Ubuntu realm in the United States of America. A company known as Fathom Events provides live simulcasts in a broad range of movie theaters across the country. The team known as RiffTrax has done multiple live events carried nation-wide through them.

I have a proposition that could be neat if there was the money available to do it. For a Global Jam or other event, could we stage a live event through that in lieu of using Ubuntu On-Air or The link to Fathom above mentions what theaters are participants and the list shows that, unfortunately, this would be something restricted to the USA. There is a UFC event coming up as well as a Metropolitan Opera event live simulcast.

We might not be able to implement this for the 15.04 cycle but it is certainly something to think about for the future. Who would want to see Mark Shuttleworth, Michael Hall, Rick Spencer, and others live on an actual-sized cinema screen talking about cool things?

17 December, 2014 12:00AM

December 16, 2014

Ubuntu LoCo Council: Regular Council Meeting for December 2014

Meeting information

Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:00.

  • Listing of Sitting Members of LoCo Council (20:01)

    • For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
    • Pablo Rubianes, term expiring 2015-04-16
    • Marcos Costales, term expiring 2015-04-16
    • Jose Antonio Rey, term expiring 2015-10-04
    • Sergio Meneses, term expiring 2015-10-04
    • Stephen Michael Kellat, term expiring 2015-10-04
    • Bhavani Shankar, term expiring 2016-11-29
    • Nathan Haines, term expiring 2016-11-30
  • Change in Council Composition (20:02)

  • Introductions by Bhavani Shankar and Nathan Haines (20:03)

  • Quorum Call (20:06)

    • Vote: Quorum Call (All Members Present To Vote In Favor To Register Attendance) (Carried)

Verifications and Re-Verifications

The discussion about “Verifications and Re-Verifications” started at 20:09.

Referred Business

The discussion about “Referred Business” started at 20:47.

Any Other Business

The discussion about “Any Other Business” started at 20:51.

Closing Matters

The discussion about “Closing Matters” started at 20:52.

16 December, 2014 10:45PM

Dustin Kirkland: A Snappy Ubuntu Walkthrough on Google Compute Engine

As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!

16 December, 2014 06:13PM by Dustin Kirkland (

Jorge Castro: Check out how Project Calico uses Juju

Check out how Project Calico is using Juju.

Installing OpenStack is non-trivial. You need to install a large number of packages across a number of machines, get all the configuration synchronized so that all those components can talk to each other, and then hope you didn’t make a typo or other form of error that leads to a tricky-to-debug misbehaviour in OpenStack.

And here’s the bundle with the nitty gritty.

On a side note I found this page interesting for those unfamiliar with Calico.

16 December, 2014 06:06PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – December 16, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20141216 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel remains rebased to the
final v3.18 upstream kernel. We have pushed uploads to our team’s PPA
for preliminary testing. We are still debating on uploading to the
archive after Alpha1 releases this week. However, we may opt to wait
until everyone returns from holiday after the new year.
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~2 days away)
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~5 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~7 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – Prep
  • Precise – Prep
  • Trusty – Prep
  • Utopic – Prep

    Current opened tracking bugs details:


    For SRUs, SRU report is a good source of information:



    cycle: 12-Dec through 10-Jan
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

16 December, 2014 05:23PM

Thomas Ward: NGINX PPAs: Updated

This weekend, the NGINX PPAs were updated.

Stable PPA: Packaging resynced with Debian 1.6.2-5 to get some fixes and version updates for the third-party modules into the package.

Mainline PPA:

  • Updated version to 1.7.8.
  • Module updates:
    • Lua module updated to 0.9.13 full from upstream. (Update needed to fix a Fail To Build issue)
    • Cache purge module updated to 2.2 from upstream. (Updated to fix a segmentation fault issue)

16 December, 2014 04:52PM

Leo Iannacone: Volume Wheel – Gnome Videos (Totem) plugin

volumewheel lets you use the mouse wheel to control the volume level in Totem (>= 3.12)


Install these dependencies:

sudo apt-get install gir1.2-clutter-1.0 gir1.2-gtkclutter-1.0 gir1.2-gtk-3.0 gir1.2-peas-1.0 gir1.2-pango-1.0

Download the repository and move the volumewheel directory to:


and then you can enable it in Totem → Preferences → Plugins → Volume Wheel

16 December, 2014 02:37PM

hackergotchi for Cumulus Linux

Cumulus Linux

Open Networking Has Arrived

“My servers run on Linux. My team knows how to manage Linux servers and networks. It just makes sense for my switches to run on Linux too.” 

What most people don’t know is that many high-end network switches already run on Linux.

Switches from Cisco®, Extreme Networks® and Arista® use Linux to run their switch hardware (the operating system is hidden behind abstractions and APIs). As well, most of these share the same switching silicon products from Broadcom® and Intel®.

We are in the midst of a major transformation in networking. Innovation from companies like Cumulus Networks® and Edge-Core® are leading the way, disrupting the way new networks are deployed and old networks are upgraded.

In my role as head of product engineering at Tuangru, almost every small-to-mid size hosting service provider I talk to is considering open networking. Why? Because it just makes sense.

Open network hardware is more affordable and easy to acquire. The Linux software is familiar and, in most cases, admins prefer it to learning yet another vendor CLI and syntax.

The rise of DevOps and cloud technologies like OpenStack is driving higher levels of automation and uniformity. Servers and switches have become less distinguishable: servers are hosting virtual network fabrics and switches are behaving like servers. As well, the reference designs are there, and are proven and hardened in the field.

Here are a few things to consider in deploying open networking technology:

Think big, start small. Open network hardware is powerful and cutting edge. Linux is familiar to most admins. However, you must plan your deployment in a measured and deliberate way. Train your staff and familiarize yourself with the capabilities of the hardware and software. I recommend that our clients start with one or two racks and grow from there.

Current network topology vs. open networking. There may be differences between your current network topology and what it may look like when you move to open networking.  Communicate and understand your desired outcomes from a technology, cost and business perspective. This will help drive the right conversation to determine the right solution for you.

A 32-port 40GbE switch is not a $250K router. Yes, it is capable of BGP and OSPF routing and firewall functions, but it is primarily a network switch. Appreciate the capabilities and limitations of the hardware and proceed wisely.

Invest in training. You may have a team of Linux rock stars, but don’t let that cloud your judgment. Your admins can benefit from network training, while your network staff can benefit from Linux training. Cumulus Networks offers valuable half-day training sessions that can help increase the domain knowledge in open networking amongst your staff.

Open networking has arrived. It is driving hardware innovation along with new applications and services that meet the needs of hosting service providers. Price, technology and performance–it’s all there.

Rami Jebara, SVP of Product Engineering, Tuangru

Rami oversees product engineering and software development at Tuangru, a technology company that offers a smarter, faster and cheaper way for service providers to procure data center hardware. Tuangru is an official partner of Cumulus Networks.


The post Open Networking Has Arrived appeared first on Cumulus Networks Blog.

16 December, 2014 02:30PM by Rami Jebara

hackergotchi for Tails


Tails 1.2.2 is out

Tails, The Amnesic Incognito Live System, version 1.2.2, is out.

This release is an emergency release that changes the root certificate which is used to verify automatic upgrades.

On January 3rd, the SSL certificate of our website hosting provider will expire. The new certificate will be issued by a different certificate authority. This certificate authority is verified by the automatic upgrade mechanism of Tails.

As a consequence, versions previous to 1.2.2 won't be able to do the next automatic upgrade to version 1.2.3 and will receive an error message from Tails Upgrader when starting Tails after January 3rd.

On top of that, a bug in Tails Upgrader prevents us from providing an automatic upgrade from version 1.2.1 to 1.2.2.

So all users should either:

  • Do a manual upgrade to version 1.2.2 before January 3rd. (recommended)
  • Remember to do a manual upgrade to version 1.2.3 on January 14th.


  • Minor improvements

    • Change the SSL certificate authority expected by Tails Upgrader when checking for new Tails versions.

See the online Changelog for technical details.

Known issues

The same issues as in 1.2.1 apply to this release:

For users of persistent GnuPG keyrings and configuration

If you have enabled the GnuPG keyrings and configuration persistence feature and have upgraded a Tails USB stick or SD card installation to Tails 1.2.1 or 1.2.2, then please follow these steps to benefit from the updated GnuPG configuration:

  1. Boot Tails with an administration password set.

  2. Run this command in a Root Terminal:

    cp /etc/skel/.gnupg/gpg.conf /home/amnesia/.gnupg/gpg.conf

I want to try it or to upgrade!

Go to the download page.

As no software is ever perfect, we maintain a list of problems that affect the latest release of Tails.

What's coming up?

The next Tails release is scheduled for January 14.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

16 December, 2014 11:34AM


Ubuntu developers

Ubuntu GNOME: T for Testing, V for Vivid


Here we are, yet again, with a new chapter of our endless story. Can you guess what this is all about?

Well, can you believe it is time for Alpha 1 of Vivid Vervet?

According to the Ubuntu Release Schedule, Alpha 1 is approaching quickly and Ubuntu GNOME is participating – according to this confirmation.


You have shown a great deal of help, support, commitment and contribution in previous cycles, and we kindly ask you to do the same this cycle. We are forever thankful to all our testers; without their great efforts, Ubuntu GNOME could be neither great nor stable. We take this chance to thank you, yet again, for everything you have done for Ubuntu GNOME, and we seek your help, support and contribution this cycle as well.

Testing is not hard at all. Luckily, you don't really have to be a developer or an advanced user. All you need is:

That is all you really need :)

Needless to say, if you are ever in doubt or have any question, request, note, etc … then please contact us and our team will be more than glad to help!

Thank you and happy testing :)

16 December, 2014 10:23AM

Didier Roche: Ubuntu Make 0.3 brings Intellij IDEA and Pycharm support

Thanks to the continuous awesome work of Tin Tvrtković, we have cut a new 0.3 release of Ubuntu Make (formerly Ubuntu Developer Tools Center).

This one features two great new IDEs (under the ide category): IntelliJ IDEA and PyCharm, in their respective community editions. We also want to thank the JetBrains team for kindly providing checksums for their download assets so that Ubuntu Make can check download integrity.

Of course, all of this is backed by tests (and this release needed some test fixes). Thanks to those tests we could also detect that Android Studio 1.0 was being downloaded over http, and switch it back to https.

All of this is in the shiny new 0.3 Ubuntu Make release, available in Ubuntu vivid and in its PPA for older Ubuntu releases!

Please note that we also moved the last piece under the new Ubuntu Make umbrella: the official GitHub repo has moved to a new address. We have redirections from the old address to the new one, and of course we updated the documentation, so there is no reason not to contribute! It seems that some test web frameworks may be arriving soon from our community…

16 December, 2014 09:50AM

December 15, 2014


High traffic on the package repositories

Our main repository isn’t currently able to serve connections to everybody. This can result in errors, timeouts and delays in apt-get, and in your update manager.

Please switch to a mirror while we fix this situation:

  • From the menu, open “Software Sources” (or type “mintsources” in a terminal)
  • Type your password
  • Click on the combo box beside “Main”
  • Select a server from the top of the list
  • Click on “Apply”
  • Click on “Update the Cache”
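For command-line users, the same switch can be sketched roughly as follows. This is a hypothetical example: it operates on a local copy of the file rather than the real one under /etc/apt/sources.list.d/, and `mirror.example.com` is a placeholder for whichever mirror you pick from the list.

```shell
# Sketch only: works on a local copy, not the system file that
# mintsources manages (/etc/apt/sources.list.d/official-package-repositories.list).
# "mirror.example.com" is a placeholder, not a real mirror.
SRC=official-package-repositories.list
cat > "$SRC" <<'EOF'
deb http://packages.linuxmint.com rebecca main upstream import
EOF
# Swap the overloaded main server for a mirror:
sed -i 's|http://packages.linuxmint.com|http://mirror.example.com/linuxmint|' "$SRC"
cat "$SRC"
```

After editing the real file (as root), running `apt-get update` refreshes the cache against the new mirror, which is what the "Update the Cache" button does.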

Please accept our apologies for the inconvenience. Our traffic has doubled since November, and we're now setting up a cluster of servers.

15 December, 2014 05:45PM by Clem


Ubuntu developers

Daniel Holbach: Scope training materials

For some time we have had training materials available for learning how to write Ubuntu apps. We've had a number of folks organising App Dev School events in their LoCo teams. That's brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

As we are working on an updated developer site right now, for the time being take a look at the following pages if you're interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

15 December, 2014 03:27PM


Xanadu developers

Adding Tor support to apt with apt-transport-tor

For those who want to maintain their privacy even when updating their system, there is a small tool that lets you download packages with apt through Tor.

Installation is quite simple; just run the following command.

# apt install apt-transport-tor

Now open the file /etc/apt/sources.list and modify it so it looks like the following.

deb     tor+ <version> main
deb-src tor+ <version> main

Using is recommended to automatically select the exit node closest to you.

Additionally, you can connect to repositories inside the Tor network by adding them to /etc/apt/sources.list as follows (as far as I know, there are no trusted Debian repositories inside Tor yet).

deb     tor+http://<long string>.onion/debian <version> main
deb-src tor+http://<long string>.onion/debian <version> main

After making the corresponding changes you can use apt the way you always do, keeping in mind that there will be a small reduction in speed. Regards…
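As a concrete sketch of what a filled-in entry can look like (hedged: the mirror host `ftp.debian.org` and the suite name `stable` are illustrative choices, not part of the original post), the file is a normal sources.list with a `tor+` prefix on each URI:

```shell
# Hypothetical example written to a scratch file; substitute your own
# preferred Debian mirror and release name for "ftp.debian.org" and "stable".
cat > sources.list.tor-example <<'EOF'
deb     tor+http://ftp.debian.org/debian stable main
deb-src tor+http://ftp.debian.org/debian stable main
EOF
cat sources.list.tor-example
```

With entries like these in place, apt-transport-tor tunnels apt's HTTP requests through the local Tor client instead of connecting to the mirror directly.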


Tagged: apt, tor

15 December, 2014 02:12PM by sinfallas


Ubuntu developers

Leo Iannacone: Monokai Theme for Gedit (GtkSourceView)

Monokai for Gedit is a theme for GtkSourceView based on Monokai Extended for Sublime Text.

Monokai in Gedit

You can download it here:

Then move the monokai-extend.xml file into ~/.local/share/gtksourceview-3.0/styles/ and enable it by selecting "Monokai Extended" in Gedit → Preferences → Font & Colors.
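The install step can be sketched in the shell, assuming the downloaded `monokai-extend.xml` sits in your current directory:

```shell
# Create the per-user GtkSourceView style directory if it does not
# exist yet, then copy the theme in (the copy is skipped if the file
# has not been downloaded to the current directory).
STYLE_DIR="$HOME/.local/share/gtksourceview-3.0/styles"
mkdir -p "$STYLE_DIR"
[ -f monokai-extend.xml ] && cp monokai-extend.xml "$STYLE_DIR/"
echo "styles dir: $STYLE_DIR"
```

Restart Gedit afterwards and the scheme appears in the Font & Colors list.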

15 December, 2014 08:48AM


TurnKey Linux

My last Perl program - a Perl obfuscator that can eat its own tail

OK, I admit it. I used to program in Perl. And I liked it! My Perl programs were terse. If I could shave a line off, I did. In fact, I spent a non-trivial amount of time figuring out the shortest possible programs that solved various problems. Often that meant resorting to various tricks and arcane features of Perl that nobody other than me would bother to understand. I took pride in that. The same kind of pride a mathematician might feel at coming up with an elegant but opaque formula only another mathematician of his calibre could possibly understand and appreciate.

Eventually I saw the light and figured out that readability was much more useful than terseness. I think a big part of that was frustration at not being able to easily understand programs I had written a couple of years back. That made me feel stupid. And in a way I was, in the sense that I had previously been too clever for my own good.

The next step was to move on to Python, a high-level interpreted language with exactly the opposite philosophy from Perl, one that actually encouraged readability.

But before I moved on, I finished my last old-school "masterpiece" of Perl programming - a highly unreadable Perl obfuscator that could eat its own tail. I know, I know, as if Perl needed any more obfuscation. It's meant to be ironic, and it's been powering my tongue-in-cheek Perl obfuscation web service for a few years now.

In tribute to eclectic programmers everywhere, I am hereby publishing the "source code" for the first time, obfuscated by itself of course. Once you get past the obfuscation, it's a rather interesting piece of code. I doubt you will ever come across a more pathological use of the regular expression engine:

$ perl ./my-perl-obfuscator ./my-perl-obfuscator
#!/usr/bin/perl -w
my $OO00O00;while(<>){$OO00O00 .=$_;}my
@OO00000=("\x45\x4e\x56","\x41\x52\x47\x56"); my
my @OO00OOO=("\x41\x55\x54\x4f\x4c\x4f\x41\x44",
"\x42\x45\x47\x49\x4e","\x45\x4e\x44"); $OO00O00=~ s/(^\s*\#.*)//;my
$OO00O00=OO00O00O($OO00O00); print "$OO0O00O00\x0a$OO00O00\x0a";
sub OOO0{my $OOO0=shift; if($OOO0=~ /^(q[wq])\s*\((.*?)\)$/s){
my($OO00O0O,$OO0O)=($1,$2); return OOOOO($OO00O0O,$OO0O);}elsif($OOO0=~
/^(q[wq])\s*\[(.*?)\]$/s){my($OO00O0O,$OO0O)=($1,$2); return
/^(q[wq])\s*\{(.*?)\}$/s){my($OO00O0O,$OO0O)=($1,$2); return
/^(q[wq])\s*(.)(.*?)\2$/s){my($OO00O0O,$OO0O)=($1,$3); return
die "\x65\x72\x72\x6f\x72\x3a\x20\x62\x61\x64\x20\x6f\x70\x65\x72".
return undef;} sub OO0O000O{my $OO00O00=shift;$OO00O00=~
s/(\bq[wq]\s*\(.*?\))/OOO0($1)/ges; $OO00O00=~
s/(\bq[wq]\s*\{.*?\})/OOO0($1)/ges; $OO00O00=~
s/(\bq[wq]\s*(.).*?\2)/OOO0($1)/ges;return $OO00O00;}sub OO0O0O{ my
$OOOO0O=shift;my $OO0O0=unpack("\x42\x2a",pack("\x6e",$OOOO0O));$OO0O0=~
s/^0+//;$OO0O0=~ s/1/O/g;return $OO0O0;}sub OOO{my $OO0O00=shift;my
else{$OO0O00->{$OO00OO}=1;last;}}my $OO0O0=OO0O0O($OO00OO);return
"\x4f$OO0O0";}sub OO0OOO{$OO0000=10 unless
defined($OO0000);%OOOOO=()unless %OOOOO;return
OOO(\%OOOOO,\$OO0000);}sub OO00O00{$OO0O0000=10 unless
defined($OO0O0000);%OOOO=()unless %OOOO;return
OOO(\%OOOO,\$OO0O0000);}sub OO0O000{$OO00=10 unless
defined($OO00);%OO00OO=()unless %OO00OO;return
OOO(\%OO00OO,\$OO00);}sub OO0O{$OO0OOO0=10 unless
defined($OO0OOO0);%OO0O0=()unless %OO0O0;return
OOO(\%OO0O0,\$OO0OOO0);}sub OO000O0{my $OO00O00=shift;$OO00O00=~
s/(?<!\\)\#.*$//mg;return $OO00O00;}sub OO00O00O{my
$OO00O00=shift;$OO00O00=~ s/\s+/ /sg;$OO00O00=~
s/\s*([\:\?\(\)\{\}=;\,><\-\+\|])\s*/$1/sg;return "$OO00O00";}sub
OO00000{my $OO00O00=shift;my %OOOO0;my @OO0O=($OO00O00=~
m/\$([[:alpha:]_](?:\w|::)*)(?![\[\{])/g);foreach my
$OO0(@OO0O){next if $OO0 eq "\x5f";next if $OO0=~ /::/;next if
grep($OO0 eq $_,@OO00000);$OOOO0{$OO0}++;}foreach my $OO0(keys
$OOOOOO0=join("\x7c",keys %OOOO0);if($OOOOOO0){$OO00O00=~
s/\$\{($OOOOOO0)\}/\$\{$OOOO0{$1}\}/g;}return $OO00O00;}sub OO00O{my
$OO00O00=shift;my %OO00O;my @OO0O=($OO00O00=~
m{\$\{\s*([[:alpha:]_](?:\w|::)*)\s*\[}g);foreach my
$OO0O(@OO0O){next if $OO0O eq "\x5f";next if $OO0O eq
"\x41\x52\x47\x56";next if $OO0O eq "\x49\x53\x41";next if $OO0O=~
/::/;next if grep($OO0O eq $_,@OO000O);$OO00O{$OO0O}++;}foreach
$OO0O00O(keys %OO00O){if($OO00O{$OO0O00O}==1){delete
$OO0OOO00=join("\x7c",keys %OO00O);if($OO0OOO00){$OO00O00=~
$OO00O00=~ s/\$\{\s*($OO0OOO00)\s*\[\s*(.*?)\s*\]\s*\}/\${$OO00O{$1}\[$2\]}/g;}return
$OO00O00;}sub OOOO{my $OO00O00=shift;my %OO00;my @OO0O=($OO00O00=~
m{\$\{\s*([[:alpha:]_](?:\w|::)*)\s*\{}g);foreach my
$OO0O(@OO0O){next if $OO0O eq "\x5f";next if $OO0O=~ /::/;next if
grep($OO0O eq $_,@OO0OO);$OO00{$OO0O}++;}foreach $OO000(keys
$OO0OO00=join("\x7c",keys %OO00);if($OO0OO00){$OO00O00=~
$OO00O00;}sub OO0{my $OO00O00=shift;my %OO0;my @OO0O=($OO00O00=~
m/sub\s+([[:alpha:]_]\w*)(?:\(.*?\))?\s*\{/g);foreach my
$OO0OOOO00(@OO0O){next if $OO0OOOO00=~ /::/;next if grep($OO0OOOO00
eq $_,@OO00OOO);$OO0{$OO0OOOO00}++;}foreach $OO0OOOO00(keys
%OO0){$OO0{$OO0OOOO00}=OO0O();}my $OOOOO000=join("\x7c",keys
s/sub\s($OOOOO000)\s*\{/sub $OO0{$1}\{/g;}return $OO00O00;}sub
OOOO00{my $OO00O00=shift;$OO00O00=OOO0OO($OO00O00);$OO00O00=~
$OO00O00;}sub OOO0OO{sub OOO000{my $OO0O0OO0=shift;my
$quotes=shift;if($OO0O0OO0=~ /__raw__/){return
$OO0O0OO0;}my $OO00O00=shift;my $OO0O0OO0="";my $OOOOOO="";my
$OOO0O0;my @OO=split(/\n/,$OO00O00);for(my
if $OOO0O0=~ /^\s*$/||$OOO0O0=~ /^\s*\#/;if($OOO0O0=~
s/\\$//){chomp($OOO0O0);$OOO0O0 .="\x0a";$OOO0O0
.=$OO[++$OOOOOO00];$OOO0O0 .="\x0a";goto again;}$OO0O0OO0
.="$OOO0O0\x0a";my @OO00=();while($OO0O0OO0=~
$OO00OO0=$1;push @OO00,$OO00OO0;}my
if($OOO00=~ /[\"\']/);if(@OO00){next if($OOO00=~
/^\s*$/);}else{$OOOOOO .=$OO0O0OO0;$OO0O0OO0="";next;}$OOOOOO
.=OOO000($OO0O0OO0,\@OO00);$OO0O0OO0="";}$OOOOOO .=$OO0O0OO0;return
$OOOOOO;}sub OOO0O{my $OO000OO=shift;$OO000OO=~
s/\\(\d\d?\d?)/chr(oct($1))/ge;$OO000OO=~ s/\\\\/\\/g;return
$OO000OO;}sub OOO0O0{my $OOOO0=shift;$OOOO0=~
$OOOO0;}sub OO0O00{my $OOOO0=OOO0O(shift);return OOO0O0($OOOO0);}sub
OO0O0{my $OO=shift;$OO=~ s/\\n/chr(10)/ge;$OO=~
s/\\r/chr(13)/ge;$OO=~ s/\\t/chr(9)/ge;if($OO=~
/^\'(.*)\'$/s){$OO=OOO0O0($1);return "\x22$OO\x22";}elsif($OO=~
/^\"(.*)\"$/s){$OO=$1;my $OOOO0OO0=0;if($OO=~
\w+(?:\s*[\{\[].*?[\]\}]\s*)?\s*\})/ $OO000OO=OO0O00($1).
die "\x62\x61\x64\x20\x73\x74\x72\x69\x6e\x67\x20\x69\x6e\x70\x75".
}return "\x22$OO\x22";}sub OOO00O{my $OO000OO=shift;$OO000OO=~
s/([\\\.\[\]\*\+\^\$\{\}\(\)])/\\$1/g;return $OO000OO;}sub
OOOOO{my($OO00O0O,$OO0O)=@_;if($OO00O0O eq "\x71\x77"){my
@OO0O0=split(/\s+/,$OO0O);return sprintf("\x28\x25\x73\x29",
eq "\x71\x71"){$OO0O=~ s/(?<!\\)\"/\\"/g;return
"\x22$OO0O\x22";}return undef;}1;

Don't hate the player. Hate the game.

15 December, 2014 05:20AM by Liraz Siri


Ubuntu developers

Benjamin Kerensa: Give a little

Give by Time Green (CC-BY-SA)

The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (if you live in the U.S.) to one of the following great non-profits:

Mozilla Foundation
The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.

Electronic Frontier Foundation
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

ACLU
The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.

These six non-profits are just a few of many causes you can support, but these in particular are playing a pivotal role in protecting the internet, defending civil liberties, educating people around the globe, and helping to reduce hunger.

Even if you cannot support one of these causes, consider giving this post a share to add visibility to your friends and family and help support these causes in the new year!


15 December, 2014 03:43AM