August 03, 2015

hackergotchi for Webconverger


ChromeOS versus Webconverger

Webconverger's main competitor, especially in the K12 school (assessments) market, is Google with their Google Chrome OS aka Chrome{book,box} product, which is typically sold as a hardware and software bundle.

It's high time to post an update on our 6 year old Comparing Google Chrome OS blog, which demonstrated that we had support for the inexpensive Netbook category of PCs and speculated about what Google would do. Six years later, Google is domineering.

To summarise, the main advantages that Webconverger has over Google are:

  • Simpler: less is more. There is less to understand, and we provide great customer service if you get stuck!
  • No lock-in: Google Cloud Print, specific hardware, (App) extensions and the upsell of other office-related services are all proprietary ways for Google to lock you into their solution
  • We don't try to own your users; we don't ask users to log in or register!

The products are comparable

Webconverger actually started before Google did, in 2007.


Google recently became somewhat transparent with their pricing. We too offer discounts; based upon their reseller's perpetual-license page, they charge 150USD whilst we charge 200USD.

ChromeOS pricing 2015

ChromeOS's perpetual licenses seem analogous to our one-off subscriptions. Like us they also do annual subscriptions, though oddly these are limited to North America. They charge 50USD per year and we currently charge 100USD.

We want to be competitive, so do please ask sales for a quote to beat.


The hardware is difficult to directly compare since they design their own hardware and subsidise it accordingly. For example a Chromebook at 250USD is a good price, though you need to remember you are effectively locked to running Google software on that hardware, limiting your choice.

Webconverger does not design hardware. We focus on supplying software that runs on any PC, which broadens your hardware and software choices too. Have old PC stock you want to deploy Webconverger to? No problem. Want to run it in VirtualBox? No problem with Webconverger.

Google's embrace, extend and extinguish... via extensions

One of our system integrators reports why he chose Webconverger:

  1. The Chrome feature that allows a user to sign in to the browser itself is inappropriate in shared-use environments, but it happens all the time in practice and users frequently forget to sign out! Webconverger prevents this user misbehavior completely.

  2. Installing Chrome Extensions is quickly becoming a 'go to' way for malicious actors to get malware onto endpoints even without Admin rights and again Webconverger prevents this from becoming an issue.

  3. Simply closing all tabs to get a fresh session is much faster than the login/logout required on a fat client. In the education environment Webconverger rocks!

One other thing to note is that malicious Chrome Extensions 'follow' the Google user account from computer to computer. Any malicious Extensions must be deleted in the user account itself which adds to Administrator workload.

What is the price of freedom?

We do currently charge a little more for our software. However, running unconfigured, or testing one machine without configuration management, is free of cost. As a business supporting an opensource product, we want to attract and invoice large deployments: deployments where choice, privacy and openness about the way their clients get on the Web are important. They choose Webconverger.

Please contact sales for a competitive quote.

03 August, 2015 07:30AM

August 02, 2015

hackergotchi for Maemo developers

Maemo developers

Re-Announcement of the Q2 2015 Community Council Election

Dear friends and Maemoans, as it happens we did not get enough confirmed candidates in the set nomination period, so according to the Council Election Rules we need to extend the nomination period by 4 weeks.

This will push all dates one month into the future. The new voting schedule is as follows:

  • Nomination period continues until Saturday 29.08.2015
  • Contemplation period starts on Sunday 30.08.2015 and continues until Saturday 05.09.2015.
  • Election period starts on Sunday 06.09.2015 and continues until 12.09.2015.

Currently we have 5 confirmed candidates. (endsormeans, juiceme, reinob, Win7Mac, peterleinchen)

On behalf of the outgoing community council,

Jussi Ohenoja


02 August, 2015 10:00PM by Jussi Ohenoja

2015-07-28 Council Meeting Minutes

Meeting held 2015-07-28 on FreeNode, channel #maemo-meeting (logs)

Attending: Jussi Ohenoja (juiceme), Peter Leinchen (peterleinchen), Gido Griese (Win7Mac), Paul Carlin (endsormeans)

Partial: Ruediger Schiller (chem|st)

Absent: Oksana Tkachenko (Oksana/Wikiwide), William McBee (gerbick), Alexander Kozhevnikov (MentalistTraceur)

Summary of topics (ordered by discussion):

  • Ongoing Council Elections
  • TM donation status

(Topic Ongoing Council Elections):

  • Elections announcement and nominations
  • Wiki page
  • Juiceme asked if endsormeans is willing to run for Council, and after some consideration he accepted candidacy.
  • There are now four confirmed candidates (reinob, endsormeans, peterleinchen, juiceme) but more are needed.

(Topic TM donation status):

  • Bitcoin account setup is going on.
  • Win7Mac is going to contact reinob for assistance/information on TM related matters.
  • Recap of the expiring TMs:
    • Europe - already expired, needs immediate refresh
    • USA - already expired, will not be refreshed
    • Taiwan - 2016
    • Brazil - 2018
    • Singapore - 2019
    • Japan, Korea, Russia, Switzerland, Norway - 2020
    • Canada - 2023

Action Items:
  • old items:
    • The selected Code of Conduct (KDE) still needs to be published on TMO.
    • Looking into automatic calculation of election results ...
    • Contacting freemangordon and merlin1991 about auto-builder: CSSU-thumb target, GCC versions?
    • Getting maemo trademark registration (everywhere?) renewed (and transferred to MCeV) by the end of February (or within six months since expiry date).
    • Archiving Ovi/Nokia store, especially for Harmattan.
    • Contacting Daphne Won on Facebook and LinkedIn to get administrator rights on Facebook for a Maemo member to migrate the plugin to v2.0 API and maintain it in the future.
  • new items:

02 August, 2015 08:23PM by Jussi Ohenoja

hackergotchi for Ubuntu developers

Ubuntu developers

Launchpad News: Launchpad news, July 2015

Here’s a summary of what the Launchpad team got up to in July.


  • We fixed a regression in the wrapping layout of side-by-side diffs (#1436483)
  • Various code pages now have meta tags to redirect “go get” to the appropriate Bazaar or Git URL, allowing the removal of special-casing from the “go” tool (#1465467)
  • Merge proposal diffs including mention of binary patches no longer crash the new-and-improved code review comment mail logic (#1471426), and we fixed some line-counting bugs in that logic as well (#1472045)
  • Links to the Git code browsing interface now use shorter URL forms

We’ve also made a fair amount of progress on adding support for triggering webhooks from Launchpad (#342729), which will initially be hooked up for pushes to Git repositories.  The basic code model, webservice API, and job retry logic are all in place now, but we need to sort out a few more things including web UI and locking down the proxy configuration before we make it available for general use.  We’ll post a dedicated article about this once the feature becomes available.

Mail notifications

We posted recently about improved filtering options (#1474071).  In the process of doing so, we cleaned up several older problems with the mails we send:

  • Notifications for a bug’s initial message no longer include a References header, which confuses some versions of some mail clients (#320034)
  • Package upload notifications no longer attempt to transliterate non-ASCII characters in package maintainer names into ASCII equivalents; they now use RFC2047 encoding instead (#362957)
  • Notifications about duplicate bugs now include an X-Launchpad-Bug-Duplicate header (#363995)
  • Package build failure notifications now include a “You are receiving this email because …” rationale (#410893)
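For readers unfamiliar with the maintainer-name fix above: RFC 2047 "encoded words" let non-ASCII text travel safely in mail headers, and Python's standard library implements them. A small sketch (the maintainer name here is a made-up example, not a real Launchpad one):

```python
from email.header import Header, decode_header

# Hypothetical package maintainer with non-ASCII characters in their name
name = "Jörg Müller"

# Encode as an RFC 2047 encoded word, e.g. "=?utf-8?...?=", instead of
# transliterating to a lossy ASCII form like "Jorg Muller"
encoded = Header(name, "utf-8").encode()
print(encoded)

# Decoding round-trips to the original name with no information lost
raw, charset = decode_header(encoded)[0]
assert raw.decode(charset) == name
```

This is only an illustration of the encoding scheme the bug fix adopts, not Launchpad's actual code.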

Package build infrastructure

  • The sbuild upgrade last month introduced some regressions in our handling of package builds that need to wait for dependencies (e.g. #1468755), and it’s taken a few goes to get this right; this is somewhat improved now, and the next builder deployment will fix all the currently-known bugs in this area
  • In the same area, we’ve made some progress on adding minimal support for Debian’s new build profiles syntax, applying fixes to upload processing and dependency-wait analysis, although this should still be considered bleeding-edge and unlikely to work from end to end
  • We’ve been working on adding support for building snap packages (#1476405), but there’s still more to do here; we should be able to make this available to some alpha testers around mid-August


  • We’ve arranged to redirect translations for the overlay PPA used for current Ubuntu phone images to the ubuntu-rtm/15.04 series so that they can be translated effectively (#1463723); we’re still working on copying translations into place from before this fix
  • Projects and project groups no longer have separately-editable “display name” and “title” fields, which were very similar in purpose; they now just have display names (#1853, #4449)
  • Cancelled live file system builds are sorted to the end of the build history, rather than the start (#1424672)

02 August, 2015 08:01PM

Randall Ross: Real Local Community Growth

I was initially annoyed to see implications earlier on Planet Ubuntu that the Ubuntu community was in decline. I was tempted to name this article "Why the Negativity? Let's Get On With Making Ubuntu Awesome".

The Ubuntu community is not in decline, if you take a broader view and stick to basics. Some (may) continue to focus on a very narrow segment of society (mostly developers) and that's a shame. It's also not the Ubuntu I joined. I seem to recall that "We're all one." We do not count certain types of people over others, and we should not proclaim the decline of a community just because one thin demographic is not increasing in numbers.

Let's define some terms:

  • A metropolitan area (city) in British Columbia, Canada.
  • An area that is traversable on foot, bike or public transit within 45 minutes.
  • A group of people that share an affinity to one another, historically by virtue of being local.
  • An increase in numbers over time.
  • Without limit.

Any questions?

02 August, 2015 07:12PM

Andrea Corbellini: Hello Pelican!

Today I switched from WordPress.com to Pelican and GitHub Pages.

First off, let me say: almost all URLs that were previously working should still work. Only the feed URLs are broken, and this is not something I can fix. If you were following my blog via a feed reader, you should update to the new feed. Sorry for the inconvenience.

Having said that, I'd like to share with you the motivation that made me move and the details of the migration.

The bad things of WordPress

Now, this is not meant to be a rant, so I'll be pretty concise. WordPress, the content management system, is an excellent platform for blogging. Easy to start with, easy to maintain, easy to use. WordPress.com makes things even easier. It also comes with many useful features, like comments and social network integration.

The problem is: you can't customize things or add features without paying. Of course, this is business, and I do not want to discuss business decisions made at WordPress.com. Not only that, but I could live fine with most of the major limitations. Also, I was perfectly conscious of this kind of problem with WordPress.com when I started (after all, this is not the first blog I have started).

I actually became upset with WordPress.com when writing the series of blog posts about Elliptic Curve Cryptography. While writing those articles, I spent a lot of time on workarounds to overcome its limitations. Being used to Vim and its advanced features, I also found the editors (both the old and the new one) a great obstacle to getting things done quickly. I do not want to go into the details of the problems I'm referring to; what matters is that, eventually, I gave up and realized it was time to move on and seek an alternative.

Why Pelican

Pelican is a static site generator. I had always thought that a static site had too many limitations for me. But while seeking an alternative to WordPress.com, I realized that many of those limitations did not affect me in any way. Actually, with a static site I can do everything I want: edit my articles with Vim, render my equations with MathJax, customize my theme, version control my content, and write scripts to post-process my content.

The only bad thing about Pelican is that it does not come with any theme I truly like. I decided to make my own. I'm not entirely satisfied with it, as I feel it is too "anonymous", but I believe it is fully responsive, fast, readable and offers all the features I want. Perhaps I'll tweak it a little more to make it more "personal".

Setting up Pelican and migrating everything required some time, but at least this time I worked on true solutions, not on ugly hacks and workarounds like I did with WordPress. This implies that when writing articles I will be able to focus more on content than other details.
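For the curious, a Pelican site of this kind boils down to a small pelicanconf.py. Here is a minimal sketch; every value below is a placeholder assumption, not the author's actual configuration:

```python
# pelicanconf.py -- minimal Pelican settings (all values are placeholders)
AUTHOR = "Andrea Corbellini"     # shown as article author
SITENAME = "Andrea Corbellini"   # shown in the theme header
SITEURL = ""                     # left empty during local development
PATH = "content"                 # Markdown/reST sources live here
TIMEZONE = "Europe/Rome"         # assumption: an Italian timezone
DEFAULT_LANG = "en"
THEME = "themes/custom"          # a hand-made theme, as described above
DEFAULT_PAGINATION = 10          # articles per index page
```

Running `pelican content` then renders the site into `output/`, ready to be pushed to a GitHub Pages repository.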

Why not other static site generators

In short: Pelican is written in Python and to my eyes it looked better than the other Python static site generators. I'll be honest and say that I did not truly evaluate all of the alternatives: I knew of others who had switched to Pelican, and that made me try Pelican before all other solutions.


In the end I decided to leave WordPress for Pelican hosted on GitHub Pages. I'm pretty satisfied with the result I got. The nature of GitHub Pages prevents me from using HTTP redirects (and therefore the old feed links are broken), however in exchange I've got much more freedom, and this is what matters to me.

02 August, 2015 06:55PM

hackergotchi for HandyLinux


News for August


The July page turns and the August one opens... Which means, for me: holidays!!! After my Russian escapade, I will certainly be re-discovering the surroundings of my dear Lot-et-Garonne; there are so many beautiful things to see around here: the Lot, the Dordogne, the Corrèze... (and of course one or two hikes in the Pyrenees).
So, if you have the chance, enjoy it and don't forget the sunscreen :)

News from the world of computing
The news has made the rounds of the specialised press: Windows 10 has just been released, and on its first day of launch there were 14 million downloads. So is the big question raised a few years ago about "bundled sales" (selling a computer pre-loaded with the Windows operating system) still unanswered?
I found an article from 3 years ago about a dispute between UFC-Que Choisir and HP:

The Court of Cassation has just ruled that the bundled sale of consumer computers and Windows on HP's website was not "unfair". It overturns the May 2011 ruling of the Versailles court of appeal, which had ordered the manufacturer to state the price of the operating system on its site.

While the court did not rule on the merits (the legality of this bundled sale without an alternative), it found that the practice could not be challenged on the form.

So in the end all is well for W$: they will soon have only one environment to manage (Windows 10) across all screens and can continue to impose it on every reseller in France. If you have more information, let us know!
And what about Linux in all this, how is it doing? (how are we doing?)

It is impossible to know exactly how many people use GNU/Linux systems, or how many computers run a free operating system. That is precisely the principle of freedom: users are not obliged to tell anyone that they use these systems, they do not need to register their copy, and nobody is tracked.

The Linux Counter estimates there are roughly 73 million users, based on about 130,000 registered users (9/2014), but this figure is simply an extrapolation from the users who registered on their site. They believe that between 0.2% and 5% of Linux users have registered.

What follows is a pure estimate. We could say there are approximately 2 billion computers in use today (around 1 billion in 2008 and an estimated 2 billion in 2014, according to Gartner). In that case, the 1.67% of GNU/Linux computers browsing the Internet represent something like 33,400,000 machines. Counting Android devices, that figure rises to 391,200,000 Linux-based operating systems browsing the web.
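The back-of-the-envelope arithmetic above can be reproduced in a few lines of Python (the percentages and device totals are the article's own estimates, not measured data):

```python
# Reproduce the article's rough estimate of Linux machines on the web
computers_in_use = 2_000_000_000   # Gartner's estimate for 2014
desktop_linux_share = 0.0167       # 1.67% of web-browsing computers

desktop_linux = round(computers_in_use * desktop_linux_share)
print(desktop_linux)               # 33400000 GNU/Linux machines

linux_kernel_total = 391_200_000   # including Android, per the article
android_devices = linux_kernel_total - desktop_linux
print(android_devices)             # 357800000 Android devices implied
```

So the Android figure implies roughly ten times more Linux-kernel devices browsing the web than desktop GNU/Linux machines.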

Finally, we might say that France could be among the biggest users of Linux in the world.

See you soon

Useful links:
- Bundled sales
- Linux and France
- GNU/Linux around the world
- Windows 10 (downloads)
HandyLinux - the Debian distribution without the headache...

02 August, 2015 10:14AM by fibi

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian 2015 in the domestic media

The clips show the most significant domestic media outlets that published articles about the Serbian operating system. Their articles were also republished on several local portals, such as those of Vranje, Požarevac, Prnjavor and other towns of Serbia and Srpska. Thanks go to all the media outlets and individuals, without exception, who contributed in different ways to promoting the operating system.

  • Telegraf
  • Mondo
  • Србин.инфо
  • Срби на Окуп
  • Магацин
  • IT-Modul
  • Ћирилица Београд
  • Фронтал.рс


02 August, 2015 07:17AM by Debian Srbija

hackergotchi for Semplice


Semplice 7.0.1 bugfix release

It's my pleasure to announce the immediate release of the first bugfix release of Semplice 7.

It was discovered that some machines won't boot with the kernel version shipped in Semplice 7.
The issue has now been addressed and corrected in kernel version 3.19-3.semplice.1, which ships in this bugfix release.
This means that if you had issues booting Semplice 7 on your machine, these new ISO images should work properly.

For details on Semplice 7, refer to the release notes.

Upgrading from Semplice 7

Just do a normal upgrade (Settings → Device details → Check for updates) to get Semplice 7.0.1 and the updated packages.

Keep rockin'!

02 August, 2015 03:46AM

Semplice 7-preview released

It's my pleasure to announce the immediate release of Semplice 7-preview, codenamed "Comfortably Numb".

After long delays on the Semplice 7 cycle, we are now pretty confident to make a preview release available to the general public.

Although this release comes pretty late in the development cycle, and is therefore pretty feature-packed and similar to the upcoming final version, there are still some rough edges that we need to polish. You can read more about those in the "Known issues" section of the release notes.

This preview snapshot ships with the latest sid as of yesterday (2014-11-27) and Linux kernel 3.17.2.

New features include our new vera desktop environment (with a new control center), pulseaudio enabled by default, chromium replaced by iceweasel, a desktop launcher, an interactive tutorial and many more things.
The release notes explain in detail what's changed.

As always, you can get Semplice 7-preview from our download page.

Keep rockin'!

02 August, 2015 03:46AM

Semplice 6 released!

ITALY - It's our pleasure to announce the release of Semplice 6, codenamed "Stairway to Heaven".

This release brings many important performance-related changes, such as systemd as the default init system, a new desktop-optimized kernel and compressed memory. Also, we have rewritten our menu builder, alan. With this release, the central part of the system, the menu, is faster than ever.

Other noteworthy changes are support for window snapping and an easy-to-use tool to add launchers to the panel.

As always, you can learn more by reading the change log.

The codename

You may notice that the codename of this release doesn't come from a Pink Floyd work. We selected a Led Zeppelin codename this time to say happy birthday to our team member Luca, who turns eighteen just today! (Lucky you, but I will get there later this year! :P)

So, happy birthday Luca!

The future

We are really proud of this release, and we believe many of you will enjoy the (even more) speed of the distribution. But, of course, we need to look at our future too. We have great things in mind, and we hope we'll be able to show them in Semplice 7.

In the meantime, enjoy the fastest Semplice yet.

Keep rockin'! On behalf of the entire Semplice Team (Eugenio, Giuseppe and Luca), Eugenio "g7" Paolantonio

02 August, 2015 03:46AM

Semplice 7 "Comfortably Numb" released

Every revolution starts with a small step. Semplice 7 is ours.
Since our first release 5 years ago, our goal has been to create an innovative and lightweight operating system, not another "me-too" Openbox-based distribution like the many showing up lately.
Admittedly, we have been good at being an Openbox-based distribution. Semplice 6 was our peak, and I'm truly proud of it.
But it's time to move on.

It's my pleasure to announce the release of Semplice 7, codenamed "Comfortably Numb".

Functionally and aesthetically wise, you won't find that many differences with Semplice 6. But under-the-hood there are plenty.
Openbox became a component of the Desktop Environment, rather than the Desktop Environment itself.
This distinction will help me introduce vera, a plugin-based GTK+3 Desktop Environment, made from scratch by us.
Currently Openbox and tint2 run as plugins, but they will eventually be replaced by our own.
We are not fans of NIH, but personally I feel that this is the right step to take in order to get things up and running on Wayland.

Wayland is the future and it's actually already production-ready (I, for example, already run it on my phone, and it's pretty damn exciting).
We can't go on with GTK+2 and Xlib, and switching to GTK+3 is a time-consuming but necessary step.

This is a long road, and with Semplice 7 we are just at the beginning of it.
That doesn't mean that the release I'm announcing today is not stable or whatever. Just keep in mind that it's the good ol' Semplice, and do not expect too much (for now) from the new Desktop Environment.

That said, the changes in this release are indeed huge and this blog post is probably not the right place to list them all.
You can have a look at the release notes here.

For a glimpse of what you can find in the iso image you're downloading right now (because you are, aren't you?): Semplice 7 brings a brand new Control Center, a new desktop launcher, a new power manager, a new screenshot applet, a new music player (pragha), a new music control menu extension that supports every MPRIS2-based music player (yes, you can control Spotify with it), new artwork and many other (new :P) things.
And the best thing? You change the wallpaper and the system adapts the theme color to it. It's truly amazing!

This is our greatest release yet, but hopefully Semplice 8 will be even better.

After 15 intense months, development will slow down a bit, at least for the remainder of the first half of this year, as I have high school exams to take.
There are great things planned though and I can't wait to show them all!

In the meantime, get Semplice 7 now!

As a side note, I'm toying with the idea of a Debian-stable based Semplice. This of course is not meant as a replacement for the Debian sid variant, but just as an alternative to the awesome CrunchBang, which unfortunately is no longer maintained. Given that Debian Jessie is really similar to sid these days, a stable variant would not be hard to make. What do you guys think? Please give me feedback on the idea!

Rock on!

02 August, 2015 03:46AM

Semplice 5 "High Hopes" released!

ITALY - It's our pleasure to announce the immediate release of the fifth stable release of Semplice Linux.

Changes? Are there any changes or you just kept drinking?

We haven't just spent nights drinking. We changed a lot of things and fixed many nasty bugs. For example, we added UEFI, LVM and encrypted LVM support to our even more awesome installer. So, even if the NSA comes to your home, they can't retrieve your important personal data. And you can easily get to your favourite web applications via our new WebKit2-based web application viewer, oneslip. By default we include links to Twitter, Facebook, YouTube and a beautiful Tetris game. Also, you can now further customize the features of your Semplice box.

Other changes are listed in the changelog.

Can I upgrade from Semplice 4?

Obviously! Just do your normal dist-upgrade.


The Semplice team -- Eugenio, Luca and Giuseppe.

02 August, 2015 03:46AM

Semplice 5.1 released!

ITALY - It's our pleasure to announce the first bugfix release for Semplice 5.

What has been changed?

As it is simply a point release, we only fixed some bugs discovered after the release of Semplice 5.

Semplice 5 users will need only a regular dist-upgrade to upgrade to 5.1.
Installation from Semplice 5 discs will continue to work as always.
Ensure you keep the installation program updated to get the latest fixes.

You can read the full release notes here.

Why this version number?

We really wanted to call it Semplice 5S¹ - as there aren't really any new features - but we aren't such bastards.

Keep rockin',
Eugenio "g7" Paolantonio on behalf of the entire Semplice Team.

[1] Every reference to that popular fruit company is purely coincidental.

02 August, 2015 03:46AM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Mengatasi Masalah Nomor Header pada LibreOffice

Earlier today, while I was travelling from Ungaran to Semarang on the BRT (Trans Semarang), my Chinese-made phone suddenly showed a notification from one of my Facebook groups. It turned out to be from the LibreOffice Indonesia Facebook group, with the message shown below.

Honestly, I have run into that problem myself; after tinkering here and there I finally managed to solve it. Unfortunately, at the time I did not have much time to document it because I had a lot of work.

Below is a video tutorial I made on how to solve the problem. I chose to make a video because I am currently too lazy to write something long.

That is the end of this short video tutorial; I hope it is useful for all of us.

02 August, 2015 03:25AM by Istana Media (

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: Understanding Hydroplane Races for the New Seattleite

It’s Seafair weekend in Seattle. As always, the centerpiece is the H1 Unlimited hydroplane races on Lake Washington.

In my social circle, I'm nearly the only person I know who grew up in the area. None of the newcomers I know had heard of hydroplane racing before moving to Seattle. Even after I explain it to them — i.e., boats with 3,000+ horsepower airplane engines that fly just above the water at more than 320kph (200mph), leaving 10m+ (30ft) wakes behind them! — most people seem more puzzled than interested.

I grew up near the shore of Lake Washington and could see (and hear!) the races from my house. I don’t follow hydroplane racing throughout the year but I do enjoy watching the races at Seafair. Here’s my attempt to explain and make the case for the races to new Seattleites.

Before Microsoft, Amazon, Starbucks, etc., there were basically three major Seattle industries: (1) logging and lumber based industries like paper manufacturing; (2) maritime industries like fishing, shipbuilding, shipping, and the navy; (3) aerospace (i.e., Boeing). Vintage hydroplane racing represented the Seattle trifecta: Wooden boats with airplane engines!

The wooden U-60 Miss Thriftway, circa 1955 (Thriftway is a Washington-based supermarket that nobody outside the state has heard of), pictured below, is old-Seattle awesomeness. Modern hydroplanes are now made of fiberglass, but two out of three isn't bad.

Although the boats are racing this year in events in Indiana, San Diego, and Detroit in addition to the two races in Washington, hydroplane racing retains deep ties to the region. Most of the drivers are from the Seattle area. Many or most of the teams and boats are based in Washington throughout the year. Many of the sponsors are unknown outside of the state. This parochialness itself cultivates a certain kind of appeal among locals.

In addition to the old-Seattle/new-Seattle cultural divide, there's a class divide that I think is also worth challenging. Although the demographics of hydro-racing fans are surprisingly broad, it can seem like Formula One or NASCAR on the water. It seems safe to suggest that many of the demographic groups moving to Seattle for jobs in the tech industry are not big into motorsports. Although I'm no follower of motorsports in general, I've written before about cultivated disinterest in professional sports, and it remains something that I believe is worth taking on.

It’s not all great. In particular, the close relationship between Seafair and the military makes me very uneasy. That said, even with the military-heavy airshow, I enjoy the way that Seafair weekend provides a little pocket of old-Seattle that remains effectively unchanged from when I was a kid. I’d encourage others to enjoy it as well!

02 August, 2015 02:45AM

Ubuntu GNOME: Help needed to test 14.04.3


Time for Trusty Tahr, yet again :)

As you know, Ubuntu GNOME 14.04 was our first LTS release, and thus it has point releases. Ubuntu GNOME 14.04.1 and 14.04.2 have already been released. Now it is time for 14.04.3, due on the 6th of August, 2015.

This is a call for help to test the daily builds of Ubuntu GNOME Trusty Tahr to make sure 14.04.3 will be, just like our previous releases, solid as a rock.

Please, make sure to use the ISO Tracker:

If you are NEW to the testing process, that's not a problem at all. This page, which has been re-written to be easier and better, will help you get started 😉

Please, help us and test the daily builds of Ubuntu GNOME Trusty Tahr. If you need any help or have any question, don’t hesitate to contact us:

As always, your endless help and continuous support are highly appreciated :)

Happy Testing!

02 August, 2015 02:21AM

August 01, 2015

Dustin Kirkland: Appellation of Origin: FROM ubuntu

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.

We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....

So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu-based Docker container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. In doing so, that bottle should earn your confidence that it was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.

Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you either avoid that situation entirely, or at least do everyone a service by letting us help you do it well.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,

The one and only Champagne region of France

01 August, 2015 04:19PM by Dustin Kirkland



Ubuntu developers

Ubuntu Podcast from the UK LoCo: S08E21 – United Passions

01 August, 2015 10:30AM

July 31, 2015


Blankon developers

Herpiko Dwi Aguno: Tektok Gede Pangrango

I once thought about giving up mountain climbing, so I gave my carrier pack away to a friend and rolled my tent up tight. But that turned out to be empty talk.

Hello again,

This was my first time:

  • Climbing Mount Gede Pangrango
  • Meeting the pedal pushers of KPCI in person (until now I had only read their blogs)
  • Climbing in a team with a woman

This is about mountains. I have only ever climbed Mount Rinjani and Mount Tambora, so my point of view comes from that limited experience. These notes were written two months after the climb, so my memory of some details is hazy, for instance how many hours it took from point A to point B.

I had been in Bogor for less than two months; the capital’s satellite city rains far too often and its angkot (minibus) route numbers are confusing. This is all still connected to GnomeAsia2015, which was held at UI, Depok. Mas Yanuar Arafat attended the event and immediately blurted out an invitation to climb Gede Pangrango right after GnomeAsia2015 ended. It took me a while to decide whether to say yes, given some unfinished work, but in the end I went.

I packed in a hurry and accompanied Mas Yan here and there shopping for supplies. I had never prepared for a climb with a shopping list this complicated. So much stuff, must have this and must have that. It was surely going to make great food up there!

Dropped off by Mas Yan’s relative, we arrived in an area called Seuseupan, which reminded me of the "Bakso Seuseupan" food stall near Btech. We were welcomed by Mang Koboi, a cycling and outdoors enthusiast, who also helped us sort out the climbing permits. I had not expected the TNGGP rules to be this strict. On Rinjani and Tambora you could sneak in without reporting (it may be different now). The permit was printed on a special form on a dot-matrix printer, a sign that this was fairly serious business. Later on I understood why.

Mang Koboi’s house is a simple old-style home, surrounded by a great many plants and filled with old solid-wood furniture. This. This was the first time I ever felt the urge to own a house (at least, one of this kind and this simple) and start a family. Here I was introduced to Mang Ogi and Mang Mandoe, who live near Mount Salak and would be joining us. Now there were four of us. Mang Koboi lent us some very useful climbing gear and equipment (thank you!). The illuminAid lamp in particular was really cool. Mas Yan said there should have been three more people, but one was injured (Mbak Ocid) and two would follow later (Mang Cliff Damora and Mbak Dian). After downing some coffee, that same night we left Seuseupan for the village of Putri, one of the entrances to TNGGP.

Arriving at the entrance post, we rested; we had to be ready to set off the next morning or midday. When I woke up the next day I was already shivering and felt extraordinarily lazy, waiting for the sun to climb. Then Mbak Dian arrived to join us. Now there were five of us.

Before going on, let me lay out my mistakes / violations of the TNGGP rules:

  • I did not bring a jacket
  • I did not wear shoes

Luckily, Tante Dian lent me a jacket. Actually "lent" is not a good word here. Every climber ought to prepare thoroughly for themselves.

Personally, I prefer to climb mountains in sandal-shoes. They are lighter and simpler, and fun to run in. I was used to climbing in sandal-shoes on Rinjani and Tambora. But Gede Pangrango is a very different mountain in several respects. It is short, very cold and wet. In short, an important lesson: wear shoes, especially hiking shoes.

By the way, those two violations are actually serious matters. Mount Gede Pangrango is one of the mountains that has claimed many victims through climbers’ foolishness. So, forgive this foolishness of mine. :D

We started climbing at around 10 o’clock, and I began to accept the truth of what friends had told me beforehand: the Gede Pangrango trail is pretty much uphill all the way. But it was fun!

In the late afternoon we arrived at Surya Kencana, a small valley between a row of hills and Mount Gede. The left side of Surya Kencana (from the direction I came) is covered in Edelweiss shrubs. Surya Kencana is a beautiful valley, but there is something even more beautiful up there.

Speaking of Edelweiss again, the rules here are strict too. I once picked a few stems of Edelweiss flowers myself. At the time I did not know why this should not be done. On my first and second climbs I was still picking them. On my later climbs I began to understand why. And yet I still managed to pick some unidentifiable fruit (which turned out to be inedible) just before reaching Surya Kencana.

Because I have committed this crime myself, and once did not know or was too ignorant to know the reason, I don’t really agree with bullying those who break rules like this, as so often happens on social media. Better to approach them and correct them gently. Hellooo. This flower picking still happens a lot, you know, including while I was at the Suryakencana meadow.

Moving on.

At Surya Kencana, Mas Yan and I (after he had earlier been beaten by a Korean tourist) bet on a hand game: the loser carries the winner’s pack for 5 minutes. We came out even, each losing once. The most foolish bet I have ever taken. Carrying those packs was exhausting. We spent the night here, beside a small natural stream whose water was ice cold and drinkable straight from the source. That afternoon we pitched the tents, and at night I froze like a weakling. All I could do was curl up and shiver. The night was too cold for me, so I did not help my friends prepare the food. As I write this, I still struggle to remember what I ate that night. Warm and tasty. I did not sleep very well because it was so cold.

In the morning we were woken by nature. Mang Ogi went back to sleep because he had not slept enough, and the other tent had been soaked through by dew. I was starting to adapt to the cold.

We packed up rather late in the morning, then moved on to the Gede summit. The trail was unremarkable, and the summit of Gede did not impress me. The summit forms a cliff, as if half of the mountain had been sliced off and collapsed. It is not very safe here because of that cliff. You can still smell the sulfur. We took a few photos here before heading down to Kandang Badak.

The trail down to Kandang Badak is not very clear; it splits into several paths that eventually merge (our group got separated for a while). After some time, we arrived at Kandang Badak.

So crowded! And (like Rinjani) rather dirty. We cleared a patch of ground where we had planned to pitch our tents, but since we were not satisfied (it was really dirty), we moved again. One thing that surprised me at Kandang Badak: the toilets were decent and fully functional.

Then Mang Cliff Damora arrived. Tall and skinny, tired and enthusiastic. Mang Cliff had been looking for our group and had already gone up Pangrango, yet ended up finding us here. After a late-afternoon meal (or was it dinner?) and some chatting, the idea came up to climb the Pangrango summit that very night. This was actually not a great idea, because the weather was unconvincing: overcast and rumbling with thunder, and Tante Dian muttered her disapproval. But I was itching to climb again. So off we went: Mas Yan, Mang Cliff, Mang Ogi and me. Bu Dian and Mang Mandoe stayed to watch the tents. We brought our own tent.

Halfway to the summit, it suddenly started to rain. It was the first time I had a mountain climb interrupted by rain. But Mang Cliff’s preparation was seriously impressive. After he swiftly rigged up a "whatever it’s called", we took shelter and snacked a little, waiting for the rain to ease. Once it did, we carried on.

We reached the summit and then...

Bless this weak memory of mine: from here on I don’t remember things very well. We camped at the edge of the trees. If I’m not mistaken, the next morning we had an extremely salty breakfast, because Mang Cliff mixed a whole block of cheese into it. Hahahaha.

Here I spent about 2 hours just staring into space at Mandalawangi. Travelling all that way just to stare into space? Yes, that was in fact "the goal". This is one of the few places I have marked as "must come back here, must".

Then we descended. Met up again with Mbak Dian and Mang Mandoye. Continued down towards Cibodas. My (shoeless) foot plunged into a hot spring. Passed some pretty girls (I didn’t greet them, mind you). Tante Dian was starting to look unwell. A dark, rocky path. Aaand finally we reached Cibodas. Parted ways with Teteh Dian, then had dinner. Then...

I can’t remember any more. Anyway, that is roughly how the trip went. Until we meet again, O Gede Pangrango.


31 July, 2015 07:03PM


Ubuntu developers

Ronnie Tucker: FCM#100-1 is OUT!

Full Circle – the independent magazine for the Ubuntu Linux community – is proud to announce the release of our ninety-ninth issue.

This month:
* Command & Conquer
* How-To : LaTeX, LibreOffice, and Programming JavaScript
* Graphics : Inkscape.
* Chrome Cult
* Linux Labs: Customizing GRUB
* Ubuntu Phones
* Review: Meizu MX4 and BQ Aquaris E5
* Book Review: How Linux Works
* Ubuntu Games: Brutal Doom, and Dreamfall Chapters
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!

We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.

AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.

31 July, 2015 06:53PM

Jonathan Riddell: Kubuntu Paddleboard Club

I always say the best way to tour a city is from the water







31 July, 2015 03:30PM

Raphaël Hertzog: My Free Software Activities in July 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 15 hours on Debian LTS. In that time I did the following:

  • Finished the work on the Distro Tracker to make it display detailed security status on each supported release (example).
  • Prepared and released DLA-261-2 fixing a regression in the aptdaemon security update (happening only when you have python 2.5 installed).
  • Prepared and released DLA-272-1 fixing 3 CVEs in python-django.
  • Prepared and released DLA-286-1 fixing 1 CVE in squid3. The patch was rather hard to backport. Thankfully upstream was very helpful: he reviewed and tested my patch.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 19 commits to the security tracker.

Kali Linux / Debian Stretch work

Kali Linux wants to experiment with something close to Debian Constantly Usable Testing: we have a kali-rolling release that is based on Debian Testing and we want to take a new snapshot every 4 months (in order to have 3 releases per year).

More specifically we have a kali-dev repository which is exactly Debian Stretch + our own Kali packages (the Kali packages take precedence), updated 4 times a day, just like testing is. And we have a britney2 setup that generates kali-rolling out of kali-dev (without any requirement in terms of delay/RC bugs, it just ensures that dependencies are not broken), also 4 times a day.

We have Jenkins jobs that ensure that our metapackages are installable in kali-dev (and kali-rolling) and that we can build our ISO images. When things break, I have to fix them, and I try to fix them on the Debian side first. So here are some examples of things I did in response to various failures:

  • Reported #791588 on texinfo. It was missing a versioned dependency on tex-common and migrated too early. The package was uninstallable in testing for a few days.
  • Reported #791591 on pinba-engine-mysql-5.5: package was uninstallable (had to be rebuilt). It appeared on output files of our britney instance.
  • I made a non-maintainer upload (NMU) of chkrootkit to fix two RC bugs so that the package can go back to testing. The package is installed by our metapackages.
  • Reported #791647: debtags no longer supports “debtags update –local” (a feature that went away but that is used by Kali).
  • I made a NMU of debtags to fix a release critical bug (#791561 debtags: Missing dependency on python3-apt and python3-debian). kali-debtags was uninstallable because it calls debtags in its postinst.
  • Reported #791874 on python-guess-language: Please add a python 2 library package. We have that package in Kali and when I tried to sync it from Debian I broke something else in Kali which depends on the Python 2 version of the package.
  • I made a NMU of tcpick to fix a build failure with GCC5 so that the package could go back to testing (it’s part of our metapackages).
  • I requested a bin-NMU of jemalloc and a give-back of hiredis on powerpc in #792246 to fix #788591 (hiredis build failure on powerpc). I also downgraded the severity of #784768 to important so that the package could go back to testing. Hiredis is a dependency of OpenVAS and we need the package in testing.

If you analyze this list, you will see that a large part of the issues we had come down to packages getting removed from testing due to RC bugs. We should be able to anticipate those issues and monitor the packages that have an impact on Kali. We will probably add a new Jenkins job that installs all the metapackages and then runs how-can-i-help -s testing-autorm --old… I just submitted #794238 as a wishlist bug against how-can-i-help.

At the same time, there are bugs that make it into testing that I fix or work around on the Kali side. But those fixes / workarounds might be more useful if they were pushed to testing via testing-proposed-updates. I tried to see whether other derivatives had similar needs, to see if derivatives could join their efforts at this level, but it does not look like it for now.

Last but not least, bugs reported on the Kali side also resulted in Debian improvements:

  • I reported #793360 on apt: APT::Never-MarkAuto-Sections not working as advertised. And I submitted a patch.
  • I orphaned dnswalk and made a QA upload to fix its only bug.
  • We wanted a newer version of the nvidia drivers. I filed #793079 requesting the new upstream release and the maintainer quickly uploaded it to experimental. I imported it on the Kali side but discovered that it was not working on i386 so I submitted #793160 with a patch.
  • I noticed that Kali build daemons tend to accumulate many /dev/shm mounts and tracked this down to schroot. I reported it as #793081.

Other Debian work

Sponsorship. I sponsored multiple packages for Daniel Stender who is packaging prospector, a piece of software that I requested earlier (through an RFP bug). So I reviewed and uploaded python-requirements-detector, python-setoptconf, pylint-celery and pylint-common. During a review I also discovered a nice bug in dh-python (#793609: a comment in the middle of a Build-Depends could break a package). I also sponsored an upload of notmuch-addrlookup (a new package requested by a Freexian customer).

Packaging. I uploaded python-django 1.7.9 in unstable and 1.8.3 in experimental to fix security issues. I uploaded a new upstream release of ditaa through a non-maintainer upload (again at the request of a Freexian customer).

Distro Tracker. Besides the work to integrate detailed security status, I fixed the code to be compatible with Django 1.8 and modified the tox configuration to ensure that the test suite is regularly run against Django 1.8. I also merged multiple patches from Christophe Siraut (cf #784151 and #754413).


See you next month for a new summary of my activities.


31 July, 2015 02:45PM


Xanadu developers

Happy Sysadmin Day

System Administrator Appreciation Day (abbreviated SAAD or SAD) is a celebration that recognizes the work of computer system administrators (sysadmins).

The holiday was established by a system administrator nicknamed "sleeping beauty". He found his inspiration in a Hewlett-Packard advertisement in a magazine, in which the coworkers of a system administrator gave him flowers and a fruit basket for having installed their new printers, plus a pillow to rest on in his spare moments.

It has been celebrated on the last Friday of July since the year 2000:

  • 2011: 29 July
  • 2012: 27 July
  • 2013: 26 July
  • 2014: 25 July
  • 2015: 31 July
  • 2016: 29 July
  • 2017: 28 July

Filed under: Geekstuff Tagged: festividades, sysadmin

31 July, 2015 02:00PM by sinfallas


Ubuntu developers

Eric Hammond: AWS SNS Outage: Effects On The Unreliable Town Clock

It took a while, but the Unreliable Town Clock finally lived up to its name. Surprisingly, the fault was not mine, but Amazon’s.

For several hours tonight, a number of AWS services in us-east-1, including SNS, experienced elevated error rates according to the AWS status page.

Successful, timely chimes were broadcast through the Unreliable Town Clock public SNS topic up to and including:

2015-07-31 05:00 UTC

and successful chimes resumed again at:

2015-07-31 08:00 UTC

Chimes in between were mostly unpublished, though SNS appears to have delivered a few chimes during that period up to several hours late and out of order.

I had set up Unreliable Town Clock monitoring and alerting. This worked perfectly, and I was notified within 1 minute of the first missed chime, though it turned out there was nothing I could do but wait for AWS to correct the underlying issue with SNS.

Since we now know SNS has the potential to fail in a region, I have launched an Unreliable Town Clock public SNS Topic in a second region: us-west-2. The infrastructure in each region is entirely independent.

The public SNS topic ARNs for both regions are listed at the top of this page:

You are welcome to subscribe to the public SNS topics in both regions to improve the reliability of invoking your scheduled functionality.

The SNS message content will indicate which region is generating the chime.
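Subscribing in both regions means most chimes will be delivered twice, so a consumer that must act exactly once per chime needs to deduplicate. Here is a minimal sketch; the line format is an assumption for illustration, not the actual Unreliable Town Clock payload. It keys on the timestamp, which both regions' chimes share:

```shell
#!/bin/sh
# Deduplicate chime lines by their timestamp (first field), keeping the
# first copy seen regardless of which region delivered it.
dedupe_chimes() {
  awk '!seen[$1]++'
}

printf '%s\n' \
  '2015-07-31T09:00Z us-east-1 chime' \
  '2015-07-31T09:00Z us-west-2 chime' \
  '2015-07-31T09:15Z us-west-2 chime' \
| dedupe_chimes
```

Running the snippet prints the 09:00 chime once (the us-east-1 copy arrived first) followed by the 09:15 chime.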

Original article and comments:

31 July, 2015 09:55AM

Benjamin Kerensa: Unnecessary Finger Pointing

I just wanted to pen quickly that I found Chris Beard’s open letter to Satya Nadella (CEO of Microsoft) to be a bit hypocritical. In the letter he said:

“I am writing to you about a very disturbing aspect of Windows 10. Specifically, that the update experience appears to have been designed to throw away the choice your customers have made about the Internet experience they want, and replace it with the Internet experience Microsoft wants them to have.”

Right, but what about the experiences that Mozilla chooses as defaults for users, like switching to Yahoo and making it the default search engine upon upgrade without respecting users’ previous settings? What about baking Pocket and Tiles into the experience? Did users want these features? All I have seen is opposition to them.

“When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue. Unfortunately, it didn’t result in any meaningful progress, hence this letter.”

Again see above and think about the past year or two where Mozilla has overridden existing user preferences in Firefox. The big difference here is Mozilla calls it acting on behalf of the user as its agent, but when Microsoft does the same it is taking away choice?

Set Firefox as Windows 10 default: clearly not that difficult

Anyways, I could go on, but the gist is that the letter is hypocritical and unnecessary finger pointing. Let’s focus on making great products for our users, and technical changes like this to Windows won’t be a barrier to users picking Firefox. Sorry that I cannot be a Mozillian who will blindly retweet you and support a misguided social media campaign to point fingers at Microsoft.

Read the entire letter here:

31 July, 2015 06:39AM


Blankon developers

Sokhibi: Drawing a Logo Easily with Inkscape

These days I have grown rather tired of writing, both blog posts and books, but deep down I still want to share knowledge. One way to share knowledge is through video, especially video tutorials.

In this post I have tried to make a simple video tutorial on drawing a logo the easy way; the example in this video is the Ubuntu logo. I had actually intended to make a video tutorial on drawing the BlankOn Linux logo, but since that video would be too easy I dropped the idea.

In this video I use several features and tricks that are very easy to understand, built, of course, on simple concepts and formulas.

All the features, tricks and concepts in this video are already covered in the book Desain Grafis dengan Inkscape that I published, so if you already own the book you will understand it right away.

Here are some of the features, tools and concepts used to draw the Ubuntu logo:
  1. Circle Tool, used to draw circles.
  2. Rectangle Tool, used to draw long rectangular bars.
  3. The Create Tiled Clones feature, used to duplicate objects with the desired spacing and size.
  4. Guides, used for snapping objects to one another.
  5. Align & Distribute, used to line objects up neatly with each other.
  6. Difference, used to cut one object out of another.
  7. etc.
In addition, I used a calculator application to work out how to divide the circle into degrees.
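As a small illustration of that calculator step (my own example, not taken from the video): the Ubuntu circle-of-friends motif repeats three elements around a circle, so each tiled clone is rotated by 360 / 3 = 120 degrees:

```shell
#!/bin/sh
# Divide a full 360-degree circle evenly among a number of elements,
# giving the rotation angle for each tiled clone.
elements=3
step=$((360 / elements))
i=0
while [ "$i" -lt "$elements" ]; do
  echo "clone $i: rotate $((i * step)) degrees"
  i=$((i + 1))
done
```

This prints rotation angles of 0, 120 and 240 degrees, which is what you would feed to Create Tiled Clones for a three-element motif.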

Please enjoy the video below.

In this video tutorial I deliberately did not include how to colour the different parts of the Ubuntu logo; feel free to tinker on your own to perfect the logo.

31 July, 2015 02:19AM by Istana Media


Ubuntu developers

Kubuntu Wire: Plasma Mobile Reference Images by Kubuntu

We launched Plasma Mobile at KDE’s Akademy conference, a free, open and community made mobile platform.

Kubuntu has made some reference images which can be installed on a Nexus 5 phone.

More information is on the Plasma Mobile wiki pages.

Reporting includes:

31 July, 2015 01:32AM

July 30, 2015

Ayrton Araujo: Ubuntu shell overpowered

In order to be more productive in my environment, as a command-line-centric guy, I started using zsh as my default shell three years ago. For those who have never tried it, I would like to share my personal thoughts.

What are the main advantages?

  • Extended globbing: For example, *(.) matches only regular files, not directories, whereas a*z(/) matches directories whose names start with a and end with z. There are a bunch of other things;
  • Inline glob expansion: For example, type rm *.pdf and then hit tab. The glob *.pdf will expand inline into the list of .pdf files, which means you can change the result of the expansion, perhaps by removing from the command the name of one particular file you don’t want to rm;
  • Interactive path expansion: Type cd /u/l/b and hit tab. If there is only one existing path each of whose components starts with the specified letters (that is, if only one path matches /u/l/b*), then it expands in place. If there are two, say /usr/local/bin and /usr/libexec/bootlog.d, then it expands to /usr/l/b and places the cursor after the l. Type o, hit tab again, and you get /usr/local/bin;
  • Nice prompt configuration options: For example, my prompt is currently displayed as tov@zyzzx:/..cts/research/alms/talk. I prefer to see a suffix of my current working directory rather than have a really long prompt, so I have zsh abbreviate that portion of my prompt at a maximum length.
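As an illustration of that last point, zsh's prompt escapes can truncate the directory segment in exactly that way. This is a minimal hypothetical ~/.zshrc fragment, not the author's actual configuration:

```shell
# Hypothetical ~/.zshrc sketch (not the author's real config).
# %n = username, %m = hostname, %~ = cwd with ~ abbreviation.
# %25<..< truncates the text that follows to 25 characters, keeping the
# right-hand end and prefixing "..", which yields prompts like
# tov@zyzzx:/..cts/research/alms/talk
PROMPT='%n@%m:%25<..<%~%<< %# '
```

The %<< ends the truncation scope, so only the directory portion is shortened while the rest of the prompt stays intact.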


The Z shell is mainly praised for its interactive use: the prompts are more versatile, the completion is more customizable and often faster than bash-completion, and it is easy to write plugins. One of my favorite integrations is with git, for better visibility of the current repository status.

As it focuses on interactive use, it is a good idea to keep writing your shell scripts starting with #!/bin/bash for interoperability reasons. Bash is still the most mature and stable choice for shell scripting, in my view.

So, how to install and set up?

sudo apt-get install zsh zsh-lovers -y

zsh-lovers will provide to you a bunch of examples to help you understand better ways to use your shell.

To set zsh as the default shell for your user:

chsh -s /bin/zsh

Don’t try to set zsh as the default shell for the whole system, or some things may stop working.

Two friends of mine, Yuri Albuquerque and Demetrius Albuquerque (brothers of a former hacker family =x), also recommended using oh-my-zsh. Thanks for the tip.

How to install oh-my-zsh as a normal user?

curl -L | sh

My $ZSH_THEME is set to “bureau” under my $HOME/.zshrc. You can try “random” or other themes located inside $HOME/.oh-my-zsh/themes.

And, if you use Ruby under RVM, I also recommend reading this:

Happy hacking :-)

30 July, 2015 11:53PM

Chris J Arges: linux make deb-pkg speedup

Because I've run make deb-pkg so many times, I've started to see exactly where it slows down, even on really large machines. Observing CPU usage, I noticed that many parts of the build were serialized on a single core. Upon further investigation I found the following.
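As a hedged sketch of the usual first step (my addition, not drawn from this post's findings): the compile phase of make deb-pkg parallelizes with -j, while the packaging steps at the end tend to run on a single core regardless:

```shell
#!/bin/sh
# Use one make job per available CPU core for the compile phase.
jobs=$(nproc)
echo "building with $jobs jobs"
# Inside a kernel source tree you would then run (not executed here):
#   make -j"$jobs" deb-pkg LOCALVERSION=-custom
```

LOCALVERSION is an optional, illustrative version suffix; the serialized tail end of the build is unaffected by -j.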

30 July, 2015 08:20PM

Lubuntu Blog: Lubuntu 15.10 alpha 2 released

The Alpha 2 of Lubuntu 15.10 is now released. Check out all about it at the wiki.

30 July, 2015 06:06PM by Rafael Laguna (

Kubuntu: Kubuntu Wily Alpha 2

The Second Alpha of Wily (to become 15.10) has now been released!

The Alpha-2 images can be downloaded from:

More information on Kubuntu Alpha-2 can be found here:

30 July, 2015 05:51PM

The Fridge: Wily Werewolf Alpha 2 Released

"I do not think there is any thrill that can go through the human heart like that felt by the inventor as he sees some creation of his brain unfolding to success… such emotions make a man forget food, sleep, friends, love, everything."
– Nikola Tesla

The second alpha of the Wily Werewolf (to become 15.10) has now been released!

This alpha features images for Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 2 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 2 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Alpha 2 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu
Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Kubuntu 15.10 Alpha 2 images can be downloaded from:

More information about Kubuntu 15.10 Alpha 2 can be found here:

Lubuntu
Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 15.10 Alpha 2 images can be downloaded from:

More information about Lubuntu 15.10 Alpha 2 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 15.10 Alpha 2 images can be downloaded from:

More information about Ubuntu MATE 15.10 Alpha 2 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 15.10 Alpha 2 images can be downloaded from:

More information about Ubuntu Kylin 15.10 Alpha 2 can be found here:

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

The Ubuntu Cloud 15.10 Alpha 2 images can be downloaded from:

Regular daily images for Ubuntu can be found at:

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-release mailing list on Thu Jul 30 17:03:26 UTC 2015 by Martin Wimpress on behalf of Ubuntu Release Team

30 July, 2015 05:18PM

Miley: Goodness Gracious Me

Here I was worrying attendance would be poor. Fool me!

Our first AfricanTeams meeting was a rip-roaring success, even though I struggled to keep up at times. Attendance peaked at 54. Special thanks to the CC and LC members who attended; you made my day. :D I can't write about the whole meeting because suggestions and ideas were flying to and fro so rapidly.
All I know is that the plan is working, and the last 2 teams will be included in the next meeting. As you will see at we have added a section where LUGs can now join our group. Being rather on the old side, I don't understand if there is a difference between them and us. Aren't we all just one big Linux family? So what if some prefer other Linux distros; betcha they have Ubuntu running somewhere. Personal thanks go out to everyone involved for making this whole project such a success. Meeting minutes can be seen at
Thank you, everyone.

30 July, 2015 04:07PM by Miles Sharpe (


Xanadu developers

Definitions of Darkweb and Deepweb


The Deep Web (also Hidden Web) is the informal name for a presumably very large portion of the Internet that is difficult, or has been made almost impossible, to trace. In other words, it is all the Internet content that is not part of the surface web, that is, the pages indexed by search engines.

This is due to the limitations networks have in reaching every website, for various reasons. Most of the information found on the Deep Web is buried in dynamically generated sites that traditional search engines struggle to find.


The DarkWeb, or Dark Web, is the public World Wide Web content that exists on darknets: overlay networks on top of the public Internet that require specific software, configurations or authorization to access. The darknets that make up the DarkWeb include small friend-to-friend and peer-to-peer networks, as well as large, popular networks such as Freenet, I2P and Tor, operated by public organizations and by individuals. DarkWeb users refer to the regular web as the Clearnet because of its unencrypted nature.

Filed under: Anonymous Tagged: darknet, deepweb, definicion

30 July, 2015 02:30PM by sinfallas


Ubuntu developers

Charles Butler: The power of components

While dogfooding my own work, I decided it was time to upgrade my distributed docker services to the shiny Kubernetes charms, now that 1.0 landed last week. I've been running my own "production" (I say in air quotes, because my 20 or so microservices aren't mission critical; if my RSS reader tanks, life will go on) services with some of the charm concepts I've posted about over the last 4 months. It's time to really flex the Kubernetes work we've done, fire up the latest and greatest, and start to really feel the burn of a long-running Kubernetes cluster as upgrades happen and unforeseen behaviors start to bubble up to the surface.


One of the things I knew right away is that our provided bundle was way overkill for what I wanted to do. I really only needed 2 nodes, and using colocation for the services I could attain this really easily. We spent a fair amount of time deliberating about how to encapsulate the topology of a Kubernetes cluster, and what that would look like with the mix-and-match components one could reasonably deploy with.

Node 1

  • ETCD (running solo, I like to live dangerously)
  • Kubernetes-Master

Node 2

  • Docker
  • Kubernetes Node (the artist formerly known as a minion)

Did you know: The Kubernetes project retired the minion title from their nodes and have re-labeled them as just 'node'?

Why this is super cool

I'm excited to say that our attention to requirements has made this ecosystem super simple to decompose and re-assemble in a manner that fits your needs. I'm even considering contributing a single-server bundle that stuffs all the component services onto one machine. This lowers the cost of entry even further for people looking to just kick the tires and get a feel for Kubernetes.

Right now our entire stack consumes a bare minimum of 4 units:

  • 1 ETCD node
  • 2 Docker/Kubernetes Nodes
  • 1 Kubernetes-Master node

This distributed system is more along the lines of what I would recommend starting your staging system with: scale etcd to 3 nodes for quorum and HA/failover, scale your Kubernetes nodes as required, and leave the Kubernetes master to handle only the API load of client interfacing and ecosystem management.

I'm willing to eat this compute space on my node, as I have a rather small deployment topology, and Kubernetes is fairly intelligent with placement of services once a host starts to reach capacity.

What does this look like in bundle format?

Note, I'm using my personal branch for the Docker charm, as it has a UFS filesystem fix that resolves some disk space concerns that hasn't quite landed in the Charm Store yet due to a rejected review. This will be updated to reflect the Store charm once that has landed.
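As a hypothetical sketch of the two-node layout described above (charm URLs, relation pairs and placement syntax here are assumptions for illustration, not the author's actual bundle):

```yaml
services:
  etcd:
    charm: cs:trusty/etcd          # running solo, as above
    num_units: 1
  kubernetes-master:
    charm: cs:trusty/kubernetes-master
    num_units: 1
    to: [etcd]                     # colocate with etcd on node 1
  docker:
    charm: cs:trusty/docker
    num_units: 1
  kubernetes:
    charm: cs:trusty/kubernetes
    num_units: 1
    to: [docker]                   # colocate with docker on node 2
relations:
  - [kubernetes-master, etcd]
  - [kubernetes, kubernetes-master]
  - [kubernetes, docker]
```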

Deploy Today

juju quickstart

Deploy Happy!

30 July, 2015 12:26PM

Jonathan Riddell: Akademy Day Trip


The GCC 5 Transition caused the apocalypse so we went out to see the world while it still existed


No Soy Líder, Ahora Soy El Capitán


See Hoarse


El Torre


We will climb this!


David reached the top


Fin de la terre!


30 July, 2015 10:07AM

Daniel Pocock: Free Real-time Communications (RTC) at DebConf15, Heidelberg

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend; it is aimed at a wider audience than the traditional DebConf agenda. The open weekend includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as more high-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered for It can successfully make federated SIP calls with users of other domains, including Fedora community members trying

You can use for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

30 July, 2015 09:23AM

Webconverger


Webconverger 31 release

Two months ago was our momentous Jessie-based Webconverger 30 release, and since then we've: for details

Please download from and follow the USB imaging guide to put it on a USB stick! The sha1sum is 4032c86eefdcbcb3486f7e00661d1f9920fffdf7.

What next?

We could really do with your feedback, help & support on a couple of goals for the next release:

If you can't code or have the time to, please purchase a subscription!

Other hardware news

Managed to integrate the EETI eGalaxTouch driver for some models of resistive touchscreens. It works well, as you can see in this touch screen demo video, and it's typically deployed in point of sale systems in retail. Unfortunately the driver is in a private branch, since the End User License is very restrictive and a good case study in how not to write a license. Please enquire to know more.

Recently obtained a ~130 USD Intel NUC NUC5CPYH and it works well as our single take NUC video shows. We have recommended this neat PC form factor before and thankfully they are getting better and cheaper.

30 July, 2015 06:59AM


Ubuntu developers

Benjamin Kerensa: Nóirín Plunkett: Remembering Them

Nóirín Plunkett & Benjamin Kerensa: Nóirín and I

Today I learned some of the worst kind of news: my friend and a valuable contributor to the great open source community, Nóirín Plunkett, passed away. They (their preferred pronoun, per their Twitter profile) were well regarded in the open source community for their contributions.

I had known them for about four years, having met them at OSCON and seen them regularly at other events. They were always great to have a discussion with and learn from, and they always had a smile on their face.

It is very sad to lose them as they demonstrated an unmatchable passion and dedication to open source and community and surely many of us will spend many days, weeks and months reflecting on the sadness of this loss.

Other posts about them:

30 July, 2015 03:01AM

July 29, 2015

Nicholas Skaggs: Opportunity: Automated Installer Testing

I wanted to share a unique opportunity to get involved with Ubuntu and testing. Last cycle, as part of a datacenter shuffle, the automated installer testing that was occurring for Ubuntu flavors stopped running. The images were being tested automatically via a series of autopilot tests, written originally by the community (thanks, Dan et al.!). These tests are vital in reducing the burden of manual testing required for images, by running through the base manual test cases for each image automatically each day.

When it was noticed the tests didn't run this cycle, wxl from Lubuntu accordingly filed an RT to discover what happened. Unfortunately, it seems the CI team within Canonical can no longer run these tests. The good news however is that we as a community can run them ourselves instead.

To start exploring the idea of self-hosting and running the tests, I initially asked Daniel Chapman to take a look. Given the impending landing of dekko in the default ubuntu image, Daniel certainly has his hands full. As such Daniel Kessel has offered to help out and begun some initial investigations into the tests and server needs. A big thanks to Daniel and Daniel!

But they need your help! The autopilot tests for ubiquity have a few bugs that need solving. And a server and jenkins need to be setup, installed, and maintained. Finally, we need to think about reporting these results to places like the isotracker. For more information, you can read more about how to run the tests locally to give you a better idea of how they work.

The needed skillsets are diverse. Are you interested in helping make flavors better? Do you have some technical skills in writing tests, the web, python, or running a jenkins server? Or perhaps you are willing to learn? If so, please get in touch!

29 July, 2015 08:26PM by Nicholas Skaggs (

Launchpad News: Improved filtering options for Gmail users

Users of some email clients, particularly Gmail, have long had a problem filtering mail from Launchpad effectively.  We put lots of useful information into our message headers so that heavy users of Launchpad can automatically filter email into different folders.  Unfortunately, Gmail and some other clients do not support filtering mail on arbitrary headers, only on message bodies and on certain pre-defined headers such as Subject.  Figuring out what to do about this has been tricky.  Space in the Subject line is at a premium – many clients will only show a certain number of characters at the start, and so inserting filtering tags at the start would crowd out other useful information, so we don’t want to do that; and in general we want to avoid burdening one group of users with workarounds for the benefit of another group because that doesn’t scale very well, so we had to approach this with some care.

As of our most recent code update, you’ll find a new setting on your “Change your personal details” page:

Screenshot of email configuration options

If you check “Include filtering information in email footers”, Launchpad will duplicate some information from message headers into the signature part (below the dash-dash-space line) of message bodies: any “X-Launchpad-Something: value” header will turn into a “Launchpad-Something: value” line in the footer.  Since it’s below the signature marker, it should be relatively unobtrusive, but is still searchable.  You can search or filter for these in Gmail by putting the key/value pair in double quotes, like this:

Screenshot of Gmail filter dialog with "Has new words" set to "Launchpad-Notification-Type: code-review"

At the moment this only works for emails related to Bazaar branches, Git repositories, merge proposals, and build failures.  We intend to extend this to a few other categories soon, particularly bug mail and package upload notifications.  If you particularly need this feature to work for some other category of email sent by Launchpad, please file a bug to let us know.
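As an aside, the footer lines are usable outside Gmail too: the lines below the signature marker can be recovered from a saved message with standard tools. A sketch, with an invented message for illustration:

```shell
# Save a toy Launchpad notification (footer key/value invented for the demo).
cat > /tmp/lp-mail.txt <<'EOF'
Subject: [Merge] example

Some notification text.
-- 
Launchpad-Notification-Type: code-review
EOF
# Everything from the "-- " signature marker down is the footer;
# match the key we want to filter on.
sed -n '/^-- *$/,$p' /tmp/lp-mail.txt | grep '^Launchpad-Notification-Type:'
# → Launchpad-Notification-Type: code-review
```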

29 July, 2015 04:43PM

Costales: Easily porting HTML5 mobile games to Ubuntu Phone (by Alan Pope)

This article is a translation of Alan Pope's post, available here in English.

I enjoy gaming on my phone and tablet, and I wanted to add a few more games to Ubuntu. With a little work it is easy to 'port' games to Ubuntu Phone. I put 'port' in quotes because in some cases it takes very little effort, so calling it 'porting' may make it sound like more work than it really is.

Update: Some users asked me why anyone would do this when they could simply create a bookmark in the browser. My apologies if I didn't make this clear: the big advantage is that the game is cached offline, which helps in many situations, for example when travelling or with poor Internet access. Of course, not every game can be fully offline; this tutorial won't help much with online games such as Clash of Clans. It will, however, be useful for many others. It also makes use of Ubuntu's application confinement, so the application/game has no access outside its own data directory.

I spent a few evenings and weekends on this with sturmflut, who also wrote up his experience in the article Panda Madness.

We had a lot of fun porting some games, and I want to share what we did to make things easier for other developers. I created a basic template on Github that can be used as a starting point, but I want to explain the process and the problems we ran into, so that others can port more apps and games.

If you have any questions, leave me a comment, or, if you prefer, you can also write to me privately.

Proof of concept

To demonstrate that we can easily port existing games, I licensed a couple of games from Code Canyon, a store where developers can sell their games and other developers can learn from them. I started with a little game called Don't Crash, an HTML5 game created with Construct 2. I could license more games, and there are other game stores as well, but this is just a good example to show the process.

Note: Scirra's Construct 2 is a Windows-only tool that is popular, powerful and fast for cross-platform development of HTML5 apps and games. It is used by many indie developers to create games that run in desktop browsers and on mobile devices. Construct 3, which will be more compatible and also available for Linux, is under development.

Before licensing Don't Crash I checked that it worked well on Ubuntu Phone using the demo on Code Canyon. Having verified that it worked, I paid and received the files with the Construct 2 'code'.

If you are a developer with your own games, you can skip this step, because you will already have the code to port.

Porting to Ubuntu

The minimum needed to port a game is a few text files and the directory containing the game code. Sometimes a couple of tweaks are needed for permissions and to lock rotation, but broadly speaking, It Just Works (TM).

I am using an Ubuntu computer for all the packaging and testing, but for this game I needed a Windows machine to export it from Construct 2. Requirements may vary, but if you don't have Ubuntu you can install it in a virtual machine such as VMWare or VirtualBox, and you only need to add the SDK as detailed in the

This is the entire content of the directory, with the game in the www/ folder:

alan@deep-thought:~/phablet/code/popey/licensed/html5_dontcrash⟫ ls -l
total 52
-rw-rw-r-- 1 alan alan   171 Jul 25 00:51 app.desktop
-rw-rw-r-- 1 alan alan   167 Jun  9 17:19 app.json
-rw-rw-r-- 1 alan alan 32826 May 19 19:01 icon.png
-rw-rw-r-- 1 alan alan   366 Jul 25 00:51 manifest.json
drwxrwxr-x 4 alan alan  4096 Jul 24 23:55 www

Creating the metadata


It contains the basic details about the application, such as the name, description, author, email and a few more things. Here is mine (in manifest.json) from the latest version of Don't Crash. The fields are self-explanatory, so replace each of them with your own application's details.

{
    "description":  "Don't Crash!",
    "framework":    "ubuntu-sdk-14.10-html",
    "hooks": {
        "dontcrash": {
            "apparmor": "app.json",
            "desktop":  "app.desktop"
        }
    },
    "maintainer":   "Alan Pope ",
    "name":         "dontcrash.popey",
    "title":        "Don't Crash!",
    "version":      "0.22"
}

Note: "popey" is my developer name in the store; you have to replace it with the name you use on your developer portal page.


Security profile

The app.json file details which permissions the application needs to run:

    "template": "ubuntu-webapp",
    "policy_groups": [
    "policy_version": 1.2

Desktop file

This defines how the application is launched, which icon is used, and a few other details:

[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html

Again, change the Name and Comment fields, and we're practically done.

Building the click package

With those files created, plus an icon.png icon, we build the .click package that we will upload to the store. This is the entire process:

alan@deep-thought:~/phablet/code/popey/licensed⟫ click build html5_dontcrash/
Now executing: click-review ./
./ pass
Successfully built package in './'.

On my laptop it builds in barely a second.

Note the command output: it runs click package validation checks at build time, making sure there are no errors that would get it rejected from the store.

Testing on an Ubuntu device

Testing the .click package on a phone is very easy. Copy the .click file from the Ubuntu PC over USB, using adb to install it:

adb push /tmp
adb shell
pkcon install-local --allow-untrusted /tmp/

Go to the applications scope and pull down to refresh, tap the icon and try the game.

Done! :)


Configuring the application

At this point I saw a few possible improvements for some of the games, which I'll outline here:

Loading files locally

Construct 2 warns that exported games won't work until you upload them, via a JavaScript popup ("When running on the file:/// protocol, browsers block many features from working for security reasons"). I deleted those checking lines of js from index.html, and the game works properly in our browser.

Device orientation

With the recent Ubuntu OTA update, device orientation is always enabled, which means some games can rotate and become unplayable. We can lock games to portrait or landscape mode via the .desktop file (created earlier) by simply adding this line:


Obviously change "portrait" to "landscape" if the game uses landscape mode. For Don't Crash I didn't do this, because the developer had rotation detection in the code, telling the player to rotate the device to the required position.

Twitter links

Some games had embedded Twitter links through which players could post their score. Unfortunately the mobile web version of Twitter doesn't support that, so there shouldn't be a link saying "Check out my score in Don't Crash". For now, I removed the Twitter links.


Our browser doesn't support local cookies, which some games use. For Heroine Dusk I switched the cookies to Local Storage.

Publishing in the store

Publishing .click packages in the Ubuntu store is quick and easy. Simply go to , sign in, click on "New Application" and follow the steps to upload the click package.


That's all! I'll keep publishing a few more games in the store. Improvements to the Github template are welcome.

Original article by Alan Pope. Translated by Marcos Costales.

29 July, 2015 04:07PM by Marcos Costales (


Xanadu developers

DEFCON 20: The documentary

A documentary about the world's biggest hacking conference, filmed in 2012 in Las Vegas, Nevada.

Link to the video on Youtube

Filed under: Documentaries Tagged: defcon

29 July, 2015 02:30PM by sinfallas


Ubuntu developers

Thierry Carrez: The Age of Foundations

At OSCON last week, Google announced the creation around Kubernetes of the Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his keynote to the (recently-renamed) Open Container Initiative, confirming the Linux Foundation's recent shift towards providing Foundations-as-a-Service. Foundations ended up being the talk of the show, with some questioning the need for Foundations for everything, and others discussing the rise of Foundations as tactical weapons.

Back to the basics

The main goal of open source foundations is to provide a neutral, level and open collaboration ground around one or several open source projects. That is what we call the upstream support goal. Projects are initially created by individuals or companies that own the original trademark and have power to change the governance model. That creates a tilted playing field: not all players are equal, and some of them can even change the rules in the middle of the game. As projects become more popular, that initial parentage becomes a blocker for other contributors or companies to participate. If your goal is to maximize adoption, contribution and mindshare, transferring the ownership of the project and its governance to a more neutral body is the natural next step. It removes barriers to contribution and truly enables open innovation.

Now, those foundations need basic funding, and a common way to achieve that is to accept corporate members. That leads to the secondary goal of open source foundations: serve as a marketing and business development engine for companies around a common goal. That is what we call the downstream support goal. Foundations work to build and promote a sane ecosystem around the open source project, by organizing local and global events or supporting initiatives to make it more usable: interoperability, training, certification, trademark licenses...

Not all Foundations are the same

At this point it's important to see that a foundation is not a label, the name doesn't come with any guarantee. All those foundations are actually very different, and you need to read the fine print to understand their goals or assess exactly how open they are.

On the upstream side, few of them actually let their open source project be completely run by their individual contributors, with elected leadership (one contributor = one vote, and anyone may contribute). That form of governance is the only one that ensures that a project is really open to individual contributors, and the only one that prevents forks due to contributors and project owners not having aligned goals. If you restrict leadership positions to appointed seats by corporate backers, you've created a closed pay-to-play collaboration, not an open collaboration ground. On the downstream side, not all of them accept individual members or give representation to smaller companies, beyond their founding members. Those details matter.

When we set up the OpenStack Foundation, we worked hard to make sure we created a solid, independent, open and meritocratic upstream side. That, in turn, enabled a pretty successful downstream side, set up to be inclusive of the diversity in our ecosystem.

The future

I see the "Foundation" approach to open source as the only viable solution past a given size and momentum around a project. It's certainly preferable to "open but actually owned by one specific party" (which sooner or later leads to forking). Open source now being the default development model in the industry, we'll certainly see even more foundations in the future, not less.

As this approach gets more prevalent, I expect a rise in more tactical foundations that primarily exist as a trade association to push a specific vision for the industry. At OSCON during those two presentations around container-driven foundations, it was actually interesting to notice not the common points, but the differences. The message was subtly different (pods vs. containers), and the companies backing them were subtly different too. I expect differential analysis of Foundations to become a thing.

My hope is that as the "Foundation" model of open source gets ubiquitous, we make sure that we distinguish those which are primarily built to sustain the needs or the strategy of a dozen of large corporations, and those which are primarily built to enable open collaboration around an open source project. The downstream goal should stay a secondary goal, and new foundations need to make sure they first get the upstream side right.

In conclusion, we should certainly welcome more Foundations being created to sustain more successful open source projects in the future. But we also need to pause and read the fine print: assess how open they are, discover who ends up owning their upstream open source project, and determine their primary reason for existing.

29 July, 2015 01:30PM

ArcheOS


When Veterinary Medicine and 3D printing meet each other

TV story with English subtitle

The Brazilian Team of Forensic Anthropology and Forensic Dentistry (Ebrafol) was founded in 2014. Comprising a number of independent professionals, mostly from the field of dentistry, it has always sought shortcuts, drawing on its members' know-how and the Brazilian population's need for relevant applications. Even before Ebrafol started, we (Dr. Paulo Miamoto and I) had already worked on a handful of partnerships, and they contemplated not only the human population but other animals as well.

TV Story with English subtitle
In the second half of 2013 we met the veterinarian Dr. Roberto Fecchio. He was well known for his mastery in saving the lives of many animals and bringing them dignity and quality of life: rebuilt beaks, perfectly fitting prostheses, implanted and well-treated teeth. And I mean animals ranging from a small rodent to a scary feline; whether a guinea pig or a lion, Dr. Fecchio and his staff would be there, caring for and rehabilitating them.
When I met Dr. Fecchio (at center) at Sao Paulo University (USP)

It seems a short period, but a lot has happened since 2013. In the meantime, our skills in computer graphics applied to human and animal health advanced quite a bit. Since 2013, Dr. Fecchio had been encouraging us to develop 3D-printed prosthetic beaks, but at the time we had neither the know-how to actually model them nor the equipment to print them.

The red-footed tortoise (Chelonoidis carbonaria) "Fred" at the veterinary surgery

That changed a few weeks ago, when Dr. Paulo Miamoto purchased a 3D printer to explore for scientific studies and commercial printing. Everything was very new, interesting and unknown.

Healthy tortoise scanned (wireframe) with Fred inside it.

Upon learning about the 3D printer, Dr. Fecchio, always at the forefront, proposed that we participate in a project with him, his team from Santos-SP, and another team from Brasília-DF. The case involved a poor tortoise that had been the victim of a bushfire in the Brazilian plains. The flames injured her shell and she lost a considerable part of its structure. Luckily the animal was rescued and delivered, barely alive, into the hands of Dr. Rodrigo Rabello, who, with the aid of his brother Dr. Matheus Rabello, successfully treated two bouts of pneumonia and other diseases caused by the animal's weakened immune system.

System of matching

Although the tortoise regained stable health, she found herself in big trouble. She had no shell: the remaining bony plates had fallen off, giving her the look of a shelled egg, with only a thin membrane that could be perforated quite easily.

Exploded view of the shell

That’s when Dr. Fecchio stepped in, proposing the partnership, and he was quite content when everybody agreed to participate in this project.

I figured the shell could be reconstructed with a simple methodology. First we would 3D-scan the tortoise that had lost her shell, using a technique called photogrammetry: roughly, we take several pictures of the animal, feed them to a computational algorithm, and it reconstructs the 3D volume. Then we would do the same with a healthy tortoise shell. With both volumes digitized, a Boolean difference between them yields a structure that fits the sick animal.

Printed part

Of course we had plenty of problems along the way. The shell had to be printed in 4 parts because we did not know whether Dr. Paulo's printer would finish the job in time; splitting it into four pieces meant we could hire companies or people offering printing services if anything went wrong. Fortunately the prints succeeded, though not quickly: it took five days of almost uninterrupted printing for the shell to be ready. Then came an unpleasant surprise when cleaning the support material created by the printer: in the joint areas it was very difficult to remove. Thanks to the help of Dr. Paulo Esteves, an experienced dentist, we managed to clean the support material and everything went smoothly.

The team after surgery (I'm in the grayscale photo)

The surgery was covered by the largest Brazilian TV network, Rede Globo. The procedure was a success, and in the end Fred the tortoise received a new shell. It wasn't even necessary to screw it to the bony parts of her body: photogrammetry provided a high-precision scan of the area, allowing a very snug fit of the prosthesis.

Steps of surgery - toucan

Meanwhile, the team handled another case. Zeca the toucan broke his beak when he hit a window. A homologous prosthesis was installed, using a cadaver beak adapted to the fracture, which is a common practice in veterinary medicine. Unfortunately, Zeca's "new" beak could not withstand a very high load and broke. Upon seeing that the toucan had lost his beak, Dr. Fecchio proposed reviving the first project we developed together, back in the pre-Ebrafol period: creating a digitally modeled beak prosthesis. Inspired by the successful surgery, we got back on track, and to our complete joy everything worked out; Zeca is fully adapted to his new beak!

Our team is very happy and honored by all that has happened. Besides the feeling of nobility and accomplishment, we are also proud to have done everything with free and open source software: photogrammetry with PPT-GUI, 3D modeling in Blender, Boolean calculations with Cork, and mesh slicing for printing with Slic3r.
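The free-software pipeline above can be chained from the command line. What follows is a rough, hypothetical sketch, not the exact commands we ran; the tool flags and file names are illustrative assumptions:

```shell
# Hypothetical sketch of the pipeline: photogrammetry meshes in,
# printable G-code out.

# 1. Photogrammetry (PPT-GUI) produces dense point clouds that we
#    mesh and clean up in Blender, exporting two watertight models:
#      tortoise.off      - the injured animal, scanned
#      healthy_shell.off - a healthy shell, scanned

# 2. Boolean difference with Cork: subtract the injured tortoise
#    from the healthy shell to obtain a part that fits the animal.
cork -diff healthy_shell.off tortoise.off prosthesis.off

# 3. Convert to STL in Blender, then slice for the 3D printer:
slic3r prosthesis.stl --output prosthesis.gcode
```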

We have barely tasted success and are already engaged in a new project. We'll post more news soon; see you!


Thanks to Dr. Everton da Rosa, who made possible my trip to Brasília to meet the Rabello doctors and appear on Fantástico, Brazil's most popular Sunday TV show; to Claudio Marques Sampaio (patola) for helping us with 3D printing; to Denise Oltramari, who provided one of the tortoises in her care for photogrammetry; to Daniel Ludwig and Lis Caroline for their help with the photography (photogrammetry); to Giovanna Leite Soares and Dr. Paulo Miamoto, who assisted with the English translations; and to all the news crews that documented this project while respecting its scientific aspects and highlighting the importance of such initiatives for the animals' sake.

29 July, 2015 12:28PM by cogitas3d

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: I'm Walking Away...From The Troubles...

I don't really know what to say as of late. I've been around, but I've been hiding in the background. When you end up having to read appellate court decisions, Inspector General audit reports, and GAO audit reports, and to ponder whether your job will be funded into the new fiscal year, things get weird. This is the closest illustration I can find of what I do at work:

With all the storm and stress that some persons seem to be trying to raise in the *buntu community I feel it appropriate to truly step away formally for a while. I'm still working on the cross-training matter relative to job functions at work. I'm still occasionally working on backports for pumpa and dianara. I am just going to be off the cadence for a while.

I'm wandering. With luck I may return.

29 July, 2015 12:00AM

July 28, 2015

Lubuntu Blog: Lubuntu 15.10 alpha 2

Hi, testing of the second Alpha of Lubuntu 15.10, codename Wily Werewolf, is now taking place. Please do help test. Details of how to test can be found at the Testing wiki. Feedback appreciated.

28 July, 2015 08:46PM by Rafael Laguna

hackergotchi for ArcheOS


The meaning of an "open source exhibition"

As many of you know, we have been working for more than a year on the exhibition "Facce. I molti volti della storia umana". Now that the exhibition has been inaugurated and our work is complete, it is time to share what we produced under open licenses (CC-BY).
It will be a long process, as the materials vary (images, photos, video, 3D models), but we have to start uploading the documents. After a short discussion with +Maurizio Napolitano (Fondazione Bruno Kessler) and +Rodrigo Padula (Grupo Wikimedia Brasileiro de Educação e Pesquisa), both experts in open data, I think the best solution will be to upload the data directly to ATOR, where I can credit all the people who participated in the "production process", from 3D scanning to facial reconstruction to scientific validation.
IMHO the best image to start with is that of the Taung Child, for several reasons: it was our team's first attempt at reconstructing the face of a hominid; it summarizes our concept of Open Research; it was one of the ideas that gave birth to the exhibition "Facce", as Dr. Nicola Carrara conceived it; it is the first project on which Arc-Team, the Anthropological Museum of the University of Padua and Antrocom worked together; and, last but not least, it is a perfect example of what we mean by open data. Indeed, the first reconstruction we produced (version 1.0), which is already part of the related Wikipedia article, was modified after the development (and validation) of a new paleo-art technique, based on the anatomical deformation of a CT scan of a Pan troglodytes. For this reason we now have a new, more accurate reconstruction, which can be considered version 2.0 of the same model.
The open data we intend to share here on ATOR are meant to be open not only in the sense of free access for everyone, but also (more importantly) along the temporal dimension: they should represent just one step in a continuous evolution of the research, in which each reconstruction can be considered simply the latest release of a model (exactly like in software development, with new versions and forks). For this reason we chose the Creative Commons Attribution license, in order to allow derived works and projects.
The two images below illustrate this concept: the first represents the Taung Child reconstruction in its first version (based on an anatomical study of primates), while the second is derived from the anatomical deformation of a Pan troglodytes CT scan.

The first version of the reconstruction of the Taung Child

The second version of the Taung Child
Both models are the result of teamwork, although most of the process (and in particular the most important and delicate phases) was performed by the 3D artist +Cícero Moraes. Below are the credits for this reconstruction (following the order of the workflow):

1. 3D scan of the cast: Luca Bezzi (Arc-Team) and +Moreno Tiziani (Antrocom)
2. 3D modeling (skull restoration, anatomical study, CT deformation): +Cícero Moraes (Arc-Team)
3. scientific validation: Prof. Telmo Pievani (University of Padua, Department of Biology) and Dott. Nicola Carrara (Anthropological Museum of the University of Padua)


The Anthropological Museum of the University of Padua, for providing the cast of the fossil.
The KUPRI, Primate Research Institute Kyoto University, for sharing the CT scans of different primates.
Dr. Claudio Paluani (University of Padua), who, during the lesson "Digital bones" at the Botanical Garden of the University of Padua, had the same idea we had about validating the methodology of anatomical deformation through the modification of two CT scans of living primates. This convinced us to perform the test, after seeing that more people had reached the same conclusion about the validation problem.

28 July, 2015 08:21PM by Luca Bezzi

hackergotchi for Ubuntu developers

Ubuntu developers

José Antonio Rey: Charms as Babies: Introduction

Hello everyone, and welcome to my new blog post series: Charms as Babies. It’s been a long time since I’ve written something about charms.

My purpose with this series is to introduce you to Juju, Juju Charms, and their development and maintenance flow. By the end of the series you should be able to develop your own Juju Charm and know how to take care of it. You may even become a Juju Charmer!

In the next couple days I’m gonna be posting little pieces on how to develop and take care of your own charm, or maybe even adopt a charm. Later today I will be posting an introduction to Juju and Charms. Make sure to keep an eye on my blog and Planet Ubuntu for the upcoming posts!

Oh! We are also having a “What is Juju?” session at UbuConLA, this time given by my fellow charm contributor Sebastián Ferrari. If you are not coming to the conference make sure to tune in to the livestream, at

This post is going to be used as an archive. Each chapter will be linked here. You can see all the chapters posted so far below.

28 July, 2015 07:32PM

José Antonio Rey: Charms as Babies, Chapter 1: The Cloud, Juju and Charms

Welcome to the first chapter of the Charms as Babies series. This chapter is dedicated to explaining what the Cloud, Juju and Juju Charms are.

Have you heard about the Cloud? No, not the ones in the sky. THE Cloud. Let’s insert an XKCD comic for reference.

Interesting, huh? Yes, the Cloud is a huge group of servers. Anyone can rent part of a server for a period of time, usually billed by the hour.

So, let’s take an example. I am Mr. VP from Blogs Company. I host… well, a blog. Instead of renting hosting like I usually would, I decide to host my blog on the Cloud. So I go, let’s say, to Amazon Web Services. I tell Amazon I want an Ubuntu 14.04 server with a certain amount of RAM and a certain amount of disk space. Amazon provides it for a price and bills me per hour. It isn’t renting me a username inside a shared server; instead it launches a virtual machine (VM) with the specs I requested and gives the entire VM to me (including root access).

How is this different from common hosting, you ask? Sure, hosting is a practical solution when you just want things done, maybe with FTP access to upload your site, and that’s it. However, you can use the Cloud for whatever you want (within what’s legally accepted, of course). I may host my blog, but someone on the other side of the world may host complicated data-analysis applications, or maybe a bug tracker. The Cloud also lets you rent a VM by the hour. If I want a server to try installing MyProgram, I launch a VM, install MyProgram, have fun with it, and destroy it right away. That way someone else can rent the capacity after I’m done, and I’m billed only for the hours I used. Everyone ends up happy! There are several other benefits that come with the Cloud, but we will get to them later.

As I mentioned, you can do a lot with the Cloud. However, for most things you need to know how to use a server, install software, compile code and more. Why not simplify things and make them a lot easier? Why not run a couple of commands and have whatever you want in a matter of minutes, without all the hassle? Well, that’s the basic idea behind Juju. With Juju you can execute a simple command such as `juju deploy wordpress` and have a WordPress instance set up for you, in the cloud of your choice (some restrictions apply), within minutes. Isn’t that amazing? All of this work is contained in what we call a Juju Charm. Charms are sets of scripts that automate the orchestration we see in the cloud. When I execute the command above, scripts are run on the machine to install and configure everything, and someone else has written that charm to make your life easier.
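For the curious, a minimal Juju 1.x command-line session for the WordPress example might look roughly like this (a sketch, assuming a cloud environment is already configured; wordpress and mysql are the standard charm-store names):

```shell
# Spin up a state server in the configured cloud (e.g. AWS).
juju bootstrap

# Deploy the blog and a database for it.
juju deploy wordpress
juju deploy mysql

# Relate the two services; the charms' hooks exchange
# the database connection details automatically.
juju add-relation wordpress mysql

# Open the firewall so the blog is reachable from outside.
juju expose wordpress

# Watch the units come up.
juju status
```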

I’m not going to dig deeper into how to use Juju here, but I do want to highlight that even though things seem automated, there is a hero behind that automation who did the hardest part for you, so you can run one command and get what you want.

What? YOU want to become the hero now?! Sure! In the next chapters we’ll see how you can become one of the heroes of the Juju ecosystem. That’s all I have for this chapter, but if you have any questions about Juju and Charms, make sure to leave a comment below, or drop by our IRC channel, #juju. Look, there’s even a link that will take you to the channel in your web browser!

28 July, 2015 07:30PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 28, 2015

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150728 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: CVE’s

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Kernel Prep
  • Trusty – Kernel Prep
  • lts-Utopic – Kernel Prep
  • Vivid – Kernel Prep
    Current opened tracking bugs details:
    For SRUs, the SRU report is a good source of information:

    cycle: 26-Jul through 15-Aug
    24-Jul Last day for kernel commits for this cycle
    26-Jul – 01-Aug Kernel prep week.
    02-Aug – 08-Aug Bug verification & Regression testing.
    09-Aug – 15-Aug Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

28 July, 2015 05:22PM

hackergotchi for Blankon developers

Blankon developers

Mahyuddin Idram Ahmad: Installing Node.js on Debian

A Node.js tutorial

Node.js is a software platform for server-side and networking applications. It is written in JavaScript and runs on Windows, Mac OS X and Linux without code changes. Node.js ships with its own HTTP server library, making it possible to run a web server without a separate web server program such as Apache or Lighttpd.

Install the dependencies:
sudo apt-get install git-core curl libssl-dev build-essential g++ nginx make

Create a new directory for the Node.js install location:
$ mkdir ~/local

Add the full path of the ~/local directory to your PATH:
$ echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc

Reload the .bashrc configuration:
$ source ~/.bashrc

Set the Node.js version to install:
$ export NODE_VERSION='0.10.36'

Download the Node.js source code
Extract the Node.js tarball:
$ tar xvfz node-v$NODE_VERSION.tar.gz

Change into the extracted directory:
$ cd node-v$NODE_VERSION

Set the Node.js install location:
$ ./configure --prefix="$HOME/local"

Build and install Node.js:
$ make -j4 install
$ cd ~/

Remove the Node.js sources:
$ rm node-v$NODE_VERSION.tar.gz
$ rm -rf node-v$NODE_VERSION
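With the install finished, a quick pure-shell sanity check confirms that the new prefix is actually picked up by the shell (node --version is only attempted if the binary is really there):

```shell
# Confirm ~/local/bin is on PATH ahead of the system directories,
# then report which node binary (if any) the shell will use.
export PATH="$HOME/local/bin:$PATH"
case ":$PATH:" in
  *":$HOME/local/bin:"*) echo "PATH ok" ;;
  *)                     echo "PATH missing ~/local/bin" ;;
esac
command -v node >/dev/null 2>&1 && node --version || echo "node not found yet"
```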

28 July, 2015 05:13PM by Mahyuddin Idram Ahmad (

Mahyuddin Idram Ahmad: A Fix for Partition Mount Password Prompts

samba mount

For users of BlankOn Rote, the dialog above is a familiar sight: it appears when mounting a partition or drive. Every time you mount a partition you are asked for your user password. For some users, this password prompt on every mount through Nautilus can get annoying.

To work around it, you can do the following:
  • Create a mounter group:
sudo addgroup mounter
  • Add your user account to the mounter group, for example:
sudo adduser dotovr mounter
  • Create a new policy file:
gksudo gedit /var/lib/polkit-1/localauthority/10-vendor.d/com.manokwari.desktop.pkla

         Fill it with the following script:

[Mounting, checking, etc. of internal drives]


  • Save and exit, then open Nautilus and try mounting a partition
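The body of the .pkla file was elided above; a typical rule granting the mounter group password-less mounting looks like the following (the exact action names vary with the udisks version, so verify them first with `pkaction | grep udisks` on your system):

```ini
[Mounting, checking, etc. of internal drives]
Identity=unix-group:mounter
Action=org.freedesktop.udisks.filesystem-mount-system-internal;org.freedesktop.udisks2.filesystem-mount-system
ResultAny=yes
ResultInactive=yes
ResultActive=yes
```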

Hope this helps

28 July, 2015 05:11PM by Mahyuddin Idram Ahmad

Mahyuddin Idram Ahmad: BlankOding: Linux in the Acehnese Language

This Ramadan fasting month, Aceh took part by holding BlankOding Linux Bahasa Aceh, organized by HMIF Unsyiah in cooperation with BlankOn Linux and supported by TRC, MLC Unsyiah, ADOC Unsyiah and APTIKOM. The event, originally scheduled for 14 July 2013, was postponed to 21 July 2013.

Linux Bahasa Aceh

Hats off to the participants' enthusiasm: we initially targeted only 50 participants, but within one week of registration interest reached 130, so we had to move to a larger building, since the original one could only hold 60.

Most participants were IT students, with some from outside campus; several lecturers also took part and stayed until the end. This was the first BlankOding we held in Aceh, and the Acehnese-language angle was a big draw for participants: the workshop taught how to translate BlankOn Linux into Acehnese.

BlankOding ran from morning until just before breaking the fast, and participants stayed highly enthusiastic from start to finish. Participants voiced many hopes, among them that Acehnese would officially be included in every BlankOn Linux release.

BlankOding was divided into four sessions and opened by the Head of the Informatics Department of FMIPA Unsyiah, Dr. Taufik Abidin, M.Tech. The first session introduced the BlankOn Project, covering BlankOn's history, products and mission. Many participants were surprised to learn that most BlankOn developers do not come from an IT background.

In the second session, participants got serious about installing BlankOn Linux on their own laptops, and many were impressed by the HTML5-based BlankOn installer and its installation speed. The session ran until the Zuhur prayer break.

The third session started after the Zuhur prayer break and moved into training on localizing BlankOn Linux into Acehnese. Participants grew even more enthusiastic as the training went on. Many initially guessed that localization would be hard, since most had never been directly involved in Linux translation work, and when the translation workflow was explained, many were amazed to see that the BlankOn translation portal is web-based, running the open source application Transifex. Participants were guided through creating a BlankOn OpenID account and starting to translate. Hopefully BlankOn Linux will officially ship in Acehnese starting with the BlankOn 9 release.

The last session was just as fun: Razinal guided participants in customizing the Manokwari desktop.

BlankOding went smoothly, although the third session was interrupted for 15 minutes when the BlankOn translation portal went down.

This BlankOding was a first for me personally, as it was my first time presenting.

28 July, 2015 05:10PM by Mahyuddin Idram Ahmad

Mahyuddin Idram Ahmad: A Packaging Guide for Debian and Its Derivatives

A basic Debian packaging guide

This basic packaging guide is aimed at anyone who wants to create or maintain BlankOn packages. Although some concepts in it can be used to build binary packages for personal use, it is meant for those who want to distribute packages to, and for, others. While written for the BlankOn Linux distribution, this guide can also be useful for other Debian-based distributions.

There are several reasons you might want to learn how to package software for BlankOn:

  • Creating and fixing BlankOn packages is one way to contribute to the BlankOn community.
  • It is a good way to learn how BlankOn and the applications you have installed are built.
  • You may want to install a package that is not in the BlankOn repository, or a custom package you have developed yourself.
  • Why is this packaging guide needed?
Because packaging is the right way to create a package that does not yet exist in the repository. Until now we may only have known the manual way of installing software that is not in the repository: ./configure, make and sudo make install. Unfortunately this manual route only works on the machine where we install the software, and cannot be shared with other Debian-based distributions.

Hopefully, by the time you finish reading this guide you will have the tools and
knowledge to do all of that.
This guide assumes that you already know enough to build and install software from source on a Linux distribution. It also uses the command line interface (CLI) throughout, so you should be comfortable using a terminal. You may also want to study the following:

make: GNU Make is an essential tool for building software, used to turn complex compilation tasks into simple ones. It is important to know how to use it, because we will keep the information about the package build process in a Makefile. Documentation is available on the GNU website.

./configure: this script ships with almost all GNU/Linux sources, especially software written in compiled languages such as C/C++/Vala or other programming languages.

Apt/Dpkg: besides installing programs, apt and dpkg have many features that are useful for packaging.

  • apt-cache dump - lists every package in the cache. Most useful when piped through grep, e.g. apt-cache dump | grep foo, to find packages whose name or dependencies include "foo".
  • apt-cache policy - displays package information.
  • apt-cache show - displays more complete package information.
  • apt-cache showsrc - displays information about a source package.
  • apt-cache rdepends - displays the reverse dependencies of a package (the packages that need the queried package).
  • dpkg -S - lists the binary package that owns a given file.
  • dpkg -l - lists installed packages. Similar to apt-cache dump, but only for installed packages.
  • dpkg -c - lists the contents of a binary package. Useful for verifying that files are installed in the right places.
  • dpkg -f - displays the control file of a binary package. Useful for verifying that the dependencies are correct.
  • grep-dctrl - searches for specific information inside packages. A specialised grep for packages (not installed by default)
  • diff: the diff program can be used to compare two files and create patches.
  • patch: the patch program is used to apply a patch.
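A tiny end-to-end example of the diff and patch pair (the file names here are made up for illustration):

```shell
# Create two versions of a file, capture the change as a unified
# diff, then re-apply it to a pristine copy.
workdir=$(mktemp -d)
cd "$workdir"
printf 'hello\nworld\n' > original.txt
printf 'hello\nDebian\n' > modified.txt

# diff exits with status 1 when the files differ, so tolerate it.
diff -u original.txt modified.txt > fix.patch || true

cp original.txt copy.txt
patch copy.txt fix.patch              # apply the patch to the copy
cmp -s copy.txt modified.txt && echo "patch applied cleanly"
```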


  • Source code

Most users of Debian-based distributions such as BlankOn will never deal with the source code used to build the applications on their computers. Source code is compiled into binary packages from source packages, which contain the source itself and the rules for building the binary packages. Packagers upload source packages with their changes to a build system, which compiles binary packages for each computer architecture. A separate system then distributes the binary .deb files and the changed sources to the repository mirrors.

  • Packaging tools
There are many tools made specifically for packaging on Debian-based systems.
The following packages are useful to get started:
build-essential is a metapackage that depends on libc6-dev, gcc, g++, make, and dpkg-dev. The package you may be least familiar with is dpkg-dev; it contains tools such as dpkg-buildpackage and dpkg-source, used to create, unpack and build source and binary packages.
devscripts contains many scripts that make package maintenance easier.
Some frequently used ones are debdiff, dch, debuild, and debsign.
debhelper and dh-make are scripts that automate packaging tasks. dh-make can be used to "debianize" a source tree and provides many template files.
diff and patch are used to create and apply patches, respectively. Both are used heavily in packaging, because it is easier, cleaner and more efficient to represent a small change as a patch than to keep multiple copies of a file.
gnupg is a complete, free replacement for PGP, used to
digitally sign files (including packages).
fakeroot simulates running commands with root privileges, which is very useful for building binary packages as a regular user.
lintian dissects Debian packages, reporting bugs and Policy violations. It contains automated checks for many aspects of Debian Policy as well as common errors.
pbuilder builds a chroot system and builds packages inside that chroot. It is ideal for checking that a package has the right build dependencies and for producing clean packages, ready to be tested and distributed.

Install the tools:
$ sudo apt-get install devscripts build-essential fakeroot debhelper gnupg pbuilder dh-make dpkg-dev ubuntu-dev-tools

Start building a package

• Personal information:
$ nano ~/.bashrc
(add these as the last lines)
export DEBFULLNAME="Dotovr"
export DEBEMAIL=""

Note: adjust to your own information

• Check the personal information you just set:
$ source ~/.bashrc
$ export | grep DEB

• Create a private key:
$ gpg --gen-key
Fill in your personal information:
Real name: Dotovr
E-mail address: 
Passphrase: password

Note: adjust to your own information
Next, start building the package with dpkg-buildpackage. With this approach you must first install the build dependencies manually:
$ sudo apt-get build-dep ed
As an example, download the source code from GNU:
$ wget
$ tar zxvf ed-1.6.tar.gz
$ cd ed-1.6
$ ls
$ dh_make -e your-email-address -f ../ed-1.6.tar.gz
Type of package: single binary, multiple binary, library, kernel
module or cdbs?
[s/m/l/k/b] s
(choose "s")
$ cd debian
$ ls
$ rm *.ex *.EX docs info README.*

Still in the debian directory, run the following to create/update the changelog:
$ dch -e
Fill it in like this:
ed (1.6-1) unstable; urgency=low
  * Initial release (Closes: #nnnn)
 -- Dotovr <>  Tue, 29 May 2012 20:36:22 +0700
Edit the control file as needed:
$ nano control
Complete it like this:
Source: ed
Section: editors
Priority: extra
Maintainer: Dotovr <>
Build-Depends: debhelper (>= 8.0.0), autotools-dev
Standards-Version: 3.9.2

Package: ed
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: classic UNIX line editor
 ed is a line-oriented text editor. It is used to create,
 display, modify and otherwise manipulate text files.

Edit the copyright file as needed:
$ nano copyright
Then fill it in as follows:
Upstream-Name: ed

Files: *
Copyright: 1993, 1994, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
 2006, 2007, 2008, 2009 Backus <>
 1993, Karl Berry <>
 1994, 2011 Theo Deraadt <>
 2006, 2007 Kaveh R. Ghazi <>
 2010, 2011 Mike Haertel <>
 2011 Francois Pinard <>
 1993, 1994 Rodney Ruddock <>
License: GPL-3.0+

Files: debian/*
Copyright: 2012 Dotovr <>
License: GPL-3.0+

License: GPL-3.0+
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 3 of the License, or
 (at your option) any later version.
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 GNU General Public License for more details.
 You should have received a copy of the GNU General Public License
 along with this program. If not, see <http://www.gnu.org/licenses/>.
Go back to the previous directory:
$ cd ..

Run the command to build the package:
$ dpkg-buildpackage -rfakeroot
If asked for a Passphrase, enter the one you created earlier and wait until the process finishes.
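Once the build finishes, the .deb lands in the parent directory, and the dpkg tools introduced earlier can inspect it. A quick sketch (the exact file name depends on your architecture):

```shell
$ cd ..
$ ls *.deb                  # e.g. ed_1.6-1_i386.deb; the name varies by architecture
$ dpkg -c ed_1.6-1_*.deb    # list the files the package will install
$ dpkg -f ed_1.6-1_*.deb    # show the binary package's control file
$ lintian ed_1.6-1_*.deb    # check the package for policy violations
```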

28 July, 2015 04:35PM by Mahyuddin Idram Ahmad

hackergotchi for Ubuntu developers

Ubuntu developers

Unity Team: What’s new in Mir 0.14


We have recently released Mir 0.14 to Wily (0.14.0+15.10.20150723.1-0ubuntu1). It’ll soon be released to Vivid+ as well. We have lots of goodies in 0.14:

It was a comprehensive release in which we broke every single ABI under the sun, including the client ABI. That required us to release many dependent projects as well.

Here is a list of 0.14 content highlights:

  • Preparation work for new buffer semantics
  • MirEvent-2.0 related changes and unifications
  • New SurfaceInputDispatcher to replace the android InputDispatcher
  • Preparation work for mir-on-X11: splitting of mesa platform in common
    and KMS parts
  • g++-5.0 compilation
  • Thread sanitizer issues
  • Numerous bugs addressed

For a detailed list, please see the changelog.

We will soon start the release process for Mir 0.15. Expect it to have:

  • Application-not-responding (ANR) handling
  • ANR optimizations
  • Raw input events
  • Experimental mir support on X11 (Mir server runs as an X client in a window)
  • Latency reduction optimizations using “predictive bypass”
  • Client API for specifying input region shape
  • Support for relative pointer motion events
  • More window management support
  • More new buffer semantics
  • libinput platform

Do not hesitate to visit us on freenode #ubuntu-mir IRC channel.

28 July, 2015 04:09PM