October 31, 2014

Ubuntu developers

Chris J Arges: getting kernel crashdumps for hung machines

Debugging hung machines can be a bit tricky. Here I'll document methods to trigger a crashdump when these hangs occur.

What exactly does it mean when a machine 'hangs' or 'freezes up'? More information can be found in the kernel documentation [1], but overall there are a few types of hang. A "soft lockup" is when the kernel loops in kernel mode for a duration without giving other tasks a chance to run. A "hard lockup" is when the kernel loops in kernel mode for a duration without letting other interrupts run. In addition, a "hung task" is a userspace task that has been blocked for longer than a set duration. Thankfully the kernel has options to panic on these conditions and thus create a proper crashdump.

To set up crashdump on an Ubuntu machine, first install the crashdump package; more info can be found here [2].
sudo apt-get install linux-crashdump
Select NO unless you really would like to use kexec for your reboots.

Next we need to enable it, since it is disabled by default.
sudo sed -i 's/USE_KDUMP=0/USE_KDUMP=1/' /etc/default/kdump-tools

Reboot to ensure the kernel cmdline options are properly set up:
sudo reboot

After reboot run the following:
sudo kdump-config show

If this command shows 'ready to dump', then we can test a crash to ensure kdump has enough memory and will dump properly. This command will crash your computer, so hopefully you are doing this on a test machine.
echo c | sudo tee /proc/sysrq-trigger

The machine will reboot and you'll see a crash in /var/crash.

All of this is already documented in [2]; what's new is enabling panics for the hang and lockup conditions described above. We'll enable several cases at once.

Edit /etc/default/grub and add the following parameters to the GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="nmi_watchdog=panic hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic"

Alternatively, you could enable these at runtime via /proc/sys/kernel or sysctl. For more information about these parameters there is documentation here [3].
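As a sketch, the persistent sysctl equivalents might look like the fragment below. The drop-in file name is hypothetical, and you should verify each parameter actually exists under /proc/sys/kernel/ on your kernel version before relying on it:

```shell
# /etc/sysctl.d/60-panic-on-hang.conf  (hypothetical file name)
# Runtime equivalents of the boot parameters above; verify each knob
# exists under /proc/sys/kernel/ on your kernel before relying on it.
kernel.softlockup_panic = 1     # panic on soft lockup
kernel.hung_task_panic = 1      # panic when a task has been hung too long
kernel.unknown_nmi_panic = 1    # panic on unknown NMIs
```

Note that nmi_watchdog=panic itself has no direct one-to-one sysctl, so that one is best left on the kernel command line.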

If you've used the command line change, update grub and then reboot.
sudo update-grub && sudo reboot

Now your machine should crash when it locks up, and you'll get a nice crashdump to analyze. If you want to test such a setup, I wrote a module [4] that induces a hang so you can check that this works properly.

Happy hacking.

  1. https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt
  2. https://wiki.ubuntu.com/Kernel/CrashdumpRecipe
  3. https://www.kernel.org/doc/Documentation/kernel-parameters.txt
  4. https://github.com/arges/hanger

31 October, 2014 08:53PM by arges (noreply@blogger.com)

Ronnie Tucker: Full Circle Magazine #90 has arrived!

Full Circle
Issue #90
Full Circle, the independent magazine for the Ubuntu Linux community, is proud to announce the release of our ninetieth issue.

This month:
* Command & Conquer
* How-To : OpenConnect to Cisco, LibreOffice, and Broadcasting With WebcamStudio
* Graphics : Inkscape.
* Linux Labs: Compiling a Kernel Pt.3
* Review: MEGAsync
* Ubuntu Games: Prison Architect, and X-Plane Plugins
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!
We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.
AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.

31 October, 2014 08:10PM

Canonical Design Team: Washington Devices Sprint

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head-first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson's Corner, where 200 or so developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from every corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, with their different mindsets, collaborate.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main Ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, where I tried to use the time in-between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!”

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception of the Design Team’s efforts changed. My goal is to improve this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games in one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe, but it’s still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the platform and apps wasn’t simply done overnight. However, we are now at a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers, is driven by them, and has evolved because of this. As a result, over the last few years a design culture has begun to complement that. Now they expect steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a sense of what the other parties expect from us. It’s like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see what you actually need to do to make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team; it’s an engagement we all need to be involved in.”

In all, it was a very productive week, as everyone worked hard to push for the first release of the BQ phone, together with some positive feedback and shout-outs for the Design Team :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sceneries.

(The Washington Monument)

We saw most of the iconic government buildings, like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in 'Independence Day', and Abraham Lincoln to suddenly get up off his chair like in 'Night at the Museum'. Unfortunately, none of that happened.

(The White House)

D.C. isn’t as buzzing as London, but it definitely has a lot of character: it embodies an array of thriving ethnic pockets representing African, Asian and Latin American cultures, plus a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

...And all the jazz.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10-15 minutes from the hotel. The park lies in northern Fairfax County along the banks of the Potomac River and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

31 October, 2014 02:21PM

Oli Warner: Bulk renaming files in Ubuntu; the briefest of introductions to the rename command

I've seen more than a few Ask Ubuntu users struggling with how to batch rename their files. They get lost in Bash and find -exec loops and generally make a big mess of things before asking for help. But there is an easy method in Ubuntu that relatively few users know about: the rename command.

I was a couple of years into Ubuntu before I discovered the rename command, but now I wonder how I ever got along without it. I seem to use it at least once a week for myself, and as often again when helping other people. Let's just spend a second or two marvelling at the outward simplicity of the syntax and then we'll crack on.

rename [-v] [-n] [-f] perlexpr [filenames]

Before we get too crazy, let's talk about those first two flags. -v will tell you what it's doing, and -n will exit before it does anything. If you are in any doubt about your syntax, sling -vn on the end; it'll tell you what it would have done if you hadn't had the -n there. -f will give rename permission to overwrite files. Be careful.

Replacing (and adding and removing) with s/.../.../

This is probably the most common use for rename. You've got a bunch of files with the wrong junk in their filenames, or you want to change the formatting, or add a prefix, or replace certain characters... rename lets us do all of this through simple regular expressions.

I'm using -vn here so the changes don't actually persist. Let's start by creating a few files:

$ touch dog{1..3}.dog
$ ls
dog1.dog  dog2.dog  dog3.dog

Replacing the first dog with cat:

$ rename 's/dog/cat/' * -vn
dog1.dog renamed as cat1.dog
dog2.dog renamed as cat2.dog
dog3.dog renamed as cat3.dog

Replacing the last dog with cat (note $ means "end of line" in this context, ^ means start):

$ rename 's/dog$/cat/' * -vn
dog1.dog renamed as dog1.cat
dog2.dog renamed as dog2.cat
dog3.dog renamed as dog3.cat

Replacing all instances of dog with the /g (global) flag:

$ rename 's/dog/cat/g' * -vn
dog1.dog renamed as cat1.cat
dog2.dog renamed as cat2.cat
dog3.dog renamed as cat3.cat

Removing a string is as simple as replacing it with nothing. Let's nuke the first dog:

$ rename 's/dog//' * -vn
dog1.dog renamed as 1.dog
dog2.dog renamed as 2.dog
dog3.dog renamed as 3.dog

Adding strings is a case of finding your insertion point and replacing it with your string. Here's how to add "PONIES-" to the start:

$ rename 's/^/PONIES-/' * -vn
dog1.dog renamed as PONIES-dog1.dog
dog2.dog renamed as PONIES-dog2.dog
dog3.dog renamed as PONIES-dog3.dog

This is all fairly simple, and your ability to use it in the wild will largely depend on your ability to manipulate regular expressions. I've been using them for well over a decade in a professional setting, so this might be something I take for granted, but they aren't hard once you get past the syntax. Here's a fairly simple introduction to regular expressions if you would like to learn more.

Zero-padding numbers so they sort correctly

ls can be pretty shoddy at sorting numbers correctly. Here's a simple example:

$ touch {1..11}
$ ls
1  10  11  2  3  4  5  6  7  8  9

It sorts one character position at a time, left to right. This isn't too bad when we're only talking about tens, but it scales up and you end up with thousands coming before 9s. A good way to fix this (ls isn't the only application with this issue) is to zero-pad the numbers so they're all the same length and their value digits line up. rename makes this super-easy because we can dip into Perl and use sprintf to reformat the number:

$ rename 's/\d+/sprintf("%02d", $&)/e' *
$ ls
01  02  03  04  05  06  07  08  09  10  11

The %02d there means we're printing at least 2 characters and padding it with zeroes if we need to. If you're dealing with thousands, increase that to 4, 5 or 6.

$ rename 's/\d+/sprintf("%05d", $&)/e' *
$ ls
00001  00002  00003  00004  00005  00006  00007  00008  00009  00010  00011

Similarly, you can parse a number and remove the zero padding with something like this:

$ rename 's/\d+/sprintf("%d", $&)/e' *
$ ls
1  10  11  2  3  4  5  6  7  8  9
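As an aside (nothing to do with rename itself, but worth knowing): if you only need the listing fixed rather than the filenames themselves, GNU ls can do a natural sort on its own with its -v flag:

```shell
$ touch {1..11}
$ ls -v
1  2  3  4  5  6  7  8  9  10  11
```

Zero-padding is still the more portable fix, though, since ls isn't the only program that sorts this way.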

Attaching a counter into the filename

Say we have three files and we want to add a counter onto the filename:

$ touch {a..c}
$ rename 's/$/our $i; sprintf("-%02d", 1+$i++)/e' * -vn
a renamed as a-01
b renamed as b-02
c renamed as c-03

It's the our $i that lets us persist a variable's state over multiple passes.

Incrementing an existing number in a file

Given three files with consecutive numbers, increment them. It's a simple enough expression but we have to be mindful that sometimes there are going to be conflicting filenames. Here's an example that moves all the files to temporary filenames and then strips them back to what they should be.

$ touch file{1..3}.ext
$ rename 's/\d+/sprintf("%d-tmp", $& + 1)/e' * -v
file1.ext renamed as file2-tmp.ext
file2.ext renamed as file3-tmp.ext
file3.ext renamed as file4-tmp.ext

$ rename 's/(\d+)-tmp/$1/' * -v
file2-tmp.ext renamed as file2.ext
file3-tmp.ext renamed as file3.ext
file4-tmp.ext renamed as file4.ext

Changing a filename's case

Until now we've been using substitutions and expressions but there are other forms of Perl expression. In this case we can remap all lowercase characters to uppercase with a simple translation:

$ touch lowercase UPPERCASE MixedCase
$ rename 'y/a-z/A-Z/' * -vn
lowercase renamed as LOWERCASE
MixedCase renamed as MIXEDCASE

We could do the same with a substitution-expression like: s/[a-z]/uc($&)/ge

Fixing extension based on actual content or MIME

What if you are handed a bunch of files without extensions? Well, you could loop through and use things like the file command, or you could just use Perl's File::MimeInfo::Magic module to sniff each file and hand you an extension to tack on.

rename 's/.*/use File::MimeInfo qw(mimetype extensions); $&.".".extensions(mimetype($&))/e' *

This one is a bit of a monster but further highlights that anything you can do in Perl can be done with rename. You could read ID3 tags from music or process internal data to get filename fragments.

Why doesn't my rename take a perlexpr?!

Ubuntu's default rename is actually a link to a Perl script called prename. Some distributions instead ship the util-linux version of rename (called rename.ul in Ubuntu). You can work out which version you have using the following command:

$ dpkg -S $(readlink -f $(which rename))
perl: /usr/bin/prename

So unless you want to shunt things around, you'll have to install and call prename instead of rename.

31 October, 2014 01:41PM

Valorie Zimmerman: Season of KDE - Let's go!

We're now in the countdown. The deadline for applications is midnight 31 October UT. So give the ideas page one last look:


We even got one last task just now, just for you devops folks. Please talk to your prospective mentor and get their OK before signing up on https://season.kde.org/

If you have already signed up and your mentor has signed off on your plans and timeline, get to work!

31 October, 2014 08:35AM by Valorie Zimmerman (noreply@blogger.com)

HandyLinux developers

i cried

today, no debian, no computing or other trivialities... diary of an apprentice debian dev... today, it's just a diary.

morality, religion and power taught us that violence "is wrong"... so we feel guilty, we don't want to be violent... and then one day the gendarmes show up at your door because you are being expropriated, because some guys decided your land would bring in more money if they were the ones running it. and they call that "the public interest"... what a joke. the interest of a few, above all, at the expense of everyone else.
you're a pacifist, but you watch a lifetime's work reduced to nothing so that 3 white-collar types can line their pockets a little more... so it irritates you a bit...

best not to irritate the people too much. sure, you can manipulate them and play games during election campaigns, play the pretty boy in TV interviews, make them believe they absolutely need an iphone...

but one day the people get fed up with never being truly represented. so the people stop voting (that's been happening for a few years already), then the people stop believing, and then finally the people say NO. and since the dawn of time, when the people have said no, the mighty have fallen and all those who bowed their heads, the sheep, have been left looking like fools.

society moves, advances, evolves, and not in the direction that suits the powerful. the peoples of the world are on the move.
act? but how? voting doesn't work, our representatives no longer represent us. peaceful protest, they couldn't care less, they just wait for it to blow over. forceful protest? you fall foul of the law and become a terrorist. trust the people who built the system to repair or reform the system? the idea is absurd, even if it's what we've been doing for years.

i am a father of 4 children. my wife and i handle their education and schooling at home. i decided to tell them the truth: no, nuclear power is not clean. yes, we should buy local food, and that sort of thing. but above all, i live in a world where i have to explain to my children that the people who are supposed to protect them actually couldn't care less... they could come tomorrow, because someone gave them an order. there is no good or evil, no humanity, just an order given. and that is the world they are going to live in.

so suddenly the expropriated farmer is me. the beaten protester is me. the harassed homeless man is me, the algerian stopped and checked ten times a day is me. and the others? well, they are my enemies. they are people who could, without any scruple, strike my wife, tear-gas my kids and put me in prison, just because i refuse a power established by others. just because the world on offer is nothing but an endless race for growth, in total denial of all natural logic.

i hear the man who tells me the construction site got him a job, and that he can feed his family. but at what price? and what's the point of feeding your family today if the resources are exhausted tomorrow? i can understand my parents, they weren't informed the way we are. but will my children understand that i stayed here, at home, safe, teaching them, while the planet cries out, while the peoples scream?

is the best way to pass on values to state them, to explain them; or to embody them...

and then Rémi died.

i cried. i didn't understand why right away. first out of pure sadness, out of simple compassion. but also because the day before the tragedy, my wife and i were wondering which of us would go to sivens. when we talked about it, neither of us could have imagined for a moment dying at an environmental protest. our discussion was mostly about the logistics of keeping our 4 children looked after at home. in short, we didn't go, and today i regret it. i feel a kind of shame, as if by staying home, even though i fulfil my role as head of the family, even though nobody blames me for anything, i was betraying what i stand for.

oh, so the gentleman "stands for something"? yes, my duty is to make sure my family feels loved and safe, because that is how each of its members can flourish and make their own choices. but if the world i am steering my children toward is this unjust and violent thing i see outside... what is my duty? we are far from grand reflections on freedom of expression and morality. we are in the simple reality of a world going under, into which i will soon plunge my children.

i am a pacifist. certainly. but we end up with 1% of the world's population running the world, 90% who follow along and even go so far as to condone the abuses of power (they'd like a piece of it too...), and the remaining 9% desperately trying to prevent the disaster "peacefully". so what room for action do i have left?

the day shame becomes stronger than the desire to protect my children by staying close to them, i will be one of them. forgive me for not being there already.

31 October, 2014 01:38AM by arpinux

Ubuntu developers

David Tomaschik: Towards a Better Password Manager

The consensus in the security community is that passwords suck, but they're here to stay, at least for a while longer. Given breaches like Adobe, ..., it's becoming more and more evident that the biggest threat is not weak passwords, but password reuse. Of course, the solution to password reuse is to use a different password for every site that requires you to log in. The problem is that your average user has dozens of online accounts, and they probably can't remember dozens of distinct passwords. So we build tools to help people remember passwords, mostly password managers, but do we build them well?

I don't think so. But before I look at the password managers that are out there, it's important to define the criteria that a good password manager would meet.

  1. Use well-understood encryption to protect the data. A good password manager should use cryptographic constructions that are well understood and reviewed. Ideally, it would build upon existing cryptographic libraries or full cryptosystems. This includes the KDF (Key-derivation function) as well as encryption of the data itself. Oh, and all of the data should be encrypted, not just the passwords.

  2. The source should be auditable. No binaries, no compressed/minified Javascript. If built in a compiled language, it should have source available with verifiable builds. If built in an interpreted language, the source should be unobfuscated and readable. Not everyone will audit their password manager, but it should be possible.

  3. The file format should be open. The data should be stored in an open, documented format, allowing for interoperability. Your passwords should not be tied to a particular manager, whether that's because the developer of that manager abandoned it, or because it's not supported on a particular platform, or because you like a blue background instead of grey.

  4. It should integrate with the browser. Yes, there are some concerns about exposing the password manager within the browser, but it's more important that this be highly usable. That includes making it easy to generate passwords, easy to fill passwords, and most importantly: harder to phish. In-browser password managers can compare the origin of the page you're on to the data stored, so users are less likely to enter their password in the wrong page. With a separate password manager, users generally copy/paste their passwords into a login page, which relies on the user to ensure they're putting their password into the right site.

  5. Sync, if offered, should be independent of encryption. Your encryption passphrase should not be used for sync. In fact, your encryption passphrase should never be sent to the provider: not at signup, not at login, not ever. Sync, unfortunately, only sounds simple: drop a file in Dropbox or Google Drive, right? But what happens if the file gets updated while the password manager is open? How do changes get synced if two clients are open?

These are just the five most important features as I see them, and not a comprehensive design document for password managers. I've yet to find a manager that meets all of these criteria, but I'm hoping we're moving in this direction.

31 October, 2014 01:16AM

October 30, 2014

Ubuntu Podcast from the UK LoCo: S07E31 – The One with the Dozen Lasagnas

Join Laura Cowen, Tony Whitmore, Mark Johnson and Alan Pope in Studio L for season seven, episode thirty-one of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be talking about the fanfare surrounding the latest Ubuntu release and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

30 October, 2014 08:00PM

Ubuntu App Developer Blog: It’s time for a Scope development competition!

With all of the new documentation coming to support the development of Unity Scopes, it’s time for us to have another development showdown! Contestants will have five (5) weeks to develop a project, from scratch, and submit it to the Ubuntu Store. But this time all of the entries must be Scopes.

Be sure to update to the latest SDK packages to ensure that you have the correct template and tools. You should also create a new Click chroot to get the latest build and runtime packages.


We’ve got some great prizes lined up for the winners of this competition.

  • 1st place will win a new Dell XPS 13 Laptop, Developer Edition (preloaded with Ubuntu)
  • Runners up will receive one of:
    • Logitech UE Boom Bluetooth speakers
    • Nexus 7 running Ubuntu
    • An Ubuntu bundle, featuring:
      • Ubuntu messenger bag
      • Ubuntu Touch Infographic T-shirt
      • Ubuntu Neoprene Laptop Sleeve
    • An Ubuntu bundle, featuring:
      • Ubuntu backpack
      • Ubuntu Circle of Friends Dot Design T-shirt
      • Ubuntu Neoprene Laptop Sleeve


Scope entries will be reviewed by a panel of judges from a variety of backgrounds and specialties, all of whom will evaluate the scope based on the following criteria:

  • General Interest – Scopes that are of more interest to general phone users will be scored higher. We recommend identifying what kind of content phone users want to have fast, easy access to and then finding an online source where you can query for it
  • Creativity – Scopes are a unique way of bringing content and information to a user, and we’ve only scratched the surface of what they can do. Thinking outside the box and providing something new and exciting will lead to a higher score for your Scope
  • Features – There’s more to scopes than basic searching, take advantage of the departments, categories and settings APIs to enhance the functionality of your Scope
  • Design – Scopes offer a variety of ways to customize the way content is displayed, from different layouts to visual styling. Take full advantage of what’s possible to provide a beautiful presentation of your results.
  • Awareness / Promotion – we will award extra points to those of you who blog, tweet, facebook, Google+, reddit, and otherwise share updates and information about your scope as it progresses.

The judges for this contest are:

  • Chris Wayne, developer behind a number of current pre-installed Scopes
  • Joey-Elijah Sneddon, author and editor of OMG! Ubuntu!
  • Victor Thompson, Ubuntu Core Apps developer
  • Jouni Helminen, designer at Canonical
  • Alan Pope, from the Ubuntu Community Team at Canonical

Learn how to write Ubuntu Scopes

To get things started we’ve recently introduced a new Unity Scope project template into the Ubuntu SDK; you can use it to get a working foundation for your code right away. Then you can follow along with our new SoundCloud scope tutorial to learn how to tailor your code to a remote data source and give your scope a unique look and feel that highlights both the content and the source. To help you along the way, we’ll be scheduling a series of online workshops covering how to use the Ubuntu SDK and the Scope APIs. In the last weeks of the contest we will also be hosting a hackathon on our IRC channel (#ubuntu-app-devel on Freenode) to answer any last questions and help you get your scope ready for submission. If you cannot join those, you can still find everything you need to know in our scope developer documentation.

How to participate

If you are not a programmer and want to share some ideas for cool scopes, be sure to add and vote for scopes on our reddit page. The contest is free to enter and open to everyone. The five-week period starts on Thursday 30th October and runs until Wednesday 3rd December 2014! Enter the Ubuntu Scope Showdown >

30 October, 2014 06:36PM

Ronnie Tucker: Packt offers library subscription with additional $150 worth of free content

As you may know, Packt Publishing supports Full Circle Magazine with review copies of books, so it’s only fair that we help them by bringing this offer to your attention:


PacktLib provides full online access to over 2000 books and videos to give users the knowledge they need, when they need it. From innovative new solutions and effective learning services to cutting edge guides on emerging technologies, Packt’s extensive library has got it covered. For a limited time only, Packt is offering 5 free eBook or Video downloads in the first month of a new annual subscription – up to $150 worth of extra content. That’s in addition to one free download a month for the rest of the year.

This special PacktLib Plus offer marks the release of the new and improved reading and watching platform, packed with new features.

The deal expires on 4 November.

30 October, 2014 06:33PM

Alessio Treglia: Handling identities in distributed Linux cloud instances

I have many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional, such as TeutoStack or Enter.

Probably many of you face the same issue: having a consistent UNIX identity across multiple instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.

So, how do we solve this issue while staying secure? The trick is to use the new NSS module for SecurePass.

While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding “extended attributes”, i.e. arbitrary information for each user profile.

We will use SecurePass to authenticate users and store Unix information with this new capability. In detail, we will:

  • Use PAM to authenticate the user via RADIUS
  • Use the new NSS module for SecurePass to have a consistent UID/GID/….

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) is capable of storing arbitrary data for each profile. This is called “Extended Attributes” (or xattrs) and -as you can imagine- is organized as key/value pair.

You will need the SecurePass tools to be able to modify users’ extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it, just:

# apt-get install securepass-tools

ERRATA CORRIGE: securepass-tools hasn’t been uploaded to Debian yet, Alessio is working hard to make the package available in time for Jessie though.

For other distributions or previous releases, a Python package is available via pip. Make sure that you have pycurl installed and then:

# pip install securepass-tools

While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it can also be used by the NSS module. The configuration file looks like:

app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/

Where app_id and app_secret are valid API keys to access the SecurePass beta.

Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs user@domain.net set posixuid 1000

While posixuid is the bare minimum attribute to have a Unix login, the following attributes are valid:

  • posixuid → UID of the user
  • posixgid → GID of the user
  • posixhomedir → Home directory
  • posixshell → Desired shell
  • posixgecos → Gecos (defaults to username)
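Putting the list above together, here is a sketch of a helper that emits the full set of sp-user-xattrs commands for one user. The helper function and the example values are illustrative, not part of the SecurePass tools; it only prints the commands (pipe the output to sh to actually apply them):

```shell
# Hypothetical dry-run helper: print every sp-user-xattrs command needed
# to give one user a complete Unix identity.
user="user@domain.net"
set_xattr() { echo "sp-user-xattrs $user set $1 $2"; }

set_xattr posixuid 1000
set_xattr posixgid 100
set_xattr posixhomedir /home/user
set_xattr posixshell /bin/bash
set_xattr posixgecos user
```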

Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

# apt-get install libnss-securepass

Previous releases of Debian and Ubuntu can still run the NSS module, as can CentOS and RHEL. Download the sources from:



make install (Debian/Ubuntu only)

For CentOS/RHEL/Fedora you will need to copy files in the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so

The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"

These provide defaults for profiles where attributes other than posixuid are not set. Next we need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding “sp” to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
 passwd:     files sp
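If you prefer a non-interactive edit, the change can be scripted. A sketch against a scratch copy of the file (the real file is /etc/nsswitch.conf and editing it needs root):

```shell
# Work on a scratch copy standing in for /etc/nsswitch.conf.
printf 'passwd:     files\ngroup:      files\n' > /tmp/nsswitch.conf
# Append " sp" to the passwd line only.
sed -i 's/^passwd:.*/& sp/' /tmp/nsswitch.conf
grep '^passwd' /tmp/nsswitch.conf   # -> passwd:     files sp
```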

Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
 user:x:1000:100:My User:/home/user:/bin/bash
$ id user
 uid=1000(user)  gid=100(users) groups=100(users)

Using this setup by itself wouldn’t allow users to log in, because the password is missing. We will use SecurePass’ authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth

If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL

Be aware that this has not been tested with SELinux enabled (set it to off or permissive).

On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius

Note: at the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat’s Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for RADIUS authentication. If the server is behind NAT, specify the public IP address that is translated to it. After completion we get a small recap of the newly created device. For the sake of example, we use “secret” as our secret password.

Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3
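A sketch of writing those two lines from a script. A scratch path stands in for /etc/pam_radius.conf here, since writing the real file requires root:

```shell
# Scratch path standing in for /etc/pam_radius.conf.
cat > /tmp/pam_radius.conf <<'EOF'
radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3
EOF
chmod 0600 /tmp/pam_radius.conf   # the file holds the shared secret
```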

Of course the “secret” is the same one we set up on the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage the authentication.

In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open the /etc/pam.d/common-auth configuration and make sure that pam_radius_auth.so is in the list.

auth required   pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite  pam_succeed_if.so uid >= 500 quiet
auth required   pam_deny.so


Handling many distributed Linux instances poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools can become very handy.

To freely subscribe to the SecurePass beta, join SecurePass at: http://www.secure-pass.net/open
And then send an e-mail to info@garl.ch requesting beta access.

30 October, 2014 12:55PM

October 29, 2014

Rhonda D'Vine: Feminist Year

If someone had told me that I would visit three feminist events this year, I would have slowly nodded at them and responded with "yeah, sure..." not believing it. But sometimes things take their own turns.

It all started with the Debian Women Mini-Debconf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. So we settled on "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part, though, happened later during the lightning talks. Someone on IRC asked why there were male speakers in the lightning talks, which were explicitly open to them. This also felt very, very nice, to be honest, that my talk wasn't questioned. Those are amongst the reasons why I wrote My place is here, my home is Debconf.

The second event I went to was FemCamp Wien. It was the first barcamp I had attended, so I didn't know what to expect organization-wise. Topic-wise it was about queer feminism. And it was the first event I went to which had a policy. Granted, there was an extremely poorly worded part in it, which naturally ended up in a shitstorm on Twitter (which people on both sides managed very badly, which disappointed me). Denying that there is sexism against cis males is just a bad idea, but the point behind it was that this wasn't the topic of the event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and this barcamp wanted to make clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I never felt more welcomed and included at any event, including Debian events (sorry to say that so frankly). Making it clear through the policy that everyone is in the same boat with addressing each other respectfully totally managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of sessions that were held was simply great. This was the event where I came up with the pattern of judging the quality of an event by the sessions I'm unable to attend. The thing that hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/

Last but not least I attended AdaCamp Berlin. This was a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women. So I was totally hyped when I received the mail saying I was accepted. It was another event with a policy, and at first reading it looked strange. But given that some people are allergic to ingredients in scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all, it was a general policy for all AdaCamps, not just for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I literally hadn't talked to in years. I enjoyed the environment, and the sessions that were going on. And quite similar to FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the impostor syndrome, which is extremely common for women in IT. And what can I say, I found myself in one of the slides, given that I had tweeted just the day before that I doubted I belonged there. Frankly speaking, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it? But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for the great definition of feminism that she brought up during one session: roughly, that feminism for her isn't about gender but about the equality of all people, regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.

All in all, I totally enjoyed these events, and hope that I'll be able to attend more next year. From what I gathered, all three of them are thinking of doing it again; FemCamp Vienna already announced next year's date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind, there will always be critics and haters out there, but given that they wouldn't think of attending such an event in the first place anyway, don't get wound up about it. They just try to talk you down.

P.S.: Ah, almost forgot one thing to mention, which also helps a lot to reduce barriers for people attending: the catering during the day and for lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-Debconf) removed the need for people to ask whether there could be food without meat and dairy products, by offering mostly vegan food in the first place, without even having to query the participants. Often enough people would otherwise leave the event or bring their own food instead of asking, so this is an extremely welcoming move, too. Way to go!

/personal | permanent link | Comments: 0 | Flattr this

29 October, 2014 07:47PM

Jonathan Riddell: Kubuntu Vivid in Bright Blue

KDE Project:

Kubuntu Vivid is the development name for what will be released in April next year as Kubuntu 15.04.

The exciting news is that, following some discussion and some wavering, we will be switching to Plasma 5 by default. It has shown itself to be a solid and reliable platform and it's time to show it off to the world.

There are some bits which are missing from Plasma 5 and we hope to fill those in over the next six months. Click on our Todo board above if you want to see what's in store and if you want to help out!

The other change that affects workflow is that we're now using Debian git to store our packaging in a kubuntu branch, so hopefully it'll be easier to share updates.

29 October, 2014 07:11PM

Daniel Holbach: Washington sprint

In the Community Q&A with Alan and Michael yesterday, I already talked a bit about the sprint in Washington, but I thought I'd write up a bit more about it here.

First of all: it was great to see a lot of old friends and new faces at the sprint. Especially with the two events (14.10 release and upcoming phone release) coming together, it was good to lock people up in various rooms and let them figure it out when nobody could run away easily. For me it was a great time to chat with lots of people and figure out if we’re still on track and if our old assumptions still made sense.  :-)

We were all locked up in a room as well…

What was pretty fantastic was the general vibe there. Everyone was crazy busy, but everybody seemed happy to see that their work of the last months and years is slowly coming together. There are still bugs to be fixed but we are close to getting the first Ubuntu phone ever out the door. Who would have thought that a couple of years ago?

It was great to catch up with people about our App Development story. There were a number of things we looked at during the sprint:

  • Up until now we had a Virtualbox image with Ubuntu and the SDK installed for people at training (or App Dev School) events who didn’t have Ubuntu installed. This was a clunky solution, as my beta testing at xda:devcon confirmed. I sat down with Michael Vogt, who encouraged me to look into providing something more akin to an “official ISO” and showed me the ropes in terms of creating seeds and how livecd-rootfs is used.
  • I had a number of conversations with XiaoGuo Liu, who works for Canonical as well, and has been testing our developer site and our tools for the last few months. He also wrote lots and lots of great articles about Ubuntu development in Chinese. We talked about providing our developer site in Chinese as well, how we could integrate code snippets more easily and many other things.
  • I had many chats at the breakfast buffet with Zoltan and Zsombor of the SDK team (somehow we were always there at the same time). We talked about making fat packages easier to generate, my experiences with kits, and many other things.
  • It was also great to catch up with David Callé who is working on scopes documentation. He’s just great!

What I also liked a lot was being able to debug issues with the phone on the spot. I switched to the proposed channel, set it to read-write, installed debug symbols, and voilà: grabbing the developer was never easier. My personal recommendation: make sure the problem happens around 12:00, stand in the hallway with your laptop attached to the phone, and wait for the developer in charge to grab lunch. This way I could find out more about a couple of issues which are being fixed now.

It was also great to meet the non-Canonical folks at the sprint who worked on the Core Apps like crazy.

What I liked as well was our Berlin meet-up: we basically invited Berliners, ex-Berliners and honorary Berliners and went to a Mexican place. I wish I met those guys more often.

I also got my Ubuntu Pioneers T-Shirt. Thanks a lot! I’ll make sure to post a selfie (as everyone else :-)) soon.

Thanks a lot for a great sprint, now I’m looking forward to the upcoming Ubuntu Online Summit (12-14 Nov)! Make sure you register and add your sessions to the schedule!

29 October, 2014 05:46PM

Randall Ross: Make Software? Come to San Francisco and Check Out Ubuntu on Power!

Do you make software that solves real-world problems? Do you want your software to be instantly available to everyone that's building cloud solutions? Did you know that Ubuntu powers most of the cloud?

Some fun Ubuntu folks will be with their IBM and OpenPower friends just south of San Francisco, California next Wednesday (Nov. 5th, 2014) to talk about the future: Ubuntu on Power.

The event is free, but you'll have to register in advance.

Click the power button to get more information and to register!

Ubuntu Community Manager
Ubuntu on *Power*

Questions? randall AT ubuntu DOT com

29 October, 2014 05:36PM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Testing the Biostar MCP6P M2+ on Linux

This Istana Media hardware test uses a Biostar MCP6P M2+ motherboard with an AMD Athlon 3000+ processor. The full specification of the hardware used: Main hardware: Motherboard: Biostar MCP6P M2+; Processor: AMD Athlon 3000+; RAM: 1 GB DDR2 (Kingston); Hard disks: 80 GB SATA (Seagate) and 40 GB ATA (Maxtor); PSU: Nipon brand, 450 Watt. Additional hardware: card reader

29 October, 2014 11:32AM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Didier Roche: Eclipse and Android ADT support now in Ubuntu Developer Tools Center

Eclipse and Android ADT support now in Ubuntu Developer Tools Center

Now that the excellent Ubuntu 14.10 is released, it's time to focus, as part of our Ubuntu Loves Developers effort, on the Ubuntu Developer Tools Center and cut a new release, bringing numerous exciting new features and framework support!

0.1 Release main features

Eclipse support

Eclipse is now part of the Ubuntu Developer Tools Center thanks to the excellent work of Tin Tvrtković, who implemented the needed bits to bring this to our users! He worked on the test bed as well, to ensure we'll never break unnoticed. That way, we'll always deliver the latest and best Eclipse story on Ubuntu.

To install it, just run:

$ udtc ide eclipse

and let the system set it up for you!


Android Developer Tools support (with eclipse)

The first release introduced Android Studio (beta) support, which is the default in UDTC for the Android category. In addition to that, this release completes the support by bringing Eclipse ADT support.


It can be simply installed with:

$ udtc android eclipse-adt

Accept the SDK license as in the Android Studio case and you're done! Note that from now on, as suggested by a contributor, with both Android Studio and Eclipse ADT we add the Android tools like adb, fastboot and ddms to the user's PATH.

Ubuntu is now a truly first-class citizen for Android application developers as their platform of choice!

Removing an installed platform

As per a feature request on the Ubuntu Developer Tools Center issue tracker, it's now really easy to remove any installed platform. Just enter the same command as for installing, and append --remove. For instance:

$ udtc android eclipse-adt --remove
Removing Eclipse ADT
Suppression done

Enabling local frameworks

As also requested on the issue tracker, users can now provide their own local frameworks, either by setting UDTC_FRAMEWORKS=/path/to/directory and dropping frameworks there, or in ~/.udtc/frameworks/.

In glorious detail, the loading order for duplicated categories and frameworks is the following:

  1. UDTC_FRAMEWORKS content
  2. ~/.udtc/frameworks/ content
  3. System ones.

Note that duplicate filenames aren't encouraged, but they are supported. This also helps with large-scale testing: a basic framework can exercise the install, reinstall, remove and other cases common to all BaseInstaller frameworks.
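The lookup order above can be sketched as a small resolver. This is purely illustrative: the system location path and the function name are assumptions, and UDTC's real implementation lives in Python, not shell:

```shell
# Return the first match for a framework file, honouring the documented
# precedence: $UDTC_FRAMEWORKS first, then ~/.udtc/frameworks/, then the
# system location (path assumed here for illustration).
find_framework() {
    name=$1
    for dir in "${UDTC_FRAMEWORKS:-}" "$HOME/.udtc/frameworks" \
               "/usr/share/udtc/frameworks"; do
        if [ -n "$dir" ] && [ -e "$dir/$name" ]; then
            echo "$dir/$name"
            return 0
        fi
    done
    return 1
}
```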

Other enhancements from the community

A lot of typo fixes have been included in this release thanks to the excellent and regular work of Igor Vuk. A big up to him :) I want to highlight as well the great contributions we got in terms of translation support. Thanks to everyone who helped provide or update de, en_AU, en_CA, en_GB, es, eu, fr, hr, it, pl, ru, te, zh_CN and zh_HK support in UDTC! We are eager to see which language will be next on this list. Remember that the guide on how to contribute to the Ubuntu Developer Tools Center is available here.

Exciting! How can I get it?

The 0.1 release is now tagged and all tests are passing (this release brings 70 new tests). It's available directly in Vivid.

For 14.04 LTS and 14.10, use the ubuntu-developer-tools-center ppa where it's already available.


As you have seen above, we really listen to our community and implement & debate anything that comes through. We are also starting to see great contributions that we accept and merge in. We are just waiting for yours!

If you want to discuss some ideas or want to give a hand, please refer to this blog post, which explains how to contribute and help influence our Ubuntu Loves Developers story! You can also reach us on IRC in #ubuntu-desktop on freenode. We'll likely have an open hangout during the upcoming Ubuntu Online Summit as well. More news in the coming days here. :)

29 October, 2014 11:23AM

hackergotchi for Webconverger

Webconverger

New logo

Back in 2007 a daisy flower was hastily chosen as Webconverger's logo to express simplicity. Over time it has been found to be a little too generic and probably not reflective of our "enterprise" positioning in the PC software market. Therefore we commissioned http://www.hawkenking.com/ to update the logo to something a little more unique and serious looking.

Update rationale

The old formats of Webconverger's logo are:

The new logo is:

  1. The petal symbolises a configured machine, typically deployed via USB stick, hence the shape
  2. The detached petal demonstrates that machines can be easily detached and reattached, i.e. moved between configurations or even taken out of Webconverger services altogether, since we actively protect customers from vendor lock-in
  3. The centre symbolises a customer configuration, e.g. https://config.webconverger.com/client/[CUSTOMER_EMAIL]

The font used is https://www.myfonts.com/fonts/mti/neo-sans/.

Incorporating Webconverger's new icon into the browser chrome

Currently users have no idea what kiosk software they are using. With the logo now embedded, a curious user can hover over the 35x35 flower icon in the top right of the {webconverger,webcnoaddressbar} chrome to see the tooltip "Webconverger". They cannot interact with the icon and cannot click through to browse https://webconverger.com/, for example.

Hopefully our new symbol can give users confidence that they are using a secure & privacy-conscious web kiosk!

29 October, 2014 06:17AM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

From Russia with love: Sofacy/Sednit/APT28 is in town

Yesterday, another cyber espionage group with Russian roots made it to the New York Times headlines again courtesy of FireEye and a new report they published.

FireEye did a pretty good job on attribution and gave some technical indicators; however, they neglected to reference previous work on this threat actor from companies like PwC, Trend Micro, ESET and others.

We have been tracking this threat actor (Sofacy) for a few years, since it first appeared on our radar in one of the CVE-2012-0158/CVE-2010-3333 clusters. Based on the lure content in the malicious documents, as well as the phishing campaigns we have seen in the past, this group tends to target NATO, Eastern European government and military institutions, and defense contractors. We have seen lures related to Ukraine, Chechnya and Georgia, which indicates that one of the group's objectives is gathering geopolitical intelligence.

The techniques used by this group have evolved over the years.

- Spearphishing

Most of the spearphishing campaigns launched by this group involve a malicious Word document exploiting one of the following vulnerabilities:

  • CVE-2010-3333
  • CVE-2012-0158
  • CVE-2014-1761

As described by FireEye and others, this group uses different payloads including a downloader and several second-stage backdoors and implants.

We cover these tools using the following rules with USM:

  • System Compromise, Targeted Malware, OLDBAIT - Sofacy
  • System Compromise, Targeted Malware, Chopstick - Sofacy
  • System Compromise, Targeted Malware, Coreshell - Sofacy
  • System Compromise, C&C Communication, Sofacy Activity

- Web compromises

The group has been seen infecting websites and redirecting visitors to a custom exploit kit that can take advantage of the following vulnerabilities affecting Internet Explorer:

  • CVE-2013-1347
  • CVE-2013-3897
  • CVE-2014-1776

The following rule detects activity related to this exploit kit:

  • Exploitation & Installation, Malicious website - Exploit Kit, Sednit EK

- Phishing campaigns

This actor uses phishing campaigns to redirect victims to Outlook Web Access (OWA) portals designed to impersonate the legitimate OWA site of the victim's company. This technique is used to compromise credentials and access mailboxes and other services within the company.

Inspecting the content of the malicious redirect we can alert on this activity using the following rule:

  • Delivery & Attack, Malicious website, Sofacy Phishing


[1] http://pwc.blogs.com/files/tactical-intelligence-bulletin---sofacy-phishing-.pdf
[2] http://blog.trendmicro.com/trendlabs-security-intelligence/operation-pawn-storm-the-red-in-sednit/
[3] http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp-operation-pawn-storm.pdf
[4] http://www.welivesecurity.com/2014/10/08/sednit-espionage-group-now-using-custom-exploit-kit/
[5] http://malware.prevenity.com/2014/08/malware-info.html
[6] http://www.fireeye.com/resources/pdfs/apt28.pdf


29 October, 2014 04:30AM

October 28, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu App Developer Blog: How to add settings to your scope

A scope can provide persistent settings for simple customizations, such as allowing the user to configure an email address or select a distance unit as metric or imperial.

In this tutorial, you will learn how to add settings to your scope and allow users to customize their experience.



28 October, 2014 10:31PM

Svetlana Belkin: Personal Goals for Ubuntu 15.04

I thought I had written a post for 14.10, but I didn't… I think real life was too stressful for me at that time. I did write one for 14.04.

Between the 14.04 cycle and the 14.10 cycle, I completed one of the six (6) or so goals and I'm working on another. I now understand why I was not able to complete the other four (4) or so goals, and I will explain why:

Ubuntu Doc Team

I learned that the main focus of the Doc team should be the desktop/server docs, not the wiki. Still, there should be some group of people acting as admins of the wiki. What is really needed is recruiting subject-matter experts to update the wiki pages, along with wiki admins to rename and delete pages.

Ubuntu Ohio Team

I learned that most of the LoCos are dead and Ubuntu Ohio is one of them. Or I am not putting enough energy into recruiting people into the LoCo. Or not networking enough.

Ubuntu Women

I learned that we don’t have the resources to run an outreach program. But I also learned that there are other ways to do “outreach”.

Between the time that I started to get involved and now, I joined three teams and was elected as a team leader and as a Membership Board member. I have created new goals, since I failed many of the old ones due to many factors. These are:

Ubuntu Doc Team

Nothing for now.

Ubuntu Ohio Team

Nothing for now.

Ubuntu Women

I have three (3) goals, two of which are sub-goals of the main goal: help get more women involved in Ubuntu and FOSS. The other one is related to the main goal but is a collaboration with another team.

The first two goals are to finish the Orientation Quiz and publish it, and to get Harvest developed enough for anyone to use. The other goal is to start a collaboration with Ubuntu Scientists, since that was one thing brought up while working on the Orientation Quiz.

Ubuntu Scientists

Mainly the collaboration project, and perhaps one of the team's other goals. And also trying to get the team active.

Ubuntu Leadership

Most likely, the goal is to collect information on issues that leaders face and write articles about them. And also to try to get the team active.



28 October, 2014 09:13PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – October 28, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20141028 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt

Status: Vivid Development Kernel

The Vivid kernel has been opened and master-next rebased on the latest
v3.18-rc2 upstream kernel. We have withheld uploading to the archive
until we’ve progressed to a later -rc candidate.
Important upcoming dates:
The Vivid ReleaseSchedule has not yet been posted.

Status: CVE’s

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid -
  • Precise -
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html


    cycle: 10-Oct through 31-Oct
    8-Oct Last day for kernel commits for this cycle
    09-Oct – 10-Oct Kernel prep
    12-Oct – 18-Oct Bug verification & Regression testing.
    19-Oct – 25-Oct Bug verification & Regression testing.
    26-Oct – 01-Nov Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

28 October, 2014 05:50PM

hackergotchi for Blankon developers

Blankon developers

Rahman Yusri Aftian: Creating a Swapfile

Yes, right after installing I realized I forgot to create a swap partition, and the hard disk is set up with LVM too. Oh well, I could just go watch Mahabharata, but before that let's make a swapfile first…

Here we will use two tools: dd and mkswap.

Step #1

sudo dd if=/dev/zero of=/swapfile1 bs=10240 count=5242888

Oops, that size is rather too big; adjust it yourself to whatever you want.
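For reference, dd writes bs * count bytes, so the command above produces roughly a 50 GiB file. A sketch of picking count for a more modest 2 GiB swapfile (the arithmetic only; the dd line is printed, not run):

```shell
# dd file size = bs * count. Choose count for a 2 GiB file with 1 MiB blocks.
bs=$((1024 * 1024))                   # 1 MiB block size
target=$((2 * 1024 * 1024 * 1024))    # 2 GiB total
count=$((target / bs))
echo "sudo dd if=/dev/zero of=/swapfile1 bs=$bs count=$count"
```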

Step #2

sudo mkswap /swapfile1

Step #3

sudo chown root:root /swapfile1

sudo chmod 0600 /swapfile1

Step #4

sudo swapon /swapfile1

Step #5

sudo nano /etc/fstab

and add this line:

/swapfile1 swap swap defaults 0 0

After that, save and test it with:

free -m

Enjoy the result. :D


28 October, 2014 01:53PM

Sokhibi: Problems with the Samsung Harappa-12 Motherboard

This article is still related to the article Mengenal Front Panel Motherboard (Getting to Know the Motherboard Front Panel), as it uses the same motherboard. It is no longer a secret that motherboards from built-up (OEM) PCs often have characteristics that differ from motherboards in assembled PCs. Likewise, the Samsung Harappa-12 motherboard differs slightly from an ordinary motherboard; if this

28 October, 2014 02:53AM by Istana Media (noreply@blogger.com)

Sokhibi: MSI CX420 @P6100 Laptop Review

A few months ago Istana Media received an MSI CX420 laptop for sale. At first we thought the laptop already used an i-series processor, but after closer inspection it turned out it did not: when searching the internet for the keyword MSI CX420, the results found all use i-series processors, and the difference turns out to lie in the suffix, @P6100. The following

28 October, 2014 02:51AM by Istana Media (noreply@blogger.com)

Iang: JSON formatter bookmarklet

At work I need to read JSON documents pretty often, and making them pretty helps me pretty much. Last time I showed how to prettify a JSON document using a Python script. But since I work mostly in a browser, having to switch from browser to terminal is a bit cumbersome. There are a lot of online JSON formatters out there, but making a remote request just to do some formatting feels like overkill. Also, the JSON documents I need to format can be sensitive, so using an online service is not a good idea anyway.

Then I realized that browsers nowadays have a built-in JSON formatter function: JSON.stringify. I just needed a way to make using it a bit easier, so I made the following bookmarklet.

Prettify JSON

Drag the link above to the bookmark bar in your browser. When you need to format a JSON document, click it, paste the JSON document (make sure it's valid), and finally click the button there. Voila!

28 October, 2014 02:29AM

October 27, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 389

Welcome to the Ubuntu Weekly Newsletter. This is issue #389 for the week October 20 – 26, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Mathias Hellsten
  • Stephen Michael Kellat
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

27 October, 2014 10:23PM

Nicholas Skaggs: Sprinting in DC: Friday

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

Friday brings an end to an exciting week, and the faces of myself and those around me reflect the discussions, excitement, fun and lack of sleep this week has entailed.

The first session of the day involved hanging out with the QA team while they heard feedback from various teams on issues with quality and process within their projects. It's always fun to hear about what causes different teams the most trouble when it comes to testing.

Next I spent some time interviewing a couple of folks for publishing later. In my case I interviewed Thomi from the QA team and Zoltan from the SDK team about the work going on within their teams and how the last cycle went. The team as a whole has been conducting interviews all week. Look for these interviews to appear on youtube in the coming weeks.

Thursday night while having a look through a book store, I came across an ad for ubuntu in Linux Voice magazine. It made me smile. The dream of running ubuntu on all my devices is becoming closer every day.

I'd like to thank all the community core app developers who joined us this week. Thanks for hanging out with us, providing feedback, and most of all for creating the wonderful apps we have for the ubuntu phone. Your work has helped shape the device and turn it into what it is today.

Looking back over the schedule there were sessions I wish I had been able to attend, and it was wonderful catching up with everyone. Sadly my flight home prevented me from attending the closing session and presumably getting a summary of some of these sessions. I can say I was delighted to talk and interact with the unity8 team on the next steps for unity8 on the desktop. I trust next cycle we as a community can do more around testing their work.

As I head to the airport for home, it's time to celebrate the release of utopic unicorn!

27 October, 2014 08:00AM by Nicholas Skaggs (noreply@blogger.com)

Valorie Zimmerman: Start your Season of KDE engines!

Season of KDE (#SoK2014) was delayed a bit, but we're in business now:


Please stop by the ideas page if you need an idea. Otherwise, contact a KDE devel you've worked with before, and propose a project idea.

Once you have something, please head over to the Season of KDE website: https://season.kde.org and jump in. You can begin work as soon as you have a mentor sign off on your plan.

Student application deadline: Oct 31 2014, 12:00 am UTC - so spread the word! #SoK2014

Go go go!

27 October, 2014 07:32AM by Valorie Zimmerman (noreply@blogger.com)

Valorie Zimmerman: Testing A11y in Plasma 5

I made the jump on all available computers, and am now running Plasma5 on the foundation of Kubuntu 14.10 everywhere. Once the upgrade work was done, I filed a few bugs, and then wanted to test accessibility (often abbreviated a11y), since Qt5 has a11y built-in.

Jpwhiting in #kde-accessibility found Frederik's blog: http://blogs.fsfe.org/gladhorn/, where I learned that the key is setting up the environment to expose KDE software to the mostly-GNOME a11y applications. For now this must be done via the command line:

gsettings set org.gnome.desktop.a11y.applications screen-reader-enabled true

It is a work-around, but it works!

Once orca is installed, and the environment is set up, you will hear each letter as you type, and when you open menus, they are read aloud. I understand it will read IRC channels aloud, but I don't want this! Some people use this method to study while they do something else, which sounds cool.

KDE developers, please test your ported code for accessibility. If you don't have time, please ask for testers to file bugs.

Distro testers, please test your install screens and upgrade paths for, at the very least, successful screen reading. There really is no excuse now to keep blind users from using our wonderful software.

Qt 5 is accessible! Are we using it to serve our blind users?

27 October, 2014 07:09AM by Valorie Zimmerman (noreply@blogger.com)

hackergotchi for TurnKey Linux

TurnKey Linux

Tips for the Object Oriented Programming novice

The following is written for programmers who don't really understand object oriented programming yet. They probably understand the language semantics, but don't really understand how to use them correctly.

If you find yourself misusing object oriented semantics that probably means you don't have the skills to develop good software. This is a problem because bad software is much harder to develop and even harder to maintain.

I'll try to avoid reinventing the wheel. Most likely the mistakes you are making are so common that they have been immortalized as anti-patterns:

"Anti-patterns, ... [are] specific repeated practices that appear initially to be beneficial, but ultimately result in bad consequences that outweigh the hoped-for advantages."

In my experience, the following anti-patterns are especially common amongst Object Oriented beginners:

  • Big ball of mud

  • God object

  • Cargo cult programming

    Cargo cult programming happens when you mimic some of the superficial externalities of OOP without a real understanding of the underlying concepts.

    "The term 'cargo cult', as an idiom, typically refers to aboriginal religions which grew up in the South Pacific after World War II. The practices of these groups centered on building elaborate mock-ups of airplanes and military landing strips in the hope of summoning the god-like airplanes that had brought marvelous cargo during the war."

It's very important to understand that Object Oriented Programming isn't magic. Using the syntax / language facilities isn't enough to make your code truly object oriented, and it certainly isn't enough to make your code any good.

One of the most important principles you need to understand and master is called Separation of concerns. Without it, your programming methodology will be deeply flawed.

Another common pitfall is Programming by permutation without constant code refactoring. This usually results in code lacking in adequate abstraction which gradually accumulates a huge amount of accidental complexity.

Eventually bad programming tends to solidify into a Lava flow: "Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable consequences" is what you get when you put such code into production.

Dear novice, let me assure you that the smell of your code has nothing to do with your intelligence and everything to do with your lack of experience. All beginners make these mistakes, because they are attracted to the short-term rewards while insufficiently aware of the long-term problems these anti-patterns introduce (famously called "the tar pit" in Brooks's The Mythical Man-Month).

In fact, in the early days of software engineering, the entire field made these mistakes, resulting in the routine catastrophic failure of even the most well funded software development projects. This period was called the software crisis: "The causes of the software crisis were linked to the overall complexity of the software process and the relative immaturity of software engineering as a profession."

Anyhow, a lot of extremely smart people thought long and hard about these problems for many years and came up with various processes and methodologies that can help us tame the software crisis.

That means you don't have to reinvent the wheel here or discover everything on your own, but can stand on the shoulders of giants.

However you probably won't "get it" just by reading a few Wikipedia articles. You probably need some good old fashioned theory. In other words, you need to hit the books. I couldn't possibly hope to reproduce the quality and breadth you can find in the best literature on the subject. So I did the next best thing and compiled a list of a few good books:

  • The Object-Oriented Thought Process
  • Refactoring: Improving the Design of Existing Code
  • Design Patterns: Elements of Reusable Object-Oriented Software
  • Code Complete
  • The Mythical Man-Month

A book, even a really good one, is still no replacement for quality mentoring. After you do your homework, try to get a more experienced programmer to review your program's high-level design before you actually implement it. One way of doing that is to plan out your object schema in advance using CRC cards.

Here's a related, excellent tutorial on teaching object-oriented thinking to procedural programmers using CRC cards:


It starts out: "It is difficult to introduce both novice and experienced procedural programmers to the anthropomorphic perspective necessary for object-oriented design."

In other words: relax, at least you're in good company.

27 October, 2014 05:05AM by Liraz Siri

hackergotchi for Blankon developers

Blankon developers

Iang: A free SSL certificate from Cloudflare

After a year, the free SSL certificate I got from StartSSL has finally expired. To keep going I would of course have to request a renewal and replace the certificate currently in use. But getting anything done at StartSSL is a bit... er... convoluted :/ so I could not be bothered xD

However... since Cloudflare now provides free SSL certificates, I decided to just move, haha :D As it happens, this domain is already handled by Cloudflare, so all I should need to do is route all traffic through Cloudflare's servers first. Not a problem :D


Before that, though, I also had to enable SSL support in Cloudflare. Of the four SSL modes available, I chose Full SSL so that the connection from Cloudflare to the server hosting this blog also keeps using HTTPS. Which certificate? The expired one, of course, hihi :D


Using Cloudflare's free SSL certificate is not without risks. One of them is that visitors' browsers must already support SNI (Server Name Indication, not Standar Negara Indonesia :P). Although many browsers now support SNI, Cloudflare says only about 52% of visitors from Indonesia use an SNI-capable browser (eh?).

On top of that, Cloudflare registers more than one domain on the same certificate. It is a bit like several people sharing one ID card that lists the name of everyone taking part.


The upside? You get a wildcard certificate! Subdomains can be served over HTTPS too, without any extra certificate!

27 October, 2014 02:29AM

October 26, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Randall Ross: Why Smart Phones Aren't - Reason #4

I used to believe that computer mediated communication made the world a better place...

Have you ever noticed a couple sitting together not being "together"? Or perhaps a group of friends eating in a restaurant or enjoying drinks in a bar, but largely not interacting with one another? In these situations, the people that seem to be the centre of the event are the people that aren't there.

"Smart" phones, you make me ill. You are incentivizing human disconnection. You are weakening the bonds between people that inhabit the same space.

You are the ultimate expression of design fail.

You see, computer mediated communications should not have a distance bias. Why only mediate conversation between people that are challenged by distance separation? By doing so, you are creating, or at least accelerating, a culture of "not being there."

You see, the most important aspect of being beside another human being is enjoying that person in the moment, with full attention. Phone, you are just too dumb to realize it. Or are you simply conveniently ignoring it for the sake of a sociopathic business model?

Guess what? You know the two people in the photo are beside each other. You also know that they are in each other's contact list. You may even know that they're on a beach. It's a romantic place. Put two and two together, please.

Figure it out, phone! For the sake of humanity, this is not the 80's. It's time to wise up. Prince Ea and I and our posse are on to you...


Our best chance at a phone that respects humanity is here:

More reasons "smart" phones aren't are here:

image by Leo Reynolds

26 October, 2014 10:18PM

Colin Watson: Moving on, but not too far

The Ubuntu Code of Conduct says:

Step down considerately: When somebody leaves or disengages from the project, we ask that they do so in a way that minimises disruption to the project. They should tell people they are leaving and take the proper steps to ensure that others can pick up where they left off.

I've been working on Ubuntu for over ten years now, almost right from the very start; I'm Canonical's employee #17 due to working out a notice period in my previous job, but I was one of the founding group of developers. I occasionally tell the story that Mark originally hired me mainly to work on what later became Launchpad Bugs due to my experience maintaining the Debian bug tracking system, but then not long afterwards Jeff Waugh got in touch and said "hey Colin, would you mind just sorting out some installable CD images for us?". This is where you imagine one of those movie time-lapse clocks ... At some point it became fairly clear that I was working on Ubuntu, and the bug system work fell to other people. Then, when Matt Zimmerman could no longer manage the entire Ubuntu team in Canonical by himself, Scott James Remnant and I stepped up to help him out. I did that for a couple of years, starting the Foundations team in the process. As the team grew I found that my interests really lay in hands-on development rather than in management, so I switched over to being the technical lead for Foundations, and have made my home there ever since. Over the years this has given me the opportunity to do all sorts of things, particularly working on our installers and on the GRUB boot loader, leading the development work on many of our archive maintenance tools, instituting the +1 maintenance effort and proposed-migration, and developing the Click package manager, and I've had the great pleasure of working with many exceptionally talented people.

However. In recent months I've been feeling a general sense of malaise and what I've come to recognise with hindsight as the symptoms of approaching burnout. I've been working long hours for a long time, and while I can draw on a lot of experience by now, it's been getting harder to summon the enthusiasm and creativity to go with that. I have a wonderful wife, amazing children, and lovely friends, and I want to be able to spend a bit more time with them. After ten years doing the same kinds of things, I've accreted history with and responsibility for a lot of projects. One of the things I always loved about Foundations was that it's a broad church, covering a wide range of software and with a correspondingly wide range of opportunities; but, over time, this has made it difficult for me to focus on things that are important because there are so many areas where I might be called upon to help. I thought about simply stepping down from the technical lead position and remaining in the same team, but I decided that that wouldn't make enough of a difference to what matters to me. I need a clean break and an opportunity to reset my habits before I burn out for real.

One of the things that has consistently held my interest through all of this has been making sure that the infrastructure for Ubuntu keeps running reliably and that other developers can work efficiently. As part of this, I've been able to do a lot of work over the years on Launchpad where it was a good fit with my remit: this has included significant performance improvements to archive publishing, moving most archive administration operations from excessively-privileged command-line operations to the webservice, making build cancellation reliable across the board, and moving live filesystem building from an unscalable ad-hoc collection of machines into the Launchpad build farm. The Launchpad development team has generally welcomed help with open arms, and in fact I joined the ~launchpad team last year.

So, the logical next step for me is to make this informal involvement permanent. As such, at the end of this year I will be moving from Ubuntu Foundations to the Launchpad engineering team.

This doesn't mean me leaving Ubuntu. Within Canonical, Launchpad development is currently organised under the Continuous Integration team, which is part of Ubuntu Engineering. I'll still be around in more or less the usual places and available for people to ask me questions. But I will in general be trying to reduce my involvement in Ubuntu proper to things that are closely related to the operation of Launchpad, and a small number of low-effort things that I'm interested enough in to find free time for them. I still need to sort out a lot of details, but it'll very likely involve me handing over project leadership of Click, drastically reducing my involvement in the installer, and looking for at least some help with boot loader work, among others. I don't expect my Debian involvement to change, and I may well find myself more motivated there now that it won't be so closely linked with my day job, although it's possible that I will pare some things back that I was mostly doing on Ubuntu's behalf. If you ask me for help with something over the next few months, expect me to be more likely to direct you to other people or suggest ways you can help yourself out, so that I can start disentangling myself from my current web of projects.

Please contact me sooner or later if you're interested in helping out with any of the things I'm visible in right now, and we can see what makes sense. I'm looking forward to this!

26 October, 2014 09:55PM

hackergotchi for Maemo developers

Maemo developers

2014-10-21 Meeting Minutes

Meeting held 2014-10-21 on FreeNode, channel #maemo-meeting (logs)

Attending: (xes), Gido Griese (Win7Mac),
Jussi Ohenoja (juiceme), Peter Leinchen (peterleinchen)

Partial: Sicelo Mhlongo (sicelo)

Absent: Philippe Coval (RzR)

Summary of topics (ordered by discussion):

  • Mailing list moderation
  • Dead/old/obsolete content on entry page http://maemo.org
  • Ongoing tasks: referendum, Code of Conduct, karma, e.V. sub pages, letter to Jolla

Topic (Mailing list moderation):

  • Jussi found out accidentally that the mailing list maemo-community bounces feedback back to non-subscribed senders, such as "message is suspended until a moderator checks the content". But the mails end up in a queue that has not been checked for the last 1.5 years.
  • There was a short discussion about the way to handle:
    1.) we change the message to "posting is forbidden from unregistered accounts, please see bla bla bla..."
    2.) we actually get someone to check&moderate the postings. (council for example)
  • At the end juiceme took over the responsibility to check those logs from now on and moderate the mailing list(s).

Topic (Dead/old/obsolete content on entry page http://maemo.org):

  • On the top page of m.o there is very old content, like the announcements from 2013/2010, and furthermore the link to the "abandoned" Cordia project.
    There are also the Nokia links (which in fact work and need to be there, at least the one to http://www.nokia.com/global/wayfinder).
    The one to https://developer.nokia.com works, but is not fully related to Maemo (rather to MeeGo, and therefore now to MS).
  • So the council/board should decide what to do with that Cordia link, and whether we keep the announcements ticker on the entry page for e.V. reasons, or remove it and use t.m.o as the announcement platform (as they belong together now).

Topics (Ongoing tasks: referendum, Code of Conduct, karma, e.V. sub pages, letter to Jolla):

  • Freemangordon showed up late and asked about contacting Jolla for fre(e)mantle source code support.
  • These topics were again shifted to be discussed in next week's meeting.

Action Items:
  • -- old items:
    • Check if karma calculation/evaluation is fixed. - Karma calculation should work, only wiki entries (according to Doc) not considered. To be cross-checked ...
    • NielDK to prepare a draft for letter to Jolla. - Obsolete
    • Sixwheeledbeast to clarify the CSS issue on wiki.maemo.org with techstaff. - Done
    • juiceme to create a wording draft for the referendum (to be counterchecked by council members). - See
    • Everybody to make up their own minds about referendum and give feedback.
    • Peterleinchen to announce resignation of DocScrutinizer*/joerg_rw from council. Done
  • -- new items:
    • Next week's tasks: referendum, karma check, voting for Code of Conduct, sub pages on m.o for e.V., abandoned link/announcement ticker

26 October, 2014 09:19PM by Peter Leinchen (peterleinchen@t-online.de)

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Dangers of decorator-based registries in Python

So Flask has a really convenient mechanism for registering handlers, actions to be run before/after requests, etc. Using decorators, Flask registers these functions to be called, as in:

@app.route('/')
def homepage_handler():
    return 'Hello World'

@app.before_request
def do_something_before_each_request():
    pass

This is pretty convenient, and works really well, because it means you don't have to list all your routes in one place (like Django requires), but it comes with a cost. You can end up with Python modules that are needed only for the side effects of importing them. No functions from those modules are called directly from your other modules, but they still need to be imported somewhere to get the routes registered.

Of course, if you import a module just for its side effects, then pylint won't be aware that you need the import, and will helpfully suggest that you remove it. This generally isn't too bad: if you drop the import of a module with views defined in it, those views will just fail, you'll notice quickly, and re-add the import.

On the other hand, if you're using a before_request function to, say, provide CSRF protection, then you'll have a serious problem. Of course, that's the case I found myself in. So you'll want to make sure that can't happen: either reference something from the module explicitly, or disable that pylint warning.

26 October, 2014 06:51PM

Colin King: even more stress in stress-ng

Over the past few weeks in spare moments I've been adding more stress methods to stress-ng, ready for Ubuntu 15.04 Vivid Vervet. My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled, repeatable manner.

I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:
  • Bit manipulation: bitops, crc16, hamming
  • Integer operations: int8, int16, int32, int64, rand
  • Floating point:  long double, double,  float, ln2, hyperbolic, trig
  • Recursion: ackermann, hanoi
  • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
  • Hashing: jenkin, pjw
  • Control flow: jmp, loop
The intention was to have a wide enough eclectic mix of CPU exercising tests to cover the range of typical operations found in computationally intense software. Use the new --cpu-method option to select a specific CPU stressor, or --cpu-method all to exercise all of them sequentially.
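As a sketch of how the option is used on the command line (the worker count and timing below are arbitrary examples, and the run is skipped if stress-ng is not installed):

```shell
if command -v stress-ng >/dev/null 2>&1; then
    # two workers hammering the fibonacci CPU method for five seconds,
    # reporting only the relevant bogo-op metrics at the end
    stress-ng --cpu 2 --cpu-method fibonacci --timeout 5s --metrics-brief
    status=$?
else
    echo "stress-ng not installed; nothing to do"
    status=0
fi
```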

I've also added more generic system stress methods too:
  • bigheap - re-allocs to force OOM killing
  • rename - rename files rapidly
  • utime - update file modification times to create lots of dirty file metadata
  • fstat - rapid fstat'ing of large quantities of files
  • qsort - sorting of large quantities of random data
  • msg - System V message sending/receiving
  • nice - rapid re-nicing processes
  • sigfpe - catch rapid division by zero errors using SIGFPE
  • rdrand - rapid reading of the Intel random number generator using the rdrand instruction (Ivy Bridge and later CPUs only)
Other new options:
  • metrics-brief - this dumps out only the bogo-op metrics that are relevant for just the tests that were run.
  • verify - this will sanity check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch any errors on a hot machine that has errors in the hardware. 
  • sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially each for a specified amount of time. 
  • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
The tool also builds and runs on Debian kFreeBSD and GNU HURD kernels although some stress methods or stress options are not included due to lack of support on these other kernels.
The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.

For more details, visit here or read the manual.

26 October, 2014 03:56PM by Colin Ian King (noreply@blogger.com)

hackergotchi for Blankon developers

Blankon developers

hackergotchi for Maemo developers

Maemo developers

Agreement between Nokia Corporation and Hildon Foundation announced

Nokia Corporation (“Nokia”) and Hildon Foundation (“Hildon”) have announced an agreement regarding assigning Nokia’s Maemo trademarks, domain names and trademark applications to Hildon. The agreement includes the Maemo community website, www.maemo.org.

Nokia has been the owner of the features of the Maemo brand that have been used in connection with mobile devices and software distributed by Nokia, as well as supporting the maintenance of the Maemo Website for the Maemo community. Nokia has transferred the Maemo brand features to Hildon, who will continue to support the Maemo community.

Hildon shall assume the full responsibility and liability for the maintenance and support of all the activity that is and will be on-going on the Maemo Website. For clarity, Hildon is not responsible for customer support for Nokia mobile devices using Maemo, such as N900 and/or N9. Following the acquisition of substantially all of Nokia’s Devices & Services business by Microsoft in April 2014, Microsoft is now responsible for the support of Nokia branded mobile devices. Local contact details can be found at www.nokia.com/global/wayfinder.


26 October, 2014 01:34PM by Hildon Foundation (board@hildonfoundation.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: Ubuntu MATE & Folder Color

Now you can use Folder Color in Ubuntu MATE 14.10 too!

Caja & Folder Color
Just run these commands in a Terminal:
sudo add-apt-repository ppa:costales/folder-color
sudo apt-get update
sudo apt-get install python-caja gir1.2-caja caja libgtk2.0-bin folder-color-caja

If your Ubuntu is 32bits:
sudo cp /usr/lib/i386-linux-gnu/girepository-1.0/Caja-2.0.typelib /usr/lib/girepository-1.0/

If your Ubuntu is 64bits:
sudo cp /usr/lib/x86_64-linux-gnu/girepository-1.0/Caja-2.0.typelib /usr/lib/girepository-1.0/

Finally, restart your file browser:
caja -q

PS: If you're using the Nemo file browser (Usually with Cinnamon) see how to install here.

26 October, 2014 10:16AM by Marcos Costales (noreply@blogger.com)

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Commitment

This piece is my personal opinion about people's commitment to what they have said. I have put it together from a true story I experienced, with, of course, a few sweetening sentences added that in reality were never actually spoken. A while back an unemployed friend of mine, after Friday prayers, mentioned to me that if he got...

26 October, 2014 02:54AM by Istana Media (noreply@blogger.com)

Iang: Reverse Proxy from HTTP to HTTPS with Apache

The other day, I had a problem accessing an HTTPS site from a Python script. Since I had no time to figure out why (it was for a personal project anyway), I decided to set up a reverse proxy using Apache. However, unlike a commonly configured reverse proxy, this one makes an HTTPS site available as an HTTP site.

This is what I needed to put in my Apache config.

    SSLProxyEngine On
    ProxyPass / https://that.secure.site/
    ProxyPassReverse / https://that.secure.site/

I also had to enable mod_proxy, mod_proxy_http (which handles proxying to http: and https: backends) and mod_ssl, which can be done easily (on a Debian-based system) by running the following command

# a2enmod proxy proxy_http ssl

Then reload or restart Apache

# service apache2 reload

The most important bit in the config above is SSLProxyEngine On. Without this the proxy would not work!

26 October, 2014 02:29AM

October 25, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: The blue Unicorn set free!

After the success of their first Long-Term-Support (LTS) version in April this year, the Head of the Developer Team, Julien Lavergne, has finished work on the Utopic Unicorn, which can now be downloaded at https://help.ubuntu.com/community/Lubuntu/GetLubuntu.

Acting Release Manager, Walter Lapchynski, shortly after the release: "This cycle we mainly focused on fixing known bugs. But", he adds, "there is a downside, too: due to several serious bugs, we had to skip PPC versions of the Unicorn. We recommend using the LTS version for now and do hope that we are able to present a PPC version in April next year. For the moment we are still working on our plans to implement LXQt in either 15.04 or 15.10."

Read more »

25 October, 2014 10:05PM by Rafael Laguna (noreply@blogger.com)

Joe Liau: Documenting the Death of the Dumb Telephone – Part 3: Unintelligiable

R2-D2 is not dumb. But my phone is. “[It] talks in maths. [It] buzzes like a fridge. [It's] like a detuned radio.”1

My phone has a communication problem. It beeps and boops, and sometimes screams to let me know that something is going on, but something is missing. It’s all a bunch of noise. What exactly are you telling me, phone? Yes, there are custom notifications to a certain degree, but normally they fall under the rules of a third party. How do I know the difference between an emergency, an update, or an unimportant piece of information without constantly having to look at my phone? The answer is NOT a watch. In that case, maybe my phone shouldn’t have notifications at all!

Is it possible to tell me who is contacting, by what means, the type of information, and deliver the message at an appropriate time and in an appropriate fashion?
Is it possible to communicate with my digital, social, and spatial environments and tell me when my ship’s hyperdrive has been deactivated BEFORE I attempt to make the jump to lightspeed?

A *smart* phone could do that.

Dumb phone, you can beep and boop all you want, but you’re not the phone I’m looking for. Into the garbage chute!


[1] Radiohead – Karma Police

25 October, 2014 08:35PM

hackergotchi for Blankon developers

Blankon developers

Herpiko Dwi Aguno: Why I Like Psychology and Hate Psychologists

It's not a field I have studied; I just enjoy reading about psychology. Psychological disorders are fascinating.

But I always panic when I meet a psychologist, because they seem to know far too much about me. It feels like they are hard to fool. That's basically a privacy violation, right? #loh

25 October, 2014 02:39PM

Herpiko Dwi Aguno: Migrating the Eplaq Database to GNU/Linux

Warning: Eplaq is officially released for the Windows operating system. Although technically this migration changes nothing, the author takes no responsibility for any errors that may occur.


Right, this is a follow-up to this earlier note.

The story is that there have been many complaints since that server moved to virtualization. Never mind Windows Server; we were running a pirated Windows 7, which is really a desktop OS. Quite a stretch, right? The main problem for me is that Windows is unpleasant to administer remotely: natively there is no SSH, SCP, or the other usual Unix utilities. There are plenty of other reasons why I was so eager to move to GNU/Linux, but let's get straight to it.

Eqvet is easier to migrate to GNU/Linux because its database root password is already known. Eplaq is not: its database user and password are protected, so we cannot export and import the database the normal way.

But what if I still insist on moving it to GNU/Linux?

Use brute force: just carry the raw SQL files over as they are. The current version of Eplaq (as of this writing) uses MySQL 4.1. That is ancient, and no longer available in today's common distributions. So prepare any distribution you like and remove every package related to X. Here I used Ubuntu 12.04.

$ sudo apt-get remove xserver-xorg-core


Download MySQL 4.1.22 and compile it. I installed it into /usr/local/mysql, so:

wget https://downloads.mariadb.com/archives/mysql-4.1/mysql-4.1.22.tar.gz
tar -xvf mysql*
cd mysql*
./configure --prefix=/usr/local/mysql/


In my case, configure failed with the following error:

No curses/termcap library found


Just install the libncurses5-dev package from the official repositories. Next, set everything up until MySQL is ready to use (including the root password and so on). Once MySQL is fine, stop the service, then copy the MySQL data files from the current (Windows) Eplaq installation. They should be under C:\Program Files\MySQL\MySQL-versi\data\. Copy the entire contents of that data directory to /usr/local/mysql/var/

After copying, fix the permissions:

$ sudo chown -R mysql:mysql /usr/local/mysql/var/


Start the MySQL service again. Yay, the Eplaq database is now up on GNU/Linux (even though we cannot log in).

While setting up MySQL for the first time, you may have copied a my.cnf file to /etc/, or you may be using the configuration file from a previous install (if you had earlier installed MySQL from the official repositories rather than version 4.1). Pick one, delete the other, then edit it. There are some important lines to add here:

This one concerns case sensitivity in table names. Eplaq was developed on Windows, where the developers apparently did not care much about case differences between the queries in the code and the actual table names in the database. On a Unix-like system, that mismatch is not tolerated.



Even though we cannot log into MySQL itself (the user password is protected centrally), the database is fortunately already configured so that it can be accessed remotely from anywhere.



The line above is important for performance; read more here: http://dev.mysql.com/doc/refman/5.0/en/host-cache.html
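The actual configuration lines did not survive in this copy of the post, so here is a plausible [mysqld] fragment matching the three descriptions (table-name case folding, remote access, the host-cache/DNS link); treat it as an assumption, not the author's exact file:

```ini
# Hypothetical reconstruction of the my.cnf lines described above.
[mysqld]
# Windows-developed code mixes table-name case; fold names to lowercase
lower_case_table_names = 1

# allow the Eplaq clients to connect from any host
bind-address = 0.0.0.0

# skip reverse-DNS lookups on connect (see the host-cache link above)
skip-name-resolve
```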


Now let's switch over to Windows for a moment (the client / service desk PC). In the Eplaq application, point the database at that server's IP. Voila! :D

Updating the Database from Headquarters

One obstacle that may arise after migrating to GNU/Linux is applying database updates from headquarters. So far, updates have been released as an executable (*.exe) that copies a number of raw SQL files (*.frm) into the right place.

So here is a possible procedure to work around this obstacle:

0. An update is released

1. Stop the MySQL service on GNU/Linux and copy the raw SQL files to the virtual Windows machine.

2. Run the update in the virtual Windows machine.

3. Copy the raw SQL files from the virtual Windows machine back to GNU/Linux.

4. Start the MySQL service on GNU/Linux.

5. Done. Tedious, right?
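The steps above can be sketched as a small shell script. The Windows VM hostname and paths here are hypothetical, and scp is only one possible way to move the files (the post doesn't say how they travel between host and VM), so by default the script only echoes what it would do:

```shell
#!/bin/sh
# Sketch of the update round trip described above; adjust names to your setup.
set -e

DATADIR=/usr/local/mysql/var   # MySQL 4.1 data directory on the GNU/Linux box
WINVM=winvm                    # hypothetical hostname of the virtual Windows machine
RUN=${RUN:-echo}               # set RUN= (empty) to really execute the commands

$RUN sudo service mysql stop                               # 1. stop MySQL on GNU/Linux
$RUN scp -r "$DATADIR" "$WINVM:/eplaq-data"                #    copy the raw files out
echo ">> now run the official *.exe updater inside $WINVM" # 2. update on Windows
$RUN scp -r "$WINVM:/eplaq-data" "$DATADIR"                # 3. copy the files back
$RUN sudo chown -R mysql:mysql "$DATADIR"                  #    fix permissions again
$RUN sudo service mysql start                              # 4. restart MySQL
```

Run it once as-is to review the plan, then with `RUN=` to execute for real.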

This is why I deliberately kept the virtual Windows machine around, so it can be used whenever needed.


Update:

The current version of Eplaq can now update the database directly from the application, modifying the database in place; it no longer uses a separate executable. Well, who knows how it will work in the future. :P

25 October, 2014 12:55PM

hackergotchi for Ubuntu developers

Ubuntu developers

Sujeevan Vijayakumaran: Review of Ubucon 2014 Germany

Last weekend the German Ubucon took place in the town of Katlenburg-Lindau. It was the 8th Ubucon in Germany, and the third time that I attended an Ubucon both as a visitor and as a speaker.


It was the second time that I participated in organising an Ubucon; last year I was part of the organisation team for the event in Heidelberg. This time the organisation was rather „silent“, in my opinion sometimes too silent on the mailing list. The town where the event took place was rather small, so there were fewer speakers and also fewer visitors compared to the last two years. At first I didn't expect the event to be great, but luckily I was wrong!

The event


The first day of the event was Friday. All visitors and speakers got a name badge with their full name and nickname at the front desk. Last year my name was actually too long; this time only one character was missing. At least I'm used to mistakes in my name. :-)

The opening keynote was held by Torsten Franz, who was also the head of the organisation team. After that, he gave a talk about „10 years Ubuntu, 10 years Community“. Later, some of the visitors went to the first social event, which took place at a castle next to the school. I personally didn't go.


The second day started at 10 o'clock in the morning. It was the first time that I gave a workshop, „Git for Beginners“, which also started at that time. At the beginning we had a few issues with the Wi-Fi. This also affected my workshop, because it took the participants a rather long time to download and install git. I therefore changed a few things in my workshop so that afterwards the participants didn't need a working internet connection. I had planned about 3 hours, but we finished after about 2.5.

For the rest of the day I didn't attend any talks; I preferred talking to all the other nice people :-). In the evening we had two social events. A large group went to the „Theater der Nacht“ („Theatre of the Night“), while the smaller rest stayed at the school, where two people played live music. The live music was quite good, but everyone who went to the theatre said it was really great. It seems I missed something. Bernhard took a few really nice photos in there.


On Sunday I attended only two talks, the first one about LVM, the other about systemd. Both were held by Stefan J. Betz, and they were really informative and also a bit funny.

In the afternoon the Ubucon ended. Really many people helped to clean up and pack everything, so many could leave earlier than expected.

The location

The location was great! I didn't expect a primary school to be a good place for an Ubucon, but it is! The technical infrastructure was really good: the school had several „Smartboards“ with projectors. In the entrance area there was a big hall where you could sit and talk when not listening to a talk. In this hall there were also several computers running different Linux distributions and desktop environments.

It was the first time that we had a gaming lounge. There were two rooms containing four Ubuntu PCs with large TVs and also two table-football tables. The idea was great, the rooms were nice, and many people played games there. I hope we will have a similar gaming lounge at future Ubucons.

All speakers got a nice gift bag from the local organisation team, mainly containing items from the region. In my bag there were a few sausages, wine, beer and a sauce. Personally I don't eat or drink that stuff, but it was a really good idea and gesture!

At all our Ubucons, the entrance fee of 10€ covers food and drinks. In the last few years we had only two or three different types of bread roll. This time we had bread rolls too, but also Bockwurst and different types of soup. All of them were really tasty, and everybody had a wider choice of something they liked.


This year's Ubucon was great! Compared to last year's we had fewer attendees, but this time the organisation team in Katlenburg was really good. They had several really good ideas, like the gaming lounge and the gift bags for all speakers.

I simply hope that next year's Ubucon will be as good as this year's. The place is not fixed yet; we are going to look for another location. By moving the Ubucon every year, attendees get to see different cities and meet different new nice people. The latter is my main reason to attend and help organize the Ubucon.

If you're looking for a few nice photos of this year's Ubucon, have a look here. Bernhard Hanakam took some really good ones.

25 October, 2014 12:00PM

hackergotchi for Grml developers

Grml developers

Frank Terbeck: Names - You need one.

I'm a bit of a fan of Hewlett-Packard, mainly for their excellent hand-held calculators and their equally awesome laboratory equipment, like function generators, oscilloscopes and signal analyzers. They were known for their quality. In 1999 it was decided to throw that good name away and move all non-computing products to a spin-off company called “Agilent”. By now, that name basically rings as well as the old HP one.

But guess what: someone at Agilent seems to have grown weary of that name as well. I was looking for Agilent's entry-level oscilloscopes and couldn't find any at the usual distributors. What came up were scopes by a company called “Keysight”. I was like “Wat. Are those knock-offs!?” But no. That company is a spin-off of Agilent that takes care of the electronic measurement products. Meh.

25 October, 2014 10:53AM

hackergotchi for Ubuntu developers

Ubuntu developers

Joe Liau: Turning the Page with Cory Doctorow

I’m still buzzing from this morning. No, it’s not because of the “crystal meth”1; nor is it because of the amazing cold brew coffee2 that’s sitting in my fridge. I’m on a mental high from listening to a great mind. This morning I went to see Cory Doctorow at the Vancouver Writer’s Fest, and I’m a better person because of it.

I’ll admit that I wasn’t initially too keen on attending the Writer’s Fest, but I said to myself, “hey, this is Cory Doctorow.”
In fact, I’m not really that into books and reading much3… but this is Cory Doctorow.
And, I’m really not that entertained by copyright talk… but, hey, this is Cory Doctorow.

If it wasn’t obvious already, I’m a pretty big fan of Cory Doctorow. He’s kind of an Alchemist of the Internet Age, except that he’s not afraid to share his knowledge. I had followed him for a while on boingboing, and I was inspired enough to read Little Brother. (Before doing so, I thought I should read George Orwell’s 1984, and so I did … for the first time. Yes, I’m not very well-read… yet). Little Brother was so impressive that I continued to buy the audiobook of Homeland. I didn’t have to pay for it, but I chose to because I valued the author and his work, which completely supports Doctorow’s Laws for the Internet Age.

At the Vancouver Writer’s Fest, Cory Doctorow gave an overview of his new book, by eloquently summarizing three laws that he had come up with for the Internet Age. It was followed by a discussion on some of the values discussed in his writing. When asked about his views on “free and open source software,” Cory was quite excited to share Ubuntu with the crowd :)

The entire discussion was probably one of the best overviews of Internet freedom that I have ever heard, and having such a master-of-language deliver the message made it all the better. I was educated, entertained, and encouraged to read and write more freely. You might say that I have turned over a new page with regards to information.


I’m still buzzing.

If you get a chance to see Cory Doctorow during his current tour, then by all means do so, because, hey, it’s Cory Doctorow!



[1] Those who attended the event will get the inside joke.
[2] I learned about this from Cory Doctorow via Little Brother.
[3] Irlen Syndrome

25 October, 2014 05:20AM

hackergotchi for HandyLinux developers

HandyLinux developers

the joys of developing in vbox

I was just talking about it earlier: I have moved to Debian Jessie :) cool, except that officially I develop for HandyLinux. So does HandyLinux switch to Jessie right away? er... no... I'm not going to throw beginners onto a Debian testing for fun... even if the idea is fun... but no.

in short, to develop a distro and packages you need a clean environment, preferably identical to the one you are developing for. Since my system is 64-bit, I used to build HandyLinux's 32-bit packages in a 32-bit chroot made with debootstrap. But I have changed methods.

the technical capabilities of wiscot's gift let me use VirtualBox transparently. So I now have my own main system with a livarp-dev on Jessie (plus my applications and tons of WMs to test), a vbox with HandyLinux-dev installed, stocked with all the necessary tools (dpkg-dev, build-essential, reprepro, dh-make and other fun packages), and a vbox with a 'virgin' HandyLinux for testing.

exchanges and tests go through a folder shared between all the machines (host and guests), giving a direct view of everything. This may look silly to seasoned developers, but for a self-taught person (at least for me) the only way to progress is to experiment. Virtualization brings me a whole new level of development comfort; having built livarp on an IBM x31, you can forget about running a vbox on that :D

in short, in the 'diary of an apprentice Debian dev' section, this was arpi in vbox-land :P



ps: thanks again wiscot :D

25 October, 2014 02:46AM by arpinux

hackergotchi for HandyLinux developers

HandyLinux developers

livarp Debian Jessie without systemd


a sleepless night... hence g33keries...

Handylinux-1.7.1 is available, and no other releases are planned before HandyLinux-2.0, which will ship with Debian Jessie... which means a break from dev :D except that a g33k doesn't take breaks; he clears his head with something else. HandyLinux had barely left my mind when livarp barged in, and with it my question of the moment: systemd.
I talked about it in my previous article... talking about it is fine, but you also have to test before making a decision... so off we go, testing a livarp on Jessie without systemd. Feasible? Functional? Well, before giving you my full report, I can already announce that livarp users will be able to do without systemd without any worries! :D

systemd takes care of init but also invites itself into the services... I don't like that. But oh joy, livarp is a distribution that aims to be minimal, KISS style, without a DE, a graphical login manager or other niceties. Which means that the very construction of livarp (or of any minimal Debian system) makes it nearly immune to that nasty beast called systemd (for now). Here is the procedure to apply on livarp for the curious who want to move to Jessie (inspired by this article). I ran the test on an IBM Thinkpad T60 fitted with an SSD (and also on the Dell M4400, which now runs the 3.16.2-amd64 kernel).

phase 1: remove the dependencies that would pull in systemd during a move to Jessie.

because yes, the upgrade automatically drops systemd in your lap... with no alternative offered: init is now handled by an 'init' meta-package that depends primarily on systemd (but not only :) ). So we simply have to prevent systemd from being called in, whether by a package or by apt.

1.1 replace network-manager-gnome with wicd: livarp uses nm-applet for simplified wifi management, but that application is part of the GNOME desktop, and GNOME is particularly dependent on systemd. To keep wifi management working, just install wicd, then uninstall network-manager-gnome before starting the upgrade. Which gives:

sudo apt-get update && sudo apt-get install wicd

if you use a wireless connection, you will lose your link while network-manager-gnome is being uninstalled. You will get it back with wicd right afterwards.

sudo apt-get autoremove --purge network-manager-gnome

and there goes the connection, so start the wicd client to bring the juice back:

wicd-gtk -t &

all that's left is to fill in your wifi interface (wlan0, eth1, depending on your configuration...) and your WPA key or the like (no, I won't go into detail; livarp calls for a bit of personal research... 'wicd' is easy to look up).

1.2 prevent the installation of systemd: apt has a tool called pinning for managing your package preferences. It's cool, and it's handy. You don't want systemd? Then just tell apt, by editing the appropriate file:

sudo vim /etc/apt/preferences.d/no-systemd

and use the magic formula that will block the installation of anything systemd* and leave init to sysvinit:

Package: systemd*
Pin: release o=Debian
Pin-Priority: -1

phase 2: moving to Jessie.

and off we go for a dist-upgrade that is out of the question to do with synaptic! So grab your terminal!

we start by modifying the repositories to add the Jessie repos. "add??" yes. I prefer adding repositories rather than replacing them, so that I can keep getting updates for packages only available in Wheezy. So, edit /etc/apt/sources.list:

sudo vim /etc/apt/sources.list

to end up with this:

## DEBIAN WHEEZY Main Repo - please, try to stay free :) ##
deb http://ftp.fr.debian.org/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main
deb http://ftp.fr.debian.org/debian/ wheezy-updates main

## DEBIAN JESSIE Main Repo - please, try to stay free :) ##
deb http://ftp.fr.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb http://ftp.fr.debian.org/debian/ jessie-updates main

save, quit, run the command and go make some tea :D

sudo apt-get update && sudo apt-get dist-upgrade

for me that meant 750 packages upgraded, 204 newly installed, 14 to remove and 1 not upgraded (on a livarp freshly updated as of 25/10)... 469 MB downloaded and 589 MB of additional space required... yes, that dist-upgrade hurts, but hey... when you've got to go, you've got to go :D. For the record, on a 2 Mb connection the command was started at 2:40am and the upgrade finished at 3am... 20 minutes for a release upgrade... and a little 'apt-get autoremove --purge' will free up at least 130 MB :D

warning: during the upgrade, apt asks whether to replace the pam.d whatsit file (you know, the thing that lets you mount/unmount external volumes): answer no. It will ask the same about /etc/issue and issue.net, but there it's up to you :)

the report after reboot

what no longer works:

  • goodbye tty1: it goes straight into graphical mode, so you no longer have access to the startx logs. There must be a simple way to put the graphical session back on tty7, but I haven't looked yet.
  • an 'uptime: command not found' error, because the command is now called uptime.orig.procps... don't ask me why...
  • goodbye awesome session... they changed the syntax again, a habit between any two awesome versions, so Aphelion's config no longer works.
  • a small error due to a syntax change in the ranger config. Just read the message and make the appropriate changes.
  • wmfs session: the conkys are not launched; we'll find out why... nothing serious.
  • spectrwm session: no initscreen.sh, so the bar is out.
  • remember to replace 'nm-applet' with 'wicd-gtk -t' in the session launch scripts ;)
  • there are certainly other little things, but nothing that stops the system from working... and still no systemd answering the roll call ;) I'll have a few adjustments left to make. On your side, resetting the WM configs is enough to clear the 'cosmetic' errors.

what works better:

  • it boots fast :) especially from the second boot onwards. And it's 60~70 MB of RAM at startup, depending on the session ;)

the future for livarp & Debian

I've asked myself quite a few questions, especially after Guilhem's comment on my previous post inviting me to have a look at Slackware. I also wandered over to boycottsystemd (thanks Mr S.). But I tell myself that if all the systemd opponents clear out of Debian, Debian will have no choice but to adopt systemd wholesale, without even offering an alternative.

on the other hand, if part of the community stays on Debian but does not use systemd, Debian will have to take that into account in its development process.

so I choose to stay on Debian and keep offering my services and contributions, but all of it without systemd (on livarp*)



25 October, 2014 02:08AM by arpinux

October 24, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Positive results from Outreach Program for Women

In 2013, Debian participated in both rounds of the GNOME Outreach Program for Women (OPW). The first round was run in conjunction with GSoC and the second round was a standalone program.

The publicity around these programs and the strength of the Google and Debian brands attracted a range of female candidates, many of whom were shortlisted by mentors after passing their coding tests and satisfying us that they had the capability to complete a project successfully. As there are only a limited number of places for GSoC and limited funding for OPW, only a subset of these capable candidates were actually selected. The second round of OPW, for example, was only able to select two women.

Google to the rescue

Many of the women applying for the second round of OPW in 2013 were also students eligible for GSoC 2014. Debian was lucky to have over twenty places funded for GSoC 2014 and those women who had started preparing project plans for OPW and getting to know the Debian community were in a strong position to be considered for GSoC.

Chandrika Parimoo, who applied to Debian for the first round of OPW in 2013, was selected by the Ganglia project for one of five GSoC slots. Chandrika made contributions to PyNag and the ganglia-nagios-bridge.

Juliana Louback, who applied to Debian during the second round of OPW in 2013, was selected for one of Debian's GSoC 2014 slots working on the Debian WebRTC portal. The portal is built using JSCommunicator, a generic HTML5 softphone designed to be integrated in other web sites, portal frameworks and CMS systems.

Juliana has been particularly enthusiastic about her work, and after she completed the core requirements of her project, I suggested she explore just what is involved in embedding JSCommunicator into another open source application. By coincidence, the xTuple development team had decided to dedicate the month of August to open source engagement, running a program called haxTuple. Juliana had originally applied to OPW with an interest in financial software, so this appeared to be a great opportunity for her to broaden her experience and engagement with the open source community.

Despite having no prior experience with ERP/CRM software, Juliana set about developing a plugin/extension for the new xTuple web frontend. She has published the extension in Github and written a detailed blog about her experience with the xTuple extension API.

Participation in DebConf14

Juliana attended DebConf14 in Portland and gave a presentation of her work on the Debian RTC portal. Many more people were able to try the portal for the first time thanks to her participation in DebConf. The video of the GSoC students at DebConf14 is available here.

Continuing with open source beyond GSoC

Although GSoC finished in August, xTuple invited Juliana and me to attend their annual xTupleCon in Norfolk, Virginia. Google went the extra mile and helped Juliana to get there, and she gave a live demonstration of the xTuple extension she had created. This effort has simultaneously raised the profile of Debian, open source and open standards (SIP and WebRTC) in front of a wider audience of professional developers and business users.

Juliana describes her work at xTupleCon, Norfolk, 15 October 2014

It started with OPW

The key point to emphasize is that Juliana's work in GSoC was actually made possible by Debian's decision to participate in and promote Outreach Program for Women in 2013.

I've previously attended DebConf myself to help more developers become familiar with free and open RTC technology. I wasn't able to get there this year but thanks to the way GSoC and OPW are expanding our community, Juliana was there to help out.

24 October, 2014 11:53PM

Svetlana Belkin: Don’t Feed the Giant Octopus!

This Giant Octopus that I’m talking about is GOOGLE.  Google has its giant arms everywhere in the tech world, and its mind is only on one thing: PRIVACY INVASION.

Today, I read a post by Oli Warner about PayPal’s app on Android and the permissions that it requires the user to accept when installing or updating (see image on right; credit Oli).  Google is the only one that tells developers that users must allow these permissions when the app is installed.  This allows developers, or even a hacker, to easily take your data and do whatever they want with it.  That is a huge risk that people take when they don’t read the permissions before installing or updating.

I ask you to protect yourself from Google’s evil and use CyanogenMod with its Privacy Guard, or some other app that protects you.  Or even better, install F-Droid and go Google-free. Also, please use Firefox, not Chrome.

There are other evils that Google has but that will be another post for another day.

P.S. Read THIS also.

P.P.S.: I want to thank Oli for his post.  It’s something I have ranted about before but never really wrote a post on.

24 October, 2014 09:03PM

Randall Ross: On Changes, Ubuntu, "Magic Spells", and Real Power

In a previous blog post, I hinted at a recent happy development in my life/career that I would like to share with you today...

Many of you know me from my involvement in building local communities that are passionate about Ubuntu. I've been at this for nearly 7 years now as a volunteer and it's something I'm very passionate about. (Note: Friends and family sometimes use different adjectives.)


Over this time, I've had the privilege to meet and to work with many brilliant people in Vancouver BC, the community-at-large and also in the part of the community that is Canonical. (Yes, it's all community.) I've met rock stars, both literally and figuratively. They've encouraged and inspired me and finally opportunity knocked, and I answered.

I am happy to announce that I am Ubuntu's newest Community Manager.

My focus (at least initially) will be growing a large and thriving community around the architecture that powers the world's fastest computers. Think really big iron. Think Watson. Think chess. But more than that, think solving real-world problems the fastest way possible, with Power!

Ubuntu already has the beginnings of a great story on Power. I am tremendously excited about the potential of the "magic" that is Ubuntu with Juju and MaaS to launch solutions on Power hardware nearly effortlessly. I'm here to help the community that wants to change the world make that happen.

Click me: Push the button to see Power!

Please join me. If you're a Power advocate, developer, architect, systems administrator, researcher, or anyone who's just interested in Ubuntu on Power, please send me a note and introduce yourself. Let's work together!

randall AT ubuntu DOT com


Note: I'm not replacing Jono Bacon. As many of you know, he's moved on to solve some world problems that are "not just software", and David Planella and team are filling those big shoes.

image by Thom Watson
and modified by me.

24 October, 2014 07:38PM

Jorge Castro: Juju in Ubuntu 14.10 highlights

This title is kind of a misnomer, as of course, all this goodness is available to Ubuntu 14.04 users, so it’s more of a “Things that happen to line up with” Ubuntu 14.10.

More new items next week, hint hint!

24 October, 2014 05:35PM

Alan Pope: Sprinting in DC

For the last week I’ve been working with 230 other Ubuntu people in Washington, DC. We have sprints like this pretty frequently now, and they are a great way to collaborate and Get Things Done™ at high velocity.

This is the second sprint to which we’ve invited some of the developers who are blazing a trail with our Core Apps project. Not everyone could make it, and those who couldn’t were certainly missed. These are people who give their own time to work on some of the featured and default apps on the Ubuntu Phone, and perhaps in the future on the converged desktop.

It’s been a busy week with discussion & planning punctuating intense hacking sessions. Once again I’m proud of the patience, professionalism and and hard work done by these guys working on bringing up our core apps project on a phone that hasn’t event shipped a single device yet!

We’ve spent much of the week discussing and resolving design issues, fixing performance bugs, crashers and platform integration issues, as well as the odd game of ‘Cards Against Humanity’ & ‘We Didn’t Playtest This At All’ in the bar afterwards.

Having 10 community developers in the same place as 200+ Canonical people accelerates things tremendously. Being able to go and sit with the SDK team allowed Robert Schroll to express his issues with the tools when developing Beru, the ebook reader. When Filippo Scognamiglio needed help with mouse and touch input, we could grab Florian Boucault and Daniel d’Andrada to provide tips. Having Renato Filho nearby to fix problems in Evolution Data Server allowed Kunal Parmar and Mihir Soni to resolve calendar issues. The list goes on.

All week we’ve been collaborating towards a common goal of high quality, beautiful, performant and stable applications for the phone today, and desktop of the future. It’s been an incredibly fun and productive week, and I’m a little sad to be heading home today. But I’m happy that we’ve had this time together to improve the free software we all care deeply about.

The relationships built up during these sprints will of course endure. We all exchange email addresses and IRC nicknames, so we can continue the conversation once the sprint is over. Development and meetings will continue beyond the sprint, in the virtual world of IRC, hangouts and mailing lists.

24 October, 2014 05:17PM

Kubuntu: Kubuntu Shirts are Back

Kubuntu T-shirts and Polo Shirts are available again. This time our supplier is HelloTux, who are working with the Hungarian Ubuntu LoCo. $3 from each shirt goes to Kubuntu and $1.50 to the Hungarian LoCo team.

24 October, 2014 03:51PM

Thomas Ward: NGINX Webserver Admins: Don’t Use SSLv3 in Your SSL-Enabled Sites!

The SSLv3 “POODLE” Vulnerability.

Most of us are aware of the recent protocol flaw vulnerability in SSLv3. Officially designated CVE-2014-3566, it is more commonly referred to as the “POODLE” (Padding Oracle On Downgraded Legacy Encryption) vulnerability.

The vulnerability is the result of a flaw in the way the (now old) SSLv3 protocol behaves and operates. There is an Ubuntu-specific question on the POODLE vulnerability on Ask Ubuntu (link) which answers common questions about it, and a more general question on the Information Security Stack Exchange site (link) with further details. If you would like to dig deeper, refer to those sites, or read the OpenSSL whitepaper on the POODLE vulnerability (link).

As this is a protocol flaw in SSLv3, ALL implementations of SSLv3 are affected, so the only way to truly protect against POODLE is to disable SSLv3 protocol support in your web application, whether it be software you write, or hosted by a web server.

Disable SSLv3 in nginx:

Since the recommendation is to no longer use SSLv3, the simplest thing to do is disable SSLv3 for your site. In nginx, this is very simple to achieve.

Typically, one would have SSL enabled on their site with the following protocols line (or similar), if using the example from the default-shipped configuration files (in the latest Debian or the NGINX PPAs, prior to the updates of the past week or so):
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

To resolve this issue and disable SSLv3 support, we merely need to change it to the following, enabling only TLS:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
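If you have more than a handful of server blocks, a quick sed pass can strip the SSLv3 token from every ssl_protocols line. This is only a sketch: the paths assume the stock Debian/Ubuntu nginx layout, so adjust them for your setup, and back up your configuration first:

```shell
# Back up the configuration tree before touching it
sudo cp -a /etc/nginx /etc/nginx.bak

# Remove the "SSLv3" token from any ssl_protocols directive
sudo sed -i 's/\(ssl_protocols[^;]*\)SSLv3 */\1/' /etc/nginx/sites-available/*

# Inspect the result, then test the config and reload nginx
grep -r 'ssl_protocols' /etc/nginx/sites-available/
sudo nginx -t && sudo service nginx reload
```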

Note that on really old implementations of OpenSSL you won’t be able to get TLSv1.1 and TLSv1.2, so at the very least you can have just TLSv1 on the ssl_protocols line. You should consider updating to a more recent version of OpenSSL, though, because of other risks and issues in older OpenSSL releases.
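Once you have reloaded nginx with the new ssl_protocols line, you can verify the change from the outside with the openssl s_client tool (example.com stands in for your own server name, and the -ssl3 flag requires a client OpenSSL that still has SSLv3 support compiled in): an SSLv3-only handshake should now fail, while a TLS handshake should still succeed.

```shell
# Show which OpenSSL client you are testing with
openssl version

# An SSLv3-only handshake should now be refused
# (expect a handshake failure instead of a certificate dump)
openssl s_client -connect example.com:443 -ssl3 < /dev/null

# A plain TLS handshake should still work
openssl s_client -connect example.com:443 -tls1 < /dev/null
```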

Update OpenSSL to get TLS_FALLBACK_SCSV Support:

More important than just disabling SSLv3, you should update OpenSSL (or whatever SSL implementation you use) to receive support for TLS_FALLBACK_SCSV. There is an attack vector where a connection that starts as a TLS session is forced to fall back to SSLv3, which reopens the POODLE vulnerability. By updating, and thereby gaining TLS_FALLBACK_SCSV, you protect yourself from these protocol downgrade attacks.

Ubuntu Users:


Fortunately for all users of Ubuntu, the OpenSSL packages were updated to protect against SSL downgrade attacks. This is detailed in “USN-2385-1: OpenSSL vulnerabilities” (link). Simply running sudo apt-get update followed by sudo apt-get upgrade, with the security repositories enabled, will get you the OpenSSL update that addresses this.
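On 14.04, for example, that looks something like the following (a sketch: the libssl1.0.0 package name assumes the OpenSSL 1.0.1 packages shipped in recent Ubuntu releases, so adjust for yours):

```shell
# Refresh the package lists, then pull in the patched OpenSSL packages
sudo apt-get update
sudo apt-get install --only-upgrade openssl libssl1.0.0

# Confirm the installed version is the USN-2385-1 build or later
dpkg -s libssl1.0.0 | grep '^Version:'
```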

nginx from the Ubuntu Repositories:

Due to the vulnerability, and with Debian having already made these changes, I was able to get a last-minute update (courtesy of the Ubuntu Security Team and the Ubuntu Release Team) into the nginx package for the Utopic (14.10) release, which officially happened yesterday (October 23, 2014). In Utopic, the nginx package’s default config does NOT have SSLv3 on the ssl_protocols line. All other supported versions of Ubuntu do not have this change (this means that Precise and Trusty are both affected).

PPA Users:

Of course, many users of Ubuntu and nginx like the newer features of the latest nginx Stable or Mainline releases. This is why the nginx PPAs exist. Originally maintained by some of the Debian maintainers of the nginx package, I’ve taken primary responsibility for updating the nginx packages and keeping them in sync (as closely as I can) with the Debian nginx packaging.

As of today (October 24, 2014), both the Stable and Mainline PPAs have been updated to be in sync with the latest Debian packaging of the nginx package. This includes the removal of SSLv3 from the default ssl_protocols line.

Debian Users:


Fortunately, like Ubuntu, Debian has also updated the OpenSSL packages to protect against SSL downgrade attacks. This is detailed in “DSA-3053-1 openssl — security update” (link). As on Ubuntu, this can be fixed by running sudo apt-get update followed by sudo apt-get upgrade (or similar) to update your packages.

nginx in the Debian Repositories:

If you are on Debian Unstable, you are in luck. The Debian package in Unstable has this change in it already.

If you are on Debian Testing, Stable, or Old Stable, you’re unfortunately out of luck; this change isn’t in those versions of the package yet. You can easily make the aforementioned changes yourself, though, and fix your configs to disable SSLv3.

24 October, 2014 02:54PM

Oli Warner: Hey Paypal, why do you need access to my microphone, camera and photos?

Who actually checks the permissions of applications they're installing? A little while ago a Paypal update stalled because it required extra permissions. This is what happens if an app you have already installed wants more power. I was more than a little surprised with what I found.

Update: Unfortunately some of the /r/Android and /r/Technology readers don't seem to be making it past the title. Rather than repeatedly telling me why Paypal might occasionally need access to my camera, perhaps consider why I need to give it permanent access, and why I have to grant access for features I don't use. This —as you'll see if you keep reading— can be solved by both Paypal and Google.

It's easy to overlook app permissions. After all, you want something, and if there's no tangible sacrifice attached to it, people don't see the problem.

I do. I look after a few servers; security is something that's always in or around my consciousness. The prime tenet of data security is to only give access to things that need it, ideally only when they need it.

The Paypal app can, as it turns out, do a raft of things that include your peripheral hardware. Like magnetic stripe readers, scanning credit cards and OCRing cheques. I've still no idea why it needs SMS/MMS, calendar, location and app inspection access... So answers on a postcard.

That isn't really the point. My first problem is that Paypal are normalising applications doing a permission land-grab at install time. Something that was installed to let me do lightweight management of my account (and get notifications) has mutated into this beast that wants permanent access to my physical life.

Now, you can probably trust Paypal; they've only been shown to be moderately evil in the past... But who is to say that will always be true? They could decide to monetise this access. Or they could get hacked. Or another app could manipulate it to escalate its own privileges. In any case the result is the same: it can track you, it can watch you, it can hear you and it can smuggle data off your phone without you ever realising. You're installing the perfect tracking, wiretapping bug.

There is an argument that Android should be marshalling access to privileges better but before I get there, Paypal could and should be more considerate about what they're asking users to hand over. They could easily split the application out into plugins and distribute those in separate packages with their own privileges. It would leave the core application svelte, concentrated on core functionality, allowing cranky old users like me their simple, secure access and giving coffee-shop-hopping Alice and Bob all the naff features they want to trade for their privacy.

But the biggest issue, as comments are highlighting, is how Android allows developers to request permissions. It all has to be done at install time, and it's all or nothing. If the user won't accept it, they can't install or update; they have to uninstall or ignore the updates... which is obviously another massive security issue.

If an iOS app wants to use the camera, you're asked when it wants to use the camera. That might seem like Vista's UAC all over again, but that's the call here... And I think Apple are on the better side. Android needs to start thinking about permissions in an interactive sense.

Back to Paypal. Given I only use the Paypal app to manage my Paypal account, I decided to uninstall it.

There has been a great discussion following this on Hacker News. I particularly like some of the interface suggestions on how this could work without being annoying. Google could learn something from this dialogue.

24 October, 2014 01:56PM