November 22, 2014

Ubuntu developers

Jonathan Riddell: Blog Moved

KDE Project:

I've moved my developer blog to my vanity domain jriddell.org, which has hosted my personal blog since 1999 (before the word 'blog' existed). The tags used for the developer feeds are Planet KDE and Planet Ubuntu.

Sorry no DCOP news on jriddell.org.

22 November, 2014 03:21PM

Rafael Carreras: Release party in Barcelona

Once again, and it has now been 16 times, ubuntaires celebrated the release party for the latest Ubuntu version, in this case 14.10 Utopic Unicorn.

This time we went to Barcelona, to the Raval neighbourhood right in the city centre, thanks to our friends at the TEB.

As always, we started with explaining what Ubuntu is and how our Catalan LoCo Team works and later Núria Alonso from the TEB explained the Ubuntu migration done at the Xarxa Òmnia.

The installations room was packed from the very first moment.

There was also a very productive self-guided workshop on how to build an Ubuntu metadistribution.


And in another room, there were two Arduino workshops.


And, of course, ubuntaires love to eat well.

 

 

Pictures by Martina Mayrhofer and Walter García, all rights reserved.

 
 

22 November, 2014 02:32PM

Jonathan Riddell: Blog Move, Bug Squashing Party in Munich

Welcome to my blog on the updated jriddell.org, now featuring my personal blog (which has existed for about 15 years, since before the word 'blog' existed) together with my developer blog, previously hosted on blogs.kde.org.

I’m at the Bug Squashing Party in Munich, the home of KDE and Plasma and Kubuntu rollouts in the public sector. There’s a bunch of Kubuntu people here too as well as folks from Debian, KDE PIM and LibreOffice.

So far Christian and Aaron (yes that Aaron) have presented their idea for re-writing Akonadi.

And I’ve sat down with the guys from LibreOffice and worked out why Qt 4 theming isn’t working under Plasma 5; I’m about to submit my first bugfix to LibreOffice! Next step: the Breeze icon theme, then Qt 5 support. Scary.

[Photo: Kubuntu People]
[Photo: It can only be Harald]
[Photo: Akonadi: Lots of Bad]
[Photo: Let’s re-write Akonadi!]


22 November, 2014 12:26PM

Valorie Zimmerman: The Community Working Group needs you?

Hi folks,

Our Community Working Group has dwindled a bit, and some of our members have work that keeps them away from doing CWG work. So it is time to put out another call for volunteers.

The KDE community is growing, which is wonderful. In spite of that growth, we have less "police" type work to do these days. This leaves us more time to make positive efforts to keep the community healthy, and foster dialog and creativity within our teams.

One thing I've noticed is that listowners, IRC channel operators and forum moderators are doing an excellent job of keeping our communication channels friendly, welcoming and all-around helpful. Each of these leadership roles is crucial to keeping the community healthy.

Also, the effort to create the KDE Manifesto has adjusted KDE infrastructure to directly and consciously support community values. The commitments section is particularly helpful.

Please write us at Community-wg@kde.org if you would like to become a part of our community gardening work.




22 November, 2014 05:35AM by Valorie Zimmerman (noreply@blogger.com)

Bryan Quigley: Would you crowdfund a $500 Ubuntu “open to the core” laptop?

With Jolla having success crowdfunding a tablet, it’s a good time to see if we can’t get some mid-range Ubuntu laptops for sale to consumers in the US. I’d like to get some idea of whether there is enough demand for a very open $500 Ubuntu laptop.

Would you crowdfund this? (Core Goals)

  • 15″ 1080p Matte Screen
  • 720p Webcam with microphone
  • Spill-resistant keyboard that is nice to type on
  • Intel i3+ or AMD A6+
  • Built-in Intel or AMD graphics with no proprietary firmware at all
  • 4 GB Ram
  • 128 GB SSD (this would be the one component that might have proprietary firmware, as I’m not aware of another option here)
  • Ethernet 10/100/1000
  • Wireless up to N
  • HDMI
  • SD card reader
  • CoreBoot (No proprietary BIOS)
  • Ubuntu 14.04 preloaded of course
  • Agreement with manufacturer to continue selling this laptop (or similar one) with Ubuntu preloaded to consumers in the US for at least 3 years.

Stretch Goals? Or should they be core goals?

These will only be added if they don’t push the cost up significantly (or if everyone really wants them) and can be done with 100% open source software/firmware.

  • Touchscreen
  • Convertible to Tablet
  • GPS
  • FM Tuner (and built-in antenna)
  • Digital TV Tuner (and built-in antenna)
  • Ruggedized
  • Direct sunlight readable screen
  • “Frontlight” tech.  (think Amazon PaperWhite)
  • Bluetooth
  • Backlit keyboard
  • USB Power Adapter

Take my quick survey if you want to see this happen. If I get at least 1,000 yeses, I’ll approach manufacturers. The first version might just end up being a Chromebook modified with better specs, but I think that would be fine. Let’s get Ubuntu on mid-range consumer devices in the US!

Link to survey – http://goo.gl/forms/bwmBf92O1d

 

22 November, 2014 05:27AM

Joe Liau: Documenting the Death of the Dumb Telephone – Part 5: Touch-heavy

 

"U can't touch this" Source

“U can’t touch this”[4] Source

“Touch-a touch-a touch-a touch me. I wanna be dirty.”[1] — Love, Your Dumb Phone

It’s not a problem with a dirty touch screen; that would be a stretch for an entire post. It’s a problem with the dirty power[2]: perhaps an even farther stretch. But, “I’m cold on a mission, so pull on back,”[4] and stretch yourself for a moment because your phone won’t stretch for you.

We’re constantly trying to stretch the battery life of our phones, but the phones keep demanding to be touched, which drains the battery. Phones have this “dirty power” over us, but maybe there are also some “spikes” in the power management of these dumb devices. The greatest feature of the device is also its greatest flaw: it has to be touched in order to react. Does it even react in the most effective way? What indication is there to let you know how the phone has been touched? Does the phone reduce the number of touches in order to save battery power? If it is not smart enough to do so, then maybe it shouldn’t have a touch screen at all!

Auto-brightness. “Can’t touch this.”[4]
Lock screen. “Can’t touch this.”[4]
Phone clock. “Can’t touch this.”[4]

Yes, your phone has these things, but they never seem to work at the right time. Never mind that I have to turn on the screen to check the time. These things currently seem to follow one set of rules instead of knowing when to activate. So when you “move slide your rump,”[4] you still end up with the infamous butt dial, and the “Dammit, Janet![1] My battery is about to die” situation.

There are already developments in these areas, which indicate that the dumb phone is truly on its last legs. “So wave your hands in the air.”[4] But, seriously, let’s reduce the number of touches, “get your face off the screen”[3] and live your life.

“Stop. Hammer time!”[4]


[1] Song by Richard O’Brien
[2] Fartbarf is fun.
[3] Randall Ross, Community Leadership Summit 2014
[4] Excessively touched on “U Can’t Touch This” by MC Hammer

22 November, 2014 03:36AM

Elizabeth K. Joseph: My Vivid Vervet has crazy hair

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

22 November, 2014 02:57AM

Sam Hewitt: The Year of the Linux Desktop [Schism]

The phrase "The Year of the Linux Desktop" is one we see being used by hopefuls, to describe a future in which desktop Linux has reached the masses.

But I'm more pragmatic, and would like to describe the past and tweak this phrase to (I believe) accurately sum up 2011 as "The Year of the Linux Desktop Schism".

So let me tell you a little story about this schism.

A Long Time Ago in 2011

The Linux desktop user base was happily enjoying the status quo. We had (arguably) two major desktops, GNOME and KDE, with a few smaller, less popular desktops as well (mostly named with initialisms).

It was the heyday of GNOME 2 on the desktop, being the default desktop used in many of the major distributions. But bubbling out of the ether of the GNOME Project was this idea for a new shell and an overhaul of GNOME, so GNOME 2 was brought to a close and GNOME Shell was born as the future of GNOME.

The Age of Dissent, Madness & Innovation

GNOME 3 and its new Shell did not sit well with everyone, and many in the great blogosphere saw it as disastrous for GNOME and for users.

Much criticism was spouted, controversy was raised, and many started searching for alternatives. But there were those who stood by their faithful project, seeing the new version for what it was: a new beginning for GNOME, and knowing that beginnings are not perfect.

Nevertheless, with this massive split in the desktop market we saw much change. There came a rapid flurry of several new projects and a moonshot from one for human beings.

Ubuntu upgraded their fledgling "netbook interface" and promoted it to the desktop, calling it Unity, and it took off down a path to unite the desktop with other emerging platforms yet to come.

There was also much dissatisfaction with the abandonment of GNOME 2, and part of the community decided to lower their figurative pitchforks and use them to do some literal forking. They took up the remnants of this legacy desktop and used them to forge a new project. This project was to be named MATE and was to continue in the original spirit of GNOME 2.

The Linux Mint team, unsure of their future with GNOME under the Shell, created the "Mint GNOME Shell Linux Mint Extension Pack of Extensions for GNOME Shell". This add-on to the new GNOME experience would eventually lead to the creation of Cinnamon, which itself was a fork of GNOME 3.

Despite being a relatively new arrival, the ambitious elementary team was developing the Pantheon desktop in relative secrecy for use in future versions of their OS, having previously relied on a slimmed-down GNOME 2. They were to become one of the most polished of them all.

And they have all lived happily ever since.

The end.

The Moral of the Story

All of these projects have been thriving in the three years since, and why? Because of their communities.

All that has occurred is what the Linux community is about, and it is exemplary of the freedom that it and the whole of open source represent. We have the freedom in open source to enact our own change or act upon what we may not agree with. We are not confined to a set of strictures; we are able to do what we feel is right and find other people who feel the same.

To deride and belittle others for acting in their freedom, or because they may not agree with you, is just wrong and not in keeping with the ethos of our community.

22 November, 2014 12:00AM

November 21, 2014

Whonix

sudo apt-get install whonix-gateway-ova and so forth idea

One could then run "sudo apt-get install whonix-gateway-ova" (or the workstation or qcow2 variants) and so forth. The ova or qcow2 image would then be available from /usr/share/whonix/whonix...ova etc. This would be an alternative to downloading images from the download page. Additionally, that package could depend on virtualbox/kvm and perhaps also ship a wizard that handles the importing, if that would simplify things.

The post sudo apt-get install whonix-gateway-ova and so forth idea appeared first on Whonix.

21 November, 2014 11:49PM by Patrick Schleizer

Ubuntu developers

Matt Bruzek: Writing tests for charms (is not that hard)

To ensure the quality of the Juju charm store, there are automated processes that test charms on multiple cloud environments. These automated tests help identify the charms that need to be fixed. This has become so useful that charm tests are now a requirement for a charm to become a recommended charm in the charm store for the trusty release.

What are the goals of charm testing?

For Juju to be magic, the charms must always deploy, scale and relate as they were designed. The Juju charm store contains over 200 charms and those charms can be deployed to more than 10 different cloud environments. That is a lot of environments to ensure charms work, which is why tests are now required!

Prerequisites

The Juju ecosystem team has created different tools to make writing tests easier. The charm-tools package has code that generates tests for charms. Amulet is a Python 3 library that makes it easier to programmatically work with units and whole deployments. To get started writing tests you will need to install the charm-tools and amulet packages:

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y charm-tools amulet

Now that the tools are installed, change directory to the charm directory and run the following command:

juju charm add tests

This command generates two executable files, 00-setup and 99-autogen, in the tests directory. The tests are prefixed with a number so they run in the correct lexicographical order.

00-setup

The first file is a bash script that adds the Juju PPA repository, updates the package list, and installs the amulet package so the next tests can use the Amulet library.

99-autogen

This file contains Python 3 code that uses the Amulet library. The class extends a unittest class, the standard unit testing framework for Python. The charm-tools test generator creates a skeleton test that deploys related charms and adds relations, so most of the work is done already.

This automated test is almost never a good enough test on its own. Ideal tests do a number of things:

  1. Deploy the charm and make sure it deploys successfully (no hook errors)
  2. Verify the service is running as expected on the remote unit (sudo service apache2 status).
  3. Change configuration values to verify users can set different values and the changes are reflected in the resulting unit.
  4. Scale up. If the charm handles the peer relation make sure it works with multiple units.
  5. Validate the relationships to make sure the charm works with other related charms.

Most charms will need additional lines of code in the 99-autogen file to verify the service is running as expected. For example, if your charm implements the http interface, you can use the Python 3 requests package to verify that a valid web page (or API) is responding.

# Requires "import requests" at the top of the test module.
def test_website(self):
    unit = self.deployment.sentry.unit['<charm-name>/0']
    url = 'http://%s' % unit['public-address']
    response = requests.get(url)
    # Raise an exception if the url was not a valid web page.
    response.raise_for_status()
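
For completeness, here is a rough sketch of what a more complete, hand-written Amulet test might look like (the charm name "mycharm" and its "port" option are hypothetical, and the calls follow the Amulet patterns described above, so treat it as an outline rather than a drop-in test):

#!/usr/bin/env python3
# tests/10-deploy: a hand-written Amulet test sketch (hypothetical charm name
# "mycharm" and config option "port"; adjust both to match your charm).
import unittest

import amulet


class TestDeploy(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.deployment = amulet.Deployment(series='trusty')
        cls.deployment.add('mycharm')                        # deploy the charm under test
        cls.deployment.configure('mycharm', {'port': 8080})  # exercise a config option
        cls.deployment.expose('mycharm')
        try:
            # Wait for all hooks to finish; skip (rather than fail) on slow clouds.
            cls.deployment.setup(timeout=900)
        except amulet.helpers.TimeoutError:
            amulet.raise_status(amulet.SKIP,
                                msg="Environment wasn't stood up in time")

    def test_service_running(self):
        # Item 2 from the list above: verify the service is up on the remote unit.
        unit = self.deployment.sentry.unit['mycharm/0']
        output, code = unit.run('service apache2 status')
        self.assertEqual(code, 0, 'service is not running: %s' % output)


if __name__ == '__main__':
    unittest.main()

Because the file is executable and exits non-zero when a test fails, it satisfies the same contract that the automated test runners expect.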

What if I don't know python?

Charm tests can be written in languages other than Python. The automated test program, called bundletester, will run the test target in a Makefile if one exists. Including a 'test' target allows a charm author to build and run tests from the Makefile.

Bundletester will run any executable files in the tests directory of a charm. There are example tests written in bash in the Juju documentation. A test fails if the executable returns a value other than zero.
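
Since the only contract is the exit status, a standalone test does not even need Amulet; a minimal (purely illustrative) example looks like this:

#!/usr/bin/env python3
# tests/05-simple-check (hypothetical): bundletester runs every executable in
# tests/ and treats any non-zero exit status as a test failure.
import sys


def check():
    # Replace with a real assertion about the deployed service.
    return True


sys.exit(0 if check() else 1)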

Where can I get more information about writing charm tests?

There are several videos on youtube.com about charm testing:
* Charm testing video
Documentation on charm testing can be found here:
* https://juju.ubuntu.com/docs/authors-testing.html
Documentation on Amulet:
* https://juju.ubuntu.com/docs/tools-amulet.html
Check out the lamp charm as an example of multiple amulet tests:
* http://bazaar.launchpad.net/~charmers/charms/precise/lamp/trunk/files

21 November, 2014 11:15PM

Duncan McGreggor: ErlPort: Using Python from Erlang/LFE

This is a short little blog post I've been wanting to get out there ever since I ran across the erlport project a few years ago. Erlang was built for fault-tolerance. It had a goal of unprecedented uptimes, and these have been achieved. It powers 40% of our world's telecommunications traffic. It's capable of supporting amazing levels of concurrency (remember the 2007 announcement about the performance of YAWS vs. Apache?).

With this knowledge in mind, a common mistake by folks new to Erlang is to think these performance characteristics will be applicable to their own particular domain. This has often resulted in failure, disappointment, and the unjust blaming of Erlang. If you want to process huge files, do lots of string manipulation, or crunch tons of numbers, Erlang's not your bag, baby. Try Python or Julia.

But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python's the right tool for the jobs, but I wish I could manage them with Erlang.

(There are sooo many other options for the use cases above, many of them really excellent. But this post is about Erlang/LFE :-)).

Traditionally, if you want to run other languages with Erlang in a reliable way that doesn't bring your Erlang nodes down with badly behaved code, you use Ports. (more info is available in the Interoperability Guide). This is what JInterface builds upon (and, incidentally, allows for some pretty cool integration with Clojure). However, this still leaves a pretty significant burden for the Python or Ruby developer for any serious application needs (quick one-offs that only use one or two data types are not that big a deal).

erlport was created by Dmitry Vasiliev in 2009 in an effort to solve just this problem, making it easier to use and integrate Erlang with more common languages like Python and Ruby. The project is maintained, and in fact has just received a few updates. Below, we'll demonstrate some usage in LFE with Python 3.

If you want to follow along, there's a demo repo you can check out:
Change into the repo directory and set up your Python environment:
Next, switch over to the LFE directory, and fire up a REPL:
Note that this will first download the necessary dependencies and compile them (that's what the [snip] is eliding).

Now we're ready to take erlport for a quick trip down to the local:
And that's all there is to it :-)
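
To give a rough idea of what the Python half of such an integration looks like (a hypothetical module, not the code from the demo repo): erlport lets the Erlang/LFE side start a Python instance and call ordinary functions in an ordinary module (via erlport's python:start and python:call entry points on the Erlang side), so the Python side can stay as plain as this:

# crunch.py -- a hypothetical module that the Erlang/LFE side could call
# through erlport. Plain functions are enough; erlport marshals basic types
# (integers, floats, lists, binaries) between the Erlang VM and Python.


def mean(numbers):
    """Average a list of numbers handed over from the Erlang side."""
    return sum(numbers) / len(numbers)


def word_count(text):
    """Count words in a binary passed from Erlang (it arrives as bytes in Python 3)."""
    return len(text.split())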

Perhaps in a future post we can dive into the internals, showing you more of the glory that is erlport. Even better, we could look at more compelling example usage, approaching some of the functionality offered by such projects as Disco or Anaconda.


21 November, 2014 11:08PM by Duncan McGreggor (noreply@blogger.com)

Nicholas Skaggs: Virtual Hugs of appreciation!

Because I was asleep at the wheel (err, keyboard) yesterday I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many many different things in the years I've known her. She never gives up (even when I've tried too!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let clock app be the guinea pig
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the ubuntu community, thanks for making ubuntu the wonderful community it is!

21 November, 2014 10:09PM by Nicholas Skaggs (noreply@blogger.com)

Duncan McGreggor: The Secret History of Lambda

Being a bit of an origins nut (I always want to know how something came to be or why it is a certain way), one of the things that always bothered me with regard to Lisp was that no one seemed to be talking about the origin of lambda in the lambda calculus. I suppose if I wasn't lazy, I'd have gone to a library and spent some time looking it up. But since I was lazy, I used Wikipedia. Sadly, I never got what I wanted: no history of lambda. [1] Well, certainly some information about the history of the lambda calculus, but not the actual character or term in that context.

Why lambda? Why not gamma or delta? Or Siddham ṇḍha?

To my great relief, this question was finally answered when I was reading one of the best Lisp books I've ever read: Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I'll save my discussion of that book for later; right now I'm going to focus on the paragraph at location 821 of my Kindle edition of the book. [2]

The story goes something like this:
  • Between 1910 and 1913, Alfred Whitehead and Bertrand Russell published three volumes of their Principia Mathematica, a work whose purpose was to derive all of mathematics from basic principles in logic. In these tomes, they cover two types of functions: the familiar descriptive functions (defined using relations), and then propositional functions. [3]
  • Within the context of propositional functions, the authors make a typographical distinction between free variables and bound variables or functions that have an actual name: bound variables use circumflex notation, e.g. x̂(x+x). [4]
  • Around 1928, Church (and then later, with his grad students Stephen Kleene and J. B. Rosser) started attempting to improve upon Russell and Whitehead regarding a foundation for logic. [5]
  • Reportedly, Church stated that the use of x̂ in the Principia was for class abstractions, and he needed to distinguish that from function abstractions, so he used x [6] or ^x [7] for the latter.
  • However, these proved to be awkward for different reasons, and an uppercase lambda was used: Λx. [8].
  • More awkwardness followed, as this was too easily confused with other symbols (perhaps uppercase delta? logical and?). Therefore, he substituted the lowercase λ. [9]
  • John McCarthy was a student of Alonzo Church and, as such, had inherited Church's notation for functions. When McCarthy invented Lisp in the late 1950s, he used the lambda notation for creating functions, though unlike Church, he spelled it out. [10] 
It seems that our beloved lambda [11], then, is an accident in typography more than anything else.
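
Condensed into a single line (following the Norvig passage quoted in note [2] below), the notation drifted roughly like this:

x̂(x + x)  →  ^x(x + x)  →  Λx(x + x)  →  λx(x + x)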

Somehow, this endears lambda to me even more ;-)



[1] As you can see from the rest of the footnotes, I've done some research since then and have found other references to this history of the lambda notation.

[2] Norvig, Peter (1991-10-15). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (Kindle Locations 821-829). Elsevier Science - A. Kindle Edition. The paragraph in question is quoted here:
The name lambda comes from the mathematician Alonzo Church’s notation for functions (Church 1941). Lisp usually prefers expressive names over terse Greek letters, but lambda is an exception. A better name would be make-function. Lambda derives from the notation in Russell and Whitehead’s Principia Mathematica, which used a caret over bound variables: x̂(x + x). Church wanted a one-dimensional string, so he moved the caret in front: ^x(x + x). The caret looked funny with nothing below it, so Church switched to the closest thing, an uppercase lambda, Λx(x + x). The Λ was easily confused with other symbols, so eventually the lowercase lambda was substituted: λx(x + x). John McCarthy was a student of Church’s at Princeton, so when McCarthy invented Lisp in 1958, he adopted the lambda notation. There were no Greek letters on the keypunches of that era, so McCarthy used (lambda (x) (+ x x)), and it has survived to this day.
[3] http://plato.stanford.edu/entries/pm-notation/#4

[4] Norvig, 1991, Location 821.

[5] History of Lambda-calculus and Combinatory Logic, page 7.

[6] Ibid.

[7] Norvig, 1991, Location 821.

[8] Ibid.

[9] Looking at Church's works online, he uses lambda notation in his 1932 paper A Set of Postulates for the Foundation of Logic. His preceding papers, upon which the seminal 1932 paper is based, On the Law of Excluded Middle (1928) and Alternatives to Zermelo's Assumption (1927), make no reference to lambda notation. As such, A Set of Postulates for the Foundation of Logic seems to be his first paper that references lambda.

[10] Norvig indicates that this is simply due to the limitations of the keypunches in the 1950s that did not have keys for Greek letters.

[11] Alex Martelli is not a fan of lambda in the context of Python, and though he is a good friend of Peter Norvig, I've heard Alex refer to lambda as an abomination :-) So, perhaps not beloved by everyone. In fact, Peter Norvig himself wrote (see above) that a better name would have been make-function.


21 November, 2014 09:12PM by Duncan McGreggor (noreply@blogger.com)

Duncan McGreggor: Maths and Programming: Whence Recursion?

As a manager in the software engineering industry, one of the things that I see on a regular basis is a general lack of knowledge from less experienced developers (not always "younger"!) with regard to the foundations of computing and the several related fields of mathematics. There is often a great deal of focus on what the hottest new thing is, or how the industry can be changed, or how we can innovate on the decades of profound research that has been done. All noble goals.

Notably, another trend I've recognized is that in a large group of devs, there are often a committed few who really know their field and its history. That is always so amazing to me and I have a great deal of admiration for the commitment and passion they have for their art. Let's have more of that :-)

As for myself, these days I have many fewer hours a week which I can dedicate to programming compared to what I had 10 years ago. This is not surprising, given my career path. However, what it has meant is that I have to be much more focused when I do get those precious few hours a night (and sometimes just a few per week!). I've managed this in an ad hoc manner by taking quick notes about fields of study that pique my curiosity. Over time, these get filtered and a few pop to the top that I really want to give more time.

One of the driving forces of this filtering process is my never-ending curiosity: "Why is it that way?" "How did this come to be?" "What is the history behind that convention?" I tend to keep these musings to myself, exploring them at my leisure, finding some answers, and then moving on to the next question (usually this takes several weeks!).

However, given the observations of the recent years, I thought it might be constructive to ponder aloud, as it were. To explore in a more public forum, to set an example that the vulnerability of curiosity and "not knowing" is quite okay, that even those of us with lots of time in the industry are constantly learning, constantly asking.

My latest curiosity has been around recursion: who first came up with it? How did it make its way from abstract maths to programming languages? How did it enter the consciousness of so many software engineers (especially those who are at ease in functional programming)? It turns out that an answer to this is actually quite closely related to a previous post I wrote on the secret history of lambda. A short version goes something like this:

Giuseppe Peano wanted to establish a firm foundation for logic and maths in general. As part of this, he ended up creating consistent axioms around the hard-to-define natural numbers, counting, and arithmetic operations (which utilized recursion). While visiting a conference in Europe, Bertrand Russell was deeply impressed by the dialectic talent of Peano and his unfailing clarity; he queried Peano as to his secret for success (Peano told him) and then asked for all of his published works. Russell proceeded to study these quite deeply. With this in his background, he eventually co-wrote the Principia Mathematica. Later, Alonzo Church (along with his grad students) sought to improve upon this, and in the process ended up developing the lambda calculus. His student, John McCarthy, later created the first functional programming language, Lisp, utilizing concepts from the lambda calculus (recursion and function composition).

In the course of reading 40 to 50 mathematics papers (including various histories) over the last week, I have learned far more than I had originally intended. So much so, in fact, that I'm currently working on a very fun recursion tutorial that not only covers the usual practical stuff, but steps the reader through programming implementations of the Peano axioms, arithmetic definitions, the Ackermann function, and parts of the lambda calculus.
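
To give a flavour of that material, the Ackermann function is the classic example of a definition that is recursive but not primitive recursive, and it fits in a few lines of Python (a standalone illustration, not an excerpt from the tutorial):

def ackermann(m, n):
    """The two-argument Ackermann-Peter function, defined purely by recursion."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))


# Keep the inputs tiny; the recursion depth explodes very quickly.
print(ackermann(2, 3))  # prints 9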

I've got a few more blog post ideas cooking that dive into functions, their history and evolution. We'll see how those pan out. Even more exciting, though, was having found interesting papers discussing the evolution of functions and the birth of category theory from algebraic topology. This, needless to say, spawned a whole new trail of research, papers, and books... and I've got some great ideas for future blog posts/tutorials around this topic as well. (I've encountered category theory before, but watching it appear unsearched and unbidden in the midst of the other reading was quite delightful).

In closing, I enjoy reading not only the original papers (and correspondence between great thinkers of a previous era), but also the meanderings and rediscoveries of my peers. I've run across blog posts like this in the past, and they were quite enchanting. I hope that we continue to foster that in our industry, and that we see more examples of it in the future.

Keep on questing ;-)


21 November, 2014 09:10PM by Duncan McGreggor (noreply@blogger.com)

Randall Ross: Why Smart Phones Aren't - Reason #5

The quotes below are real(ish).

"Hi honey, did you just call me? I got a weird message that sounded like you were in some kind of trouble. All I could hear was traffic noise and sirens..."

"I'm sorry. I must have dialed your number by mistake. I'm not in the habit of dialing my ex-boyfriends, but since you asked, would you like to go out with me again? One more try?"

"Once a friend called me and I heard him fighting with his wife. It sounded pretty bad."

"I got a voicemail one time and it was this guy yelling at me in Hindi for almost 5 minutes. The strange thing is, I don't speak Hindi."

"I remember once my friend dialed me. I called back and left a message asking whether it was actually the owner or...

...the butt."


It's called "butt dialing" in my parts of the world, or "purse dialing" (if one carries a purse), or sometimes just called pocket dialing: That accidental event where something presses the phone and it dials a number in memory without the knowlege of its owner.

After hearing these phone stories, I'm reminded that humanity isn't perfect. Among other things, we have worries, regrets, ex's, outbursts, frustrations, and maybe even laziness. One might be inclined to write these occurrences off as natural or inevitable. But, let's reflect a little. Were the people that this happened to any happier for it? Did it improve their lives? I tend to think it created unnecessary stress. Were they to blame? Was this preventable?

"Smart" phones. I'm inclined to call you what you are: The butt of technology.

We're not living in the 90's anymore. Sure, there was a time when phones had real keys and possibly weren't lockable and maybe were even prone to the occasional purse dial. Those days are long gone. "Smart" phones, you know when you're in a pocket or a purse. Deal with it. You are as dumb as my first feature phone. Actually, you are dumber. At least my first feature phone had a keyboard cover.

Folks, I hope that in my lifetime we'll actually see a phone that is truly smart. Perhaps the Ubuntu Phone will make that hope a reality.

I can see the billboards now. "Ubuntu Phone. It Will Save Your Butt." (Insert your imagined inappropriate billboard photo alongside the caption. ;)

Do you have a great butt dialing story? Please share it in the comments.

--


No people were harmed in the making of this article. And not one person who shared their story is or was a "user". They are real people that were simply excluded from the decisions that made their phones dumb.

Image: Gwyneth Anne Bronwynne Jones (The Daring Librarian), CC BY-SA 2.0
https://www.flickr.com/photos/info_grrl/

21 November, 2014 07:00PM

Walter Lapchynski: git your blog

So I deleted my whole website by accident.

Yep, it wasn't very fun. Luckily, Linode's Backup Service saved the day. Though they back up the whole machine, it was easy to restore to the linode, change the configuration to use the required partition as a block device, reboot, and then manually mount the block device. At that point, restoration was a cp away.

The reason why this all happened is because I was working on the final piece to my ideal blogging workflow: putting everything under version control.

The problem came when I tried to initialize my current web folder. I mean, it worked, and I could clone the repo on my computer, but I couldn't push. Worse yet, I got something scary back:

remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.

So in the process of struggling with this back and forth between local and remote, I killed my files. Don't you usually panic when you get some long error message that doesn't make a darn bit of sense?

Yeah, well, I guess I kind of got the idea, but it wasn't entirely clear. The key point is that we're trying to push to a non-bare folder— i.e. one that includes all the tracked files— and it's on a main branch.

So let's move on to the solution: don't do this. You could push to a different branch and then manually merge on the remote, but merges aren't always guaranteed to work out. Why not do something entirely different? Something more proper.

First, start with the remote:

# important: make a new folder!
git init --bare ~/path/to/some/new/folder

Then local:

git clone user@server:path/to/the/aforementioned/folder
cd folder
# make some changes
git add -A
git commit -am "initial commit or whatever you want to say"
git push

If you check out what's in that folder on the remote, you'll find it has no tracked files. A bare repo is basically just an index. It's a place to pull from and push to. You're not going to go there and start changing files and getting git all confused.

Now here's the magic part: in the hooks subfolder of your folder, create a new executable file (chmod +x) called post-receive containing the following:

#!/usr/bin/env sh
export GIT_WORK_TREE=/path/to/your/final/live/folder
git checkout -f master
# add any other commands that need to happen to rebuild your site, e.g.:
# blogofile build

Assuming you've already committed some changes, go ahead and run it and check your website.

Pretty cool, huh? Well, it gets even better. The next push you do will automatically update your website for you. So now for me, an update to the website is just a local push away. No need to even log in to the server anymore.

There are other solutions to this problem but this one seems to be the most consistent and easy.

21 November, 2014 03:42PM

David Henningsson: PulseAudio buffers and protocol

This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering

PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side

When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or going through the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping

On the server side, the server might need to convert the stream into a format that fits the hardware (and potentially other streams that might be running simultaneously). This step is skipped if deemed unnecessary.

First, the samples are converted to either signed 16 bit or float 32 bit (mainly depending on resampler requirements).
In case resampling is necessary, we make use of external resampler libraries for this, the default being speex.
Second, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.

So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”, second: after resampling, third: after remapping, fourth: after converting to a hardware-supported format), and in the best case, this step is entirely skipped.

Mixing and hardware output

PulseAudio’s built in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.

Summary

The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capability means in practice.

However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0

PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VoIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.

For every playback packet, there are three messages sent: from server to client saying “I need more data”, from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
Every message means syscalls to write, read, and poll a Unix socket. This overhead turned out to be significant enough to try to improve.

So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to the eventfd in case no one is currently waiting.) This is not so much about saving memory copies as about saving syscalls.
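
PulseAudio itself is written in C, but as a toy illustration of the eventfd primitive (Linux only; os.eventfd needs Python 3.10+, and none of this is PulseAudio code), the wake-up pattern looks roughly like this:

import os
import threading

# An eventfd is a kernel counter exposed as a file descriptor: one side blocks
# reading it, the other side "kicks" it with a single write syscall.
efd = os.eventfd(0)

def consumer():
    kicks = os.eventfd_read(efd)   # blocks until the producer writes
    print("woken up after", kicks, "kick(s)")

t = threading.Thread(target=consumer)
t.start()
os.eventfd_write(efd, 1)           # the producer's single wake-up syscall
t.join()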

From my own unscientific benchmarks (i.e. running “top”), this saves us ~10–25% of CPU power in low-latency use cases, half of that being on the client side.

21 November, 2014 03:36PM

Sam Hewitt: Biscotti

I'm not anyone's Nonna but I can still make good biscotti, which is something I enjoy with a nice espresso.

If you don't know what biscotti is, it's a sweet, Italian twice-baked cookie/biscuit ("biscotti" literally means twice cooked, so does "biscuit" for that matter) typically made with a strong spice, like cinnamon or anise, and with a nut/seed, typically almonds, in it.

There are a bunch of variations of biscotti, with different flavour combinations –like chocolate– but I don't like things to be too sweet so I keep it simple with a few flavours.

    Ingredients

  • 1 cup sugar
  • 1 cup butter
  • 1 cup whole or coarsely chopped almonds –toasted*, if you like.
  • 1 teaspoon ground anise
  • 1 teaspoon lemon zest –about as much as you would get from zesting half a lemon
  • 1 teaspoon vanilla extract
  • 3 eggs
  • 2 teaspoons baking powder
  • 2 3/4 cups flour –you can cut this with 1/4 cup almond flour, if you like or have some.

*to toast the almonds (or any nut), simply heat a dry pan until quite hot and toss the nuts in; then, while shaking to prevent burning, cook until they are fragrant.

    Directions

  1. In a large bowl, combine the flour and baking powder
  2. Combine the sugar with the ground anise and lemon zest, then mash it into the butter –here, I deviate from tradition in the method.
  3. Add the eggs and mix those in.
  4. Add the egg-butter-sugar mixture to the dry ingredients and bring together into a smooth dough.
  5. Wrap in plastic wrap and chill in the fridge for at least 30 minutes.
  6. Preheat an oven to 350 degrees Fahrenheit.
  7. Remove the dough from the fridge and divide in half.
  8. Moisten your hands with water and form each half into a long thin log –approx. 8x32x4 centimeters.
  9. Place each log apart from each other on a clean baking sheet, and bake for 30 minutes until golden brown.
  10. Remove from the oven and let cool on a wire rack for 15 minutes or so.
  11. Cut** each log into 2 cm slices and arrange them on the baking sheet with the cut side face down/up.
  12. Return to the oven and bake for another 25 to 30 minutes, until golden brown.
  13. Remove finished biscotti from the oven and let cool on a wire rack before eating.

**to get the angled cut seen here, slice the biscotti log at a ~45 degree angle, which is typically called "slicing on a bias".

21 November, 2014 03:00PM

Daniel Pocock: PostBooks 4.7 packages available, xTupleCon 2014 award

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks and all the people who have collectively done far more work than myself to make this possible:

Here is a screenshot of the xTuple web / JSCommunicator integration; it was one of the highlights of xTupleCon:

and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

21 November, 2014 02:12PM

Walter Lapchynski: happy Ubuntu Community Appreciation Day, phillw!

I swear, I find out about some new event Ubuntu does every day. How is it that I've been around Ubuntu for as long as I have and I've only now heard about this?

Well, in any case, today is Ubuntu Community Appreciation Day, where we give thanks to the humans (remember, ubuntu means humanity!) that have so graciously donated their time to make Ubuntu a reality.

I have a lot of people to thank in the community. We have some really exceptional people about. I really feel like I could make the world's longest blog post just trying to list them all. Several folks already have!

Instead, I'll point out a major player in the community who is pretty unseen these days.

Phill Westside was a major contributor to Lubuntu. He was there when I first came to #lubuntu so many moons ago. His friendly, inviting demeanour was one of the things that kept me sticking around after my support request was met. Phill took it upon himself to encourage me just as he had with others and slowly I came to contribute more and more.

Sadly, some people in high rankings in the community failed to see Phill's value for whatever reason. I'm not sure I totally understand but I think the barrage of opinions that came from Jono Bacon's call for reform in Ubuntu governance may offer some hint. Phill's no longer an Ubuntu member and is rarely seen in the typical places in the community.

Yet he still helps out on #lubuntu, still helps with Lubuntu ISO testing, still reposts Lubuntu news on Facebook, still contributes to the Lubuntu mailing lists, still tries to help herd the cats as it were, though he's handed off titles to others (that's how I'm the Release Manager and Head of QA!). tl;dr, Phill is still a major contributor to Ubuntu.

Did I mention he's a great guy to hang out with, too? I've never met him face to face, but I'm sure if I did, I'd give him one heck of a big ole hug.

Thanks, Phill!

21 November, 2014 07:55AM

Charles Profitt: Community Appreciation Day: Humanity at its Finest!

Today is Ubuntu Community Appreciation Day and I wanted to recognize several people who have helped me along my journey within the Ubuntu Community.

Elizabeth Krumbach Joseph
Lyz has been a friend for years. We met when I was just transitioning from using Windows to using Linux. The Ubuntu New York LoCo was holding its bi-annual release party at the Holiday Inn located in Waterloo, NY on November 8th, 2009. Lyz gave a presentation “Who Uses and Contributes to Open Source Projects (And how you can too!)” that day and helped serve as a guide for the New York LoCo team as it sought to become an approved LoCo team. Lyz is an amazing person who has given me advice over the last five years. She contributes her energies to the Ubuntu project with a commitment and passion that have both my respect and admiration.

Thank you for all you have done Lyz!

Jorge Castro at FOSSCON in Rochester, NY

Jorge Castro
Jorge is the first ‘Ubuntu celebrity’ that I interacted with. When I was helping to organize FOSSCON at RIT in Rochester, NY I contacted Jorge to ask if he would attend and present at the conference. I think Jorge’s participation helped us attract attendees the first year and I was grateful that he was willing to attend. FOSSCON has become a successful conference under the guidance of my friend Jonathan Simpson. Jorge also encouraged me to apply for sponsorship to an Ubuntu Developer Summit which culminated in my being sponsored and attending my first UDS. Jorge is a person that is always willing to help others with great energy and a smile. He is an awesome contributor to the Ubuntu Community and I am thankful that I have met him in person.

Jorge you inspire us with your advice to Just Do It!

Jono Bacon
At my first UDS I was in awe of the people around me. They were brilliant high energy people committed to Ubuntu and open source. There was a fantastic energy and passion in every session I attended. While I had offered what thoughts I had and signed up to undertake work items in many sessions I felt like a small fish in a sea of very big fish. It was Jono who took the time to let me know that he was impressed with my willingness to speak up, volunteer to undertake work and get things done. He made me feel as though my contributions were appreciated. It is an awesome feeling I will remember for the rest of my life. He inspired me that day to continue to contribute and to help others do the same.

Jono, you have my utmost respect for your ability to inspire people to take on important work and make the world a better place.

Mark with a student from Poland

Mark Shuttleworth
While many would thank Mark for his unique vision for Ubuntu or his massive contribution of money to fund the project, I would like to thank him for the personal touch he exhibits to members of the community. Mark took the time to autograph a picture for my young son who was impressed that I knew a person who had been in space. To this day my son tells his peers at school about the picture and keeps it on his night stand. I also remember a young man at his first UDS that had a great idea and wanted to present it to Mark. I mentioned this to Mark and he immediately made time to meet the young man and listened intently to his idea. The young man felt he had a limited ability to impact the project as a college student from Poland, but after speaking with Mark he was inspired and felt that he could make a difference in his local community and in the Ubuntu Project. To this day I am amazed at the passion to do good that I have seen Mark exhibit.

Thanks for creating the project Mark; you are truly amazing.

Laura Czajkowski
I have worked with Laura on the LoCo Council and on the Community Council, and she is a fantastically dedicated, hard-working person who is very passionate about Ubuntu LoCo Teams. She is an advocate for women in technology and open source. Laura has helped move many projects along and is one of the hardest working people I have ever met. It is amazing how much work she does behind the scenes without ever seeking recognition or thanks.

Thank you Laura for all your hard work and dedication to the Ubuntu Community.

Brian Neil
Brian is one of the first New York Ubuntu LoCo members I met. We met at Wegman’s in Rochester, NY on November 6th, 2008 with the intention of reviving the NY LoCo team. Over the next several years Brian played a key role in helping me expand the activities of the team. He helped organize the launch parties, presentations, irc meetings and other activities. Brian helped man many booths at local technology events and was instrumental in getting the team copies of CDs before we were eligible to receive them from Canonical.

Thank you Brian!

Daniel Holbach
What a truly amazing person! Daniel is very thoughtful and understanding when dealing with important issues in the Ubuntu community. He takes on multiple tasks with ease and is always cheerful and energetic. He helps to keep the Community Council organized and on task. When Daniel contributes his thoughts they are always well thought out and of high value.

Daniel you are awesome my friend!

The Ubuntu Community is filled with unique, intelligent and amazing people. There is not enough space to mention everyone, but I truly feel enriched for having met many of you either in-person or online. Each and every one of you help make the Ubuntu Community amazing!


21 November, 2014 03:26AM

José Antonio Rey: Community Appreciation Day

And again, I don’t know how to start a blog post. I believe that one of my weak points is that I don’t know how to start writing things down. But meh, we’re here because it’s the Ubuntu Community Appreciation Day. And here am I, part of this huge community for more than three years. It’s been an awesome experience ever since I joined. And I am grateful to a whole bunch of people.

I know it may sound like a cliché, but seriously, listing all the people who I have met and contributed with in the community would be basically impossible for me. It would be a never-ending list! All I can say right now is that I am so, so thankful for crossing paths with so many of them. From developers, translators, designers and more, the Ubuntu community is such a diverse one, with people united by one thing: Ubuntu.

When I joined the community I was a kind-of disoriented 14-year-old guy. As time passed, the community has helped me develop skills, from improving my English (Spanish is my native language, for those who didn’t know) to getting me started in programming (something I didn’t know about a couple of years ago!). And I’ve formed great friendships along the way.

Again, all I can say is I am forever grateful to all those people who I have worked with, and to those who I haven’t too. We are working on what’s the future of open computing, and all of this wouldn’t be possible without you. Whether you contributed in the past or are still contributing to the community, rest assured that you have helped build this huge community.

Thank you. Sincerely, thank you.


21 November, 2014 02:48AM

Valorie Zimmerman: Ubuntu Community Appreciation Day - Thank you!

See https://wiki.ubuntu.com/UCADay for more about this lovely initiative.

Thank you maco/Mackenzie Morgan for getting me involved in Ubuntu Women and onto freenode.

Thank you akk/Akkana Peck, Pleia2/Lyz Joseph, Pendulum/Penelope Stow, belkinsa/Svetlana Belkin and so many more of the Ubuntu Women for being calm and competent, and energizing the effort to keep Ubuntu welcoming to all.

Thank you to my Kubuntu team, Riddell/Jonathan Riddell, apachelogger/Harald Sitter, shadeslayer/Rohan Garg, yofel/Philip Muscovak, ScottK/Scott Kitterman and sgclark/Scarlett Clark for your energy, intelligence and wonderful work. Your packaging and tooling makes it all happen. The great people who help users on the ML and in IRC and on the forums keep us going as well. And the folks who test, who are willing to break their systems so the rest of us don't have to: thank you!

There are so many people (some of the same ones!) to thank in KDE, but that's a separate blogpost. Your software keeps me working and online.

What a great community working together for the betterment of humanity.

Ubuntu: human kindness.

21 November, 2014 12:41AM by Valorie Zimmerman (noreply@blogger.com)

Benjamin Kerensa: Community Appreciation Day: You All Rock!

Today is Ubuntu Community Appreciation Day and I wanted to quickly recognize the following people, but before doing so, I want to thank all the contributors that make the Ubuntu Community what it is.

Elizabeth Krumbach Joseph

Elizabeth is a stellar community contributor who has provided solid leadership and mentorship to thousands of Ubuntu Contributors over the years. She is always available to lend an ear to a Community Contributor and provide advice. Her leadership through the Community Council has been amazing and she has always done what is in the best interest of the Community.

Charles Profitt

Charles is a friend of the Community and a long-time contributor who always provides excellent and sensible feedback as we have discussions in the community. He is among the few who will always call it how he sees it and always has the community’s best interest in mind. For me he was very helpful when I first started building communities in Ubuntu, sharing his own experiences and how to get through bureaucracy and do awesome.

Michael Hall

Michael is a Canonical employee who started as a Community Contributor, and of all the employees I have met who work for Canonical, it is Michael who has always seemed best able to balance his role at Canonical with contributing. He is always fair when dealing with contributors and has an uncanny ability to see things through the Community lens, which I think many at Canonical cannot. I appreciate his leadership on the Community Council.

Thanks again to all those who make Ubuntu one of the best Linux distros available for Desktop, Server and Cloud! You all rock!

21 November, 2014 12:16AM

November 20, 2014

Svetlana Belkin: Community Appreciation Day 2014

In the light of Community Appreciation Day, I would like to thank everyone in the Ubuntu Community for doing a great job contributing to Ubuntu - from promoting it to fixing bugs or from leading events to teaching others.  There are two people and one group that I would like to really, really thank.

The first one is Elizabeth Krumbach Joseph of Ubuntu Women.  She was the first one who interacted with me when I started last year.  From that point on, she mentored me (not formally, but in an organic way) on how to do things within the Community, like how to reply on mailing lists in a way that’s readable.  She also supported me with the various ideas that I came up with.

Ubuntu Ohio Team’s very fine leader, Stephen Michael Kellat, is the next one.  He mentored me on how to deal with the state of our LoCo and how to think in a different way on certain topics.

The group of people who I want to thank is Phil Whiteside and the Lubuntu Community (mainly the folks of the Lubuntu Admins team).

P.S. I would like to thank Michael Hall for his blog post.


20 November, 2014 09:46PM

Michael Hall: Community Appreciation Day

When things are moving fast and there’s still a lot of work to do, it’s sometimes easy to forget to stop and take the time to say “thank you” to the people that are helping you and the rest of the community. So every November 20th we in Ubuntu have a Community Appreciation Day, to remind us all of the importance of those two little words. We should of course all be saying it every day, but having a reminder like this helps when things get busy.

Like so many who have already posted their appreciation have said, it would be impossible for me to thank everybody I want to thank. Even if I spent all day on this post, I wouldn’t be able to mention even half of them.  So instead I’m going to highlight two people specifically.

First I want to thank Scarlett Clark from the Kubuntu community. In the lead up to this last Ubuntu Online Summit we didn’t have enough track leads on the Users track, which is one that I really wanted to see more active this time around. The track leads from the previous UOS couldn’t do it because of personal or work schedules, and as time was getting scarce I was really in a bind to find someone. I put out a general call for help in one of the Kubuntu IRC channels, and Scarlett was quick to volunteer. I really appreciated her enthusiasm then, and even more the work that she put in as a first-time track lead to help make the Users track a success. So thank you Scarlett.

Next, I really really want to say thank you to Svetlana Belkin, who seems to be contributing in almost every part of Ubuntu these days (including ones I barely know about, like Ubuntu Scientists). She was also a repeat track lead last UOS for the Community track, and has been contributing a lot of great feedback and ideas on ways to make our amazing community even better. Most importantly, in my opinion, is that she’s trying to re-start the Ubuntu Leadership team, which I think is needed now more than ever, and which I really want to become more active in once I get through with some deadline-bound work. I would encourage anybody else who is a leader in the community, or who wants to be one, to join her in that. And thank you, Svetlana, for everything that you do.

It is both a joy and a privilege to be able to work with people like Scarlett and Svetlana, and everybody else in the Ubuntu community. Today more than ever I am reminded about how lucky I am to be a part of it.

20 November, 2014 08:44PM

Jorge Castro: I appreciate Jose Antonio Rey!

For this year’s Ubuntu Community Appreciation day I’d like to thank Jose Antonio Rey for his tireless contribution to Juju Charms and for running Ubuntu on Air.

20 November, 2014 08:29PM

hackergotchi for Whonix

Whonix

Installing VirtualBox Guest Addition by Default?

We've been recommending against installing VirtualBox Guest Addition for a while now. It's time to reconsider this.

The post Installing VirtualBox Guest Addition by Default? appeared first on Whonix.

20 November, 2014 07:54PM by Patrick Schleizer

hackergotchi for Ubuntu developers

Ubuntu developers

Nekhelesh Ramananthan: Appreciation for Riccardo Padovani

This is my first time participating in the Ubuntu Community Appreciation Day. I think it is a great idea to publicly acknowledge the work of others and thank them for their work to improve Ubuntu. After all, Ubuntu is a community where people come together to collaborate, have fun and bring technology to the masses in a humanly fashion.

Anyway, the person I would like to thank is Riccardo Padovani, whose contributions spread across several apps like Reminders, the Ubuntu Browser, Clock and Calculator, as well as various other personal projects. In particular, his work shows how one can get involved, work on the applications that you use daily and improve them. Riccardo is a beacon of inspiration for others, including myself.

It is definitely a challenge to juggle University and open-source work, and by the looks of it, he seems to have achieved a perfect equilibrium.

Thanks Riccardo for everything and keep up the good work!

20 November, 2014 02:37PM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Review Notebook HP Compaq NC4400

It has been quite a while since we last wrote a review of computer hardware that supports GNU/Linux, the reason being that we had not found hardware unusual enough to review. This time we are reviewing the HP Compaq NC4400 notebook; we have actually owned hardware from this series several times before, but due to time constraints we were never able to review it, and only now we

20 November, 2014 02:33PM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Is Amnesty giving spy victims a false sense of security?

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they look for.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

Instead, the only warning I received was from Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:


I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download an open source web browser like Firefox.
  • Redirect all visitors to your web site to use the HTTPS encrypted version of the site.
  • Using spyware-free open source software such as the Linux operating system and LibreOffice for all Amnesty's own operations, making a public statement about your use of free open source software and mentioning this in the closing paragraph of all press releases relating to surveillance topics.
  • Encouraging Amnesty donors, members and supporters to choose similar software especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as SalesForce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encouraging the public to move away from centralized cloud services such as those provided by their smartphone or social networks and use de-centralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.


While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

20 November, 2014 12:48PM

Daniel Holbach: Appreciation for Michael Hall

Today marks another Ubuntu Community Appreciation Day, one of Ubuntu’s beautiful traditions, where you publicly thank people for their work. It’s always hard to pick just one person or a group of people, but you know what – better appreciate somebody’s work than nobody’s work at all.

One person I’d like to thank for their work is Michael Hall. He is always around, always working on a number of projects, always involved in discussions on social media and never shy to add yet another work item to his TODO list. Even with big projects on his plate, he is still writing apps, blog entries, charms and hacks on a number of websites and is still on top of things like mailing list discussions.

I don’t know how he does it, but I’m astounded how he gets things done and still stays friendly. I’m glad he’s part of our team and tirelessly working on making Ubuntu a better place.

I also like this picture of him.

cat5000

Mike: keep up the good work! :-)

20 November, 2014 12:13PM

Jonathan Riddell: Kubuntu CI: the replacement for Project Neon

KDE Project:

Kubuntu CI
[thanks to jens for the lovely logo]

Many years ago Ubuntu had a plan for Grumpy Groundhog, a version of Ubuntu which was made from daily packages of free software development versions. This never happened but Kubuntu has long provided Project Neon (and later Project Neon 5) which used Launchpad to build all of KDE Software Compilation and make weekly installable images. This is great for developers who want to check their software works in a final distribution or want to develop against the latest libraries without having to compile them, but it didn't help us packagers much because the packaging was monolithic and unrelated to the packages we use in Kubuntu proper.

Recently Harald has been working on a replacement, Kubuntu Continuous Integration (Kubuntu CI), which makes packages fresh each day from KDE Git for Frameworks and Plasma and crucially uses the Kubuntu packaging branches. There are three PPAs: unstable (unchecked), unstable daily (with some automated checking to ensure it builds) and unstable weekly (with some manual checking).
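If you want to try one of these on a spare machine, it is only a PPA add away. A hedged sketch follows; the PPA name is my guess for illustration, so check the Kubuntu CI pages for the real one before adding anything:

# The PPA name below is an assumption for illustration only.
sudo add-apt-repository ppa:kubuntu-ci/unstable
sudo apt-get update && sudo apt-get dist-upgrade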

At the same time he's been hard at work making weekly Kubuntu CI images which can be run as a live image or installable. They include the latest KDE Frameworks, Plasma 5 and a few other apps.

We've moved our packaging into Debian Git because it'll make merges ever so much easier and mean we can share fixes faster.

The Kubuntu CI Jenkins setup has the reports on what has built and what needs fixed.

Ahora es la hora (now is the time).

20 November, 2014 10:46AM

hackergotchi for Maemo developers

Maemo developers

Replacing your desktop laptop with a ITX workstation

If you use your laptop as a desktop replacement, you will at some point get an external display and a mouse/keyboard for more convenient usage.
At this point the laptop becomes only a small case of non-upgradable components.

Now you could just as well replace your laptop with a real case of comparable size. This will make your PC not only easily upgradable, but also allow higher-end components while being quieter at the same time.

 

Let's start with a quick size comparison between the Fractal Design Node 304 mini-ITX case and a standard 15.4″ wide-screen laptop:

the footprint is about the same as a 15.4" laptop.. ..and the height is even a little less

As you can see, the case has about the same footprint and is even a little lower than the open laptop.

Of course there are many choices of mini-ITX cases – and some of them are even smaller than the Node 304. However I think the Node has a good balance between size and room for standard components; you will not need an ITX-sized graphics card or a low-profile CPU cooler. See this review for details of the case layout.

Now let's discuss what we can actually fit inside this small cube to see whether my claims about performance and volume hold.

As it was one of the main reasons for selecting the case, let's start with the

CPU Cooler

For CPU coolers the credo is: the bigger the better.  If the heat is spread over a large area it means we need to move less air to carry it away. This again means the CPU fan does not need to spin that fast and our machine is quieter. (or can go faster at the same volume level)

The Node 304 fits CPU coolers up to 165mm in height. This allows us to use a tower cooler with a 140mm fan. When aligned with the 140mm rear fan of the case this gives us a nice airflow to cool the CPU.

To use up the available space I mounted the Alpenföhn Brocken 2, which is exactly 165mm high. The included fan spins at up to 1100rpm, which is in the same range as the 1000rpm rear fan of the case. It is generally regarded as one of the best and quietest coolers in the 35€ class.

Mainboard

The main question when selecting the mainboard is where the CPU socket is located; a bad placement might block the PCIE slot and ideally the cooler should be aligned with the rear fan.

Furthermore the mainboard should include as much I/O as possible as we only have one PCIE expansion slot which will be occupied by the graphics card.

Let's take an exemplary look at the current H97 ITX board lineup. As you might have guessed, I will go with Intel, but the basic discussion also applies to AMD boards. Note that I did not bother including Asus as their board does not come with Wifi.

Brocken 2 blocks the PCIE slot, so the MSI board is out. The ASRock would basically work... but Gigabyte got the perfect layout.

Green marks the extent of the LGA1150 mount holes used as reference. Red marks the extent of the Brocken 2 Cooler and pink marks the extent of the corresponding fan.

Basically both Gigabyte and ASRock would work. However the more central socket placement of Gigabyte aligns the cooler with the rear case fan which improves airflow.

the cooler is nicely aligned with the rear fan

When comparing the boards with their previous H87 iteration, it is worth noting that only Gigabyte took the chance to improve their board layout. With their H87 board they had the socket at the same bad spot as MSI.

Graphics Card

As we are building a workstation here, we use a discrete graphics card. There are several 17cm-sized graphics boards available – however the small size allows using only one fan, and thus the ITX-sized graphics boards are considerably louder than their larger brothers. For the GTX 760 the difference under load is up to 10dB. Basically the same argument as with the CPU coolers applies: the bigger the better.
The Node 304 can hold graphics cards up to 300mm in length, so basically we can use any graphics card we want.

Looking at the current Nvidia GTX900 series, MSI got the best cooler design with its Twin Frozr v5. The two 100mm fans are completely off when idle and still very quiet under load.

Power Supply Unit

You can use the be quiet! PSU calculator to find out the concrete PSU requirements. I ended up with 263W for the configuration described in this post. As PSUs have maximum efficiency at about 50% load, one should go with a 500-550W PSU here.

However the Node 304 imposes a restriction on the PSU length when using a discrete GPU: it may not be longer than 160mm. This means that if the PSU has cable management it must be shorter than 160mm so the cable management plugs do not interfere.

the CS550M is only 140mm deep.. ..but it still gets quite crowded

Unfortunately (for be quiet!) be quiet! does not offer PSUs shorter than 160mm, so we go with the Corsair CS550M instead, which is impressively short at 140mm while still being quiet and having an 80 Plus Gold certificate.

Assembly Notes

  • align the clips like this when mounting the cooler
  • you should install the RAM before mounting the fan
  • remove the crossbar for easy mounting

Other components

The discussion above should give you enough hints to build your own Node 304 based workstation. But in case you are interested in my complete build or need some more hints, here are the other components used:

CPU

I go with Intel here because I need CPU power for compiling. If you just want to build a gaming machine/SteamBox, an AMD CPU is probably going to be enough.

My requirements are a quad-core with hyperthreading, which basically boils down to a Core i7. However there is also the Xeon E3-1231v3, which is basically an i7 without the integrated graphics. This saves 4W of TDP and some money, as we use a dedicated GPU anyway.

Chipset

Chipsets really do not matter much any more. The performance-critical part of the chipset, the northbridge, is now on the CPU. The remaining differences between H97 and Z97 are overclocking and SLI. However SLI is moot on ITX boards anyway and overclocking is usually added by the manufacturer via their custom BIOS.

The main reason for picking H97 over H87 is that the hardware vendors had another try to improve over their previous H87 boards.

Then there is also the B85 chipset, which additionally lacks software RAID support, but otherwise would be sufficient. However none of the B85 ITX boards comes with Wifi which cancels out their price advantage. For more details on the differences between B85, H87 and Z87 see here.


20 November, 2014 10:22AM by Pavel Rojtberg (pavel@rojtberg.net)

November 19, 2014

hackergotchi for Cumulus Linux

Cumulus Linux

The Proof Is in The Facebook Data Center Pudding

In case you missed it, Wired just exposed the elephant in the room with last week’s article on the next generation Facebook data center.

For years, anyone who’s had to build out or run a network has handed over large sums of money to the networking hardware titans, without the freedom to choose what to run on that hardware. But I’m sure if you’re someone who placed one of those orders, the thought crossed your mind whether this was always going to be the norm.

Every time before you clicked or signed on that dotted line, you wondered whether it’s worth buying from the incumbents and playing in their locked-in world. Maybe deep down you had some burning desire to break away, but were afraid to stray from the blue chip way of life.

I feel your pain and it’s okay because we all want to maximize the value of our dollar. That’s why we all shop for the best choice and at the best price point; otherwise, we will just wait and buy another day.

I mean, you have the freedom to buy the servers you want, so why not have the freedom to buy the network gear that you want?

Frankly, the new Facebook data center just proved that you can without waiting for a Black Friday or Cyber Monday fire sale at Cisco, Juniper, or Arista.

Tucked away in Altoona, Iowa, is more than a building with optics and networking gear inside. It is undeniable proof that you can build your network tailored to your needs and at the price point that you like.

So break out the champagne and enjoy some pudding too.

You, your team, and your company can embrace an unlocked world.

Your network no longer needs to be cumbersome, expensive, and out of your control.

We are living in a brave new ONIE world and the best part about it is you don’t need to be Facebook to live in this world too.

Cumulus Networks and our partners have already done the legwork for you with a global supply chain that brings the box from the factory floor to your door.

So relax, it’s easier than going to IKEA because we offer world-class support.

Buy > Boot > Install > Buckle Up for Hyper Growth

Open Network Install Environment (ONIE) is an open source initiative that defines an open “install environment” for bare metal network switches.

The post The Proof Is in The Facebook Data Center Pudding appeared first on Cumulus Networks Blog.

19 November, 2014 04:00PM by Filipp Ovchinnikov

hackergotchi for Ubuntu developers

Ubuntu developers

Alan Pope: Scopes Contest Mid-way Roundup

I recently blogged about my Ubuntu Scopes Contest Wishlist after we kicked off the Scopes Development Competition where Ubuntu Phone Scope developers can be in with a chance of winning cool devices and swag. See the above links for more details.

As a judge on that contest I’ve been keeping an eye out for interesting scopes that are under development for the competition. As we’re at the half way point in the contest I thought I’d mention a few. Of course me mentioning them here doesn’t mean they’re favourites or winners, I’m just raising awareness of the competition and hopefully helping to inspire more people to get involved.

Developers have until 3rd December to complete their entry to be in with a chance of winning a laptop, tablet and other cool stuff. We’ll accept new scopes in the Ubuntu Click Store at any time though :)

Robert Schroll is working on a GMail scope giving fast access to email.

gmail

Bogdan Cuza is developing a Mixcloud scope making it easy to search for cool songs and remixes.

Screenshot from 2014-11-10 18:57:32

Sam Segers has a Google Places scope making it easy to find local businesses.

Places-scope002

Michael Weimann has been working on a Nearby Scope and has been blogging about his progress.

Main

Dan has also been blogging about the Cinema Scope.

img2

Finally Riccardo Padovani has been posting screenshots of his Duck Duck Go Scope which is already in the click store.

divergentduck

I’m sure there are other scopes I’ve missed. Feel free to link to them in the comments. It’s incredibly exciting for me to see early adopter developers embracing our fast-moving platform to realise their ideas.

Good luck to everyone entering the contest.

19 November, 2014 12:03PM

Rhonda D'Vine: The Pogues

Actually I was already working on a different music blog entry, but I want to get this one out. I was invited to join the Organic Dancefloor last Thursday. And it was a really great experience. A lot of nice people enjoying a dance evening of sort-of improvisational traditional folk dancing with influences from different parts of Europe. Three bands playing throughout the evening. I definitely plan to go there again. :)

Which brings me to the band I want to present you now. They also play sort-of traditional songs, or at least with traditional instruments, and are also quite danceable to. This is about The Pogues. And these are the songs that I do enjoy listening to every now and then:

  • Medley: Don't meddle with the Medley. Rather dance to it.
  • Fairytale of New York: Well, we're almost in the season for it. :)
  • Streams of Whiskey: Also quite the style of song that they are known for and party with at concerts.

Like always, enjoy!

/music | permanent link | Comments: 2 | Flattr this

19 November, 2014 11:10AM

Walter Lapchynski: fullscreen slides in Hangouts workaround

More to come on the Ubuntu Online Summit soon but in the interim, I wanted to bring up something I learned a little too late.

Google Hangouts is a handy little tool. Outside of providing an alternative to the likes of Skype, it also features some useful apps for use with a virtual tech conference, which is what UOS really is. One of them is called Hangout Toolbox and has a feature called "Lower Third" that will allow you a pretty logo-ized tag line.

But the thing really useful to something like UOS is the default Screenshare app. Clicking on it, you get the option of either sharing the entire screen or one of your windows. So logically, you open the window with your presentation and start the full screen slide show, right?

Yeah, not exactly. I did that and I was going along talking away for my first presentation and no one could see anything beyond the first slide. I was manually advancing and it looked good on my side, but to everyone else, it was just frozen on the title. Since I was in full screen mode and no one had yet joined the Hangout, I had no idea what was going on though people were trying to get my attention on IRC.

I discovered a solution rather quickly: a windowized presentation. That is what I ended up doing, but things can look kind of tiny. Still, there are hordes of posts out there on people doing similar things in PowerPoint and Keynote.

This is not the right way, though. We want full screen. So how do we do that?

It's simple, really. You share the full screen— not a window— then navigate to your presentation and start the slide show.

I guess that Hangouts thinks that the full screen window is not the same as the app window itself. Which is strange because, according to xprop -root | grep ^_NET_CLIENT_LIST, it's not a different window.

Unfortunately, unlike a lot of the other tools that the community uses, Google Hangouts is not open source. We can't just file a bug report and get to work on it (though you can file a bug report). This is really contrary to the spirit of Ubuntu. In fact, there's already a bug report for this very problem, the proprietary nature of Hangouts (not the first such issue, either).

It seems that especially for this public gathering of Ubuntu contributors and users, that this would be the most important place to put our best open source foot forward. That being said, I encourage you to confirm the bug and participate in helping to find a cure (rather than a mere treatment) to the malady of copyright.

19 November, 2014 12:40AM

Eric Hammond: AWS Lambda: Pay The Same Price For Faster Execution

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
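As a back-of-the-envelope check (my own sketch, assuming the duration price is roughly $0.00001667 per GB-second, as I read the current pricing page):

# Cost per thousand invocations = (memory in GB) x (duration in s) x 1000 x price
price=0.00001667
cost() { awk -v m="$1" -v s="$2" -v p="$price" \
  'BEGIN { printf "%4d MB x %4.1f s: $%.4f per 1000 invocations\n", m, s, (m/1024)*s*1000*p }'; }
cost 128 10   # about $0.021
cost 640 2    # about $0.021 -- same cost, five times faster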

Things that would cause the higher memory/CPU option to cost more in total include:

  • Time chunks are rounded up to the nearest 100 ms. If your Lambda function runs near or under that with less memory, then increasing the CPU allocated will make it return faster, but the rounding up will cause the resulting cost to be higher.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then those fixed time actions will end up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

Original article: http://alestic.com/2014/11/aws-lambda-speed

19 November, 2014 12:01AM

November 18, 2014

Charles Profitt: Kali Linux Network Scanning Cookbook Review

Chapter 1: Getting Started
Good detailed coverage of setting up VMWare Player (Windows) or VMWare Fusion (Mac OS X). I would have liked to see the author at least cover VirtualBox, as it works on Windows, OS X and Linux.

The discussion on having vulnerable targets to work with covers Metasploitable, which is an excellent choice. I am glad the point was stressed to not expose a Metasploitable system to any untrusted network. While I appreciate learning on Windows XP, I would have expected a cookbook to focus on either the latest Windows OS (8.1) or the most used Windows OS (Windows 7).

Chapter 2: Discovery
For some IT professionals the review of the OSI model is potentially redundant, but for many it is essential to truly understand the process of scanning a network. The discussion on layer 2 vs layer 3 vs layer 4 discovery was very clear and effective.

I like the depth given for each of the chosen tools (Scapy, ARPing, Nmap, NetDiscover, Metasploit, ICMP ping, fping, and hping3). I have not made much use of Scapy, but I think I will be adding it to my tool bag due to the excellent Python examples given that make use of it.

Chapter 3: Port Scanning
This chapter was well done with coverage of Scapy, Nmap, Metasploit, Hping3, Dmitry and Netcat. Nmap is always a favorite of mine, but I was particularly impressed by the coverage of Scapy scripts used for scanning for zombies.

Chapter 4: Fingerprinting
The tools covered in this chapter are Netcat, Python sockets, Dmitry, Nmap NSE, Amap, xProbe2, pOf, Onesixtyone and SNMPwalk. I think the best part about this chapter is the explanation of how the various programs identify (fingerprint) the target. In particular explaining how xProbe2 can claim that several identifications are 100% when there can obviously really be only one that is accurate.

Chapter 5: Vulnerability Scanning
This chapter covered the Nmap Scripting Engine, MSF auxiliary modules, Nessus, HTTP interaction and ICMP interaction. I liked the Python scripts and use of wget in the sections on HTTP interaction. I would have liked to see the chapter deal with OpenVAS in addition to Nessus.

Chapter 6: Denial of Service and Chapter 7: Web Application Scanning
These chapters both cover areas I do not have much opportunity to play with. I did like the coverage of the Burp Suite. For people interested in looking at these areas there is a wealth of knowledge here.

Chapter 8: Automating Kali Tools
This is the chapter that reveals the payoff of using a Linux-based security tool: the ease of scripting each process. I particularly liked the discussion on how to analyze Nmap output with grep.
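To give a flavour of the kind of one-liner involved (my own illustration, not an excerpt from the book), Nmap's grepable output makes it easy to pull the live hosts out of a ping sweep:

# Ping-sweep a subnet and keep only the hosts that answered.
# The subnet is a placeholder; adjust to your lab network.
nmap -sn -oG - 192.168.1.0/24 | grep "Status: Up" | awk '{print $2}' > live-hosts.txt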

Overall, I feel the book is a solid addition to the libraries of systems administrators and penetration testers from novice to intermediate.


18 November, 2014 11:11PM

Eric Hammond: lambdash: AWS Lambda Shell Hack

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

What the Lambda function is allowed to do/access. Log to Cloudwatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt
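For example, to answer the question from the introduction about which Perl and Python versions are available, change the command in the args file and invoke again, reusing the variables defined above (this assumes both interpreters are on the PATH in the Lambda environment; if not, stderr.txt will say so):

cat > $function-args.json <<EOM
{
    "command": "python --version 2>&1; perl --version",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"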

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

18 November, 2014 09:21PM

Ubuntu LoCo Council: Regular LoCo Council Meeting for November 2014

Meeting information

Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:01.

  • Listing of Sitting Members of LoCo Council (20:01)
    • For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
    • Marcos Costales, term expiring 2015-04-16
    • Jose Antonio Rey, term expiring 2015-10-04
    • Pablo Rubianes, term expiring 2015-04-16
    • Sergio Meneses, term expiring 2015-10-04
    • Stephen Michael Kellat, term expiring 2015-10-04
    • There is currently one vacant seat on LoCo Council
  • Roll Call (20:01)
    • Vote: LoCo Council Roll Call (All Members Present To Vote In Favor To Register Attendance) (Denied)

Referral of matters to Council for disposition outside this meeting due to a lack of quorum

The discussion about “Referral of matters to Council for disposition outside this meeting due to a lack of quorum” started at 20:04.

  • ACTION: ubuntu-lococouncil To handle the Re-Verification application of Ubuntu Oregon via bugmail
  • ACTION: skellat To impose the case management bug for Ubuntu Oregon and notify Point of Contact for Ubuntu Oregon
  • ACTION: ubuntu-lococouncil To receive further After-Action Reports on Ubuntu Online Summit 1411 via e-mail at loco-council@lists.ubuntu.com
  • Ubuntu Online Summit was fairly busy for the LoCo Council even though, again, we had difficulty attending.
  • Multiple workitems were picked up by the Council during the event.
  • LINK: https://blueprints.launchpad.net/ubuntu/+spec/development-1411-iso-l10n-uefi
  • LINK: http://summit.ubuntu.com/uos-1411/meeting/22380/development-1411-iso-l10n-uefi/
  • ISO Localization — Survey LoCo teams as to tools being used for localization. How many times has the wheel been re-invented? What changes are being made?
  • ISO Localization — Contact ubuntu-devel for the three issues (1. use of ubuntu-defaults-builder to make 64-bit signed ISOs; 2. how to simplify use for LoCos; 3. how do we integrate the various hacks)
  • LINK: https://blueprints.launchpad.net/ubuntu/+spec/community-1411-planning-v-cycle-events
  • LINK: http://summit.ubuntu.com/uos-1411/meeting/22366/community-1411-planning-v-cycle-events/
  • Community Events During the Vivid Cycle — Develop in concert with Canonical Community Team appropriate feedback mechanism and measurements tools for assessment of UGJ
  • Community Events During the Vivid Cycle — Assess the state of social media usage by LoCo Teams and look at developing best practices.
  • Community Events During the Vivid Cycle — Develop review of question: How do we “follow the zeitgeist” in terms of maximizing use of social networks in approaching users
  • Community Events During the Vivid Cycle — Develop plan for AskUbuntu Patrol game/project/exercise including challenge coins for UGJ 1 to 2 during 15.04 cycle
  • Community Events During the Vivid Cycle — Cooperate with Canonical Community Team in developing developing AskUbuntu documentation for Patrol game/project/exercise
  • Relevant sessions where we should have had attendance but did [not] include: LoCo Team Activity Review, Promoting the Ubuntu phone in LoCos, Ubuntu Oregon LoCo meet and greet and planning, Transparency and Participation
  • ACTION: ubuntu-lococouncil To review the state of community teams being verified
  • As per their request made by blog post, Ubuntu Vancouver has been removed from the ~locoteams set on Launchpad
  • LINK: http://randall.executiv.es/we-are-NOT-loco
  • The next regular meeting is scheduled for December 15, 2014. The meeting will be convened at 2000 UTC.
  • All persons with questions, concerns, or business to come before LoCo Council before the next regularly scheduled meeting should write to the Council at loco-council@lists.ubuntu.com so that we may be made aware of concerns and potentially proceed to action.
  • ACTION: skellat to post report of meeting to blog and loco-contacts@lists.ubuntu.com

Any Other Business

The discussion about “Any Other Business” started at 20:07.

Vote results

Action items, by person

  • skellat
    • skellat To impose the case management bug for Ubuntu Oregon and notify Point of Contact for Ubuntu Oregon
    • skellat to post report of meeting to blog and loco-contacts@lists.ubuntu.com
  • **UNASSIGNED**
    • ubuntu-lococouncil To handle the Re-Verification application of Ubuntu Oregon via bugmail
    • ubuntu-lococouncil To receive further After-Action Reports on Ubuntu Online Summit 1411 via e-mail at loco-council@lists.ubuntu.com
    • ubuntu-lococouncil To review the state of community teams being verified

Done items

  • (none)

People present (lines said)

18 November, 2014 08:38PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – November 18, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141118 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel has been rebased to the
v3.18-rc5 upstream kernel. We have still withheld uploading to
the archive until we’ve progressed to a later -rc candidate.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~4 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~9 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today (18-Nov):

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 31-Oct through 22-Nov
    ====================================================================
    31-Oct Last day for kernel commits for this cycle
    02-Nov – 08-Nov Kernel prep week.
    09-Nov – 15-Nov Bug verification & Regression testing.
    16-Nov – 22-Nov Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

18 November, 2014 05:21PM


Charles Butler: A Laymans Guide to the "Big Data" Ecosystem

"Big Data" is now synonymous with marketing, and buzzword bingo. As a layman getting started in the ecosystem I found it truly difficult to really grasp what it was, and where I should be looking to get started. This will be the first post in a multi-post series breaking down the big-data stack, leveraging examples with Juju.

Disambiguation of the term 'Hadoop'

I don't know about you, but when I think "Big Data" I think of one thing: the 800lb gorilla in the room, and that's Hadoop. It's become synonymous with crunching petabytes of data in the name of everything from medical research, market analysis, to crunching website results powering your favorite search engine and computing trends over long periods of time. But that's kind of a misnomer, as there is an entire ecosystem around this software; and I think Wikipedia has defined this better than I ever could:

Apache Hadoop is an open-source software framework for distributed storage and distributed processing of Big Data on clusters of commodity hardware. Its Hadoop Distributed File System (HDFS) splits files into large blocks (default 64MB or 128MB) and distributes the blocks amongst the nodes in the cluster. For processing the data, the Hadoop Map/Reduce ships code (specifically Jar files) to the nodes that have the required data, and the nodes then process the data in parallel. This approach leverages data locality, in contrast to conventional HPC architecture which usually relies on a parallel file system (compute and data separated, but connected with high-speed networking).

Source: Wikipedia

So, in summation - Hadoop is really an ecosystem of applications and utilities (despite the core map-reduce engine being titled 'hadoop'). To further confuse and complicate things there are several vendors creating Hadoop application stacks.

How do you know which one to pick? "Which one makes my job easier?" you might ask. At the end of the day each vendor mixes in their own patches and special flavor of management on top of the Vanilla Apache Hadoop. Some give the patches back, some keep them proprietary to their distribution. It's all about preference, which approach appeals to you, and how much time you want to spend getting started. For the sake of brevity, and attention - I'll pick a middle of the road candidate and take a look at just the major applications in the stack and give illustrations using the Hortonworks flavor.
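One practical note before the bundles below: you need a juju client installed somewhere. On an Ubuntu 14.04 machine that is roughly the following sketch (the PPA and package names are the ones I believe are current, so double-check them against the juju docs):

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core juju-quickstart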

Map Reduce Engines

Hadoop

The core component(s)

Dancing hadoop elephants

The base Apache Hadoop framework (as of v2) is composed of the following modules:

Hadoop Common – contains libraries and utilities needed by other Hadoop modules.

Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.

Hadoop YARN – a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications.

Hadoop MapReduce – a programming model for large scale data processing.

How to deploy Hadoop Core quickly with juju as a reference architecture

juju quickstart bundle:hdp-core-batch-processing

Hadoop Core Bundle Depiction from Juju
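juju quickstart only requests the deployment; the services take a while to come up. A simple way to watch progress (nothing bundle-specific here) is:

# Wait until every unit reports "started" before submitting jobs.
watch juju status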

Tez

High Performance Batch Processing Engine

Tez generalizes the MapReduce paradigm to a more powerful framework based on expressing computations as a dataflow graph. Tez is not meant directly for end-users – in fact it enables developers to build end-user applications with much better performance and flexibility. Hadoop has traditionally been a batch-processing platform for large amounts of data. However, there are a lot of use cases for near-real-time performance of query processing. There are also several workloads, such as Machine Learning, which do not fit well into the MapReduce paradigm. Tez helps Hadoop address these use cases.

How to deploy Tez quickly with juju as a reference architecture

juju quickstart bundle:high-performance-batch-processing

Tez Bundle Depiction from Juju

Distributed Stream Processing

Storm

Real Time Processing of Data Streams

Storm is a distributed computation framework written predominantly in the Clojure programming language. It uses custom created "spouts" and "bolts" to define information sources and manipulations to allow batch, distributed processing of streaming data.

A Storm application is designed as a topology in the shape of a directed acyclic graph (DAG) with spouts and bolts acting as the graph vertices. Edges on the graph are named streams, and direct data from one node to another. Together, the topology acts as a data transformation pipeline. At a superficial level the general topology structure is similar to a MapReduce job, with the main difference being that data is processed in real-time as opposed to in individual batches. Additionally, Storm topologies run indefinitely until killed, while a MapReduce job DAG must eventually end.

How to deploy Storm quickly with juju as a reference architecture

juju quickstart bundle:realtime-analytics-with-storm

Storm Bundle Depiction from Juju

Client Libraries and Supporting Applications for writing Map/Reduce

Hive

Write Map/Reduce applications with a variant of SQL

Hadoop Hive Logo

Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis. Hive supports analysis of large datasets stored in Hadoop's HDFS and compatible file systems such as Amazon S3 filesystem. It provides an SQL-like language called HiveQL with schema on read and transparently converts queries to map/reduce, Apache Tez and in the future Spark jobs. All three execution engines can run in Hadoop YARN. To accelerate queries, it provides indexes, including bitmap indexes.

How to deploy Hive quickly with juju as a reference architecture

juju quickstart bundle:data-analytics-with-sql-like

Hive Bundle Depiction from Juju

Pig

The Rapid Latin Language of Big Data

Pig the rapid map reduce writer

Pig is a high-level platform for creating MapReduce programs used with Hadoop. The language for this platform is called Pig Latin. Pig Latin abstracts the programming from the Java MapReduce idiom into a notation which makes MapReduce programming high level, similar to that of SQL for RDBMS systems. Pig Latin can be extended using User Defined Functions which the user can write in Java, Python, JavaScript, Ruby or Groovy and then call directly from the language.

How to deploy Pig quickly with juju as a reference architecture

juju quickstart bundle:data-analytics-with-pig-latin

Pig Bundle Depiction from Juju

Breaking down comprehension on Pig and Hive - and how they work in the ecosystem

Diagram of Pig vs Hive - credit: http://www.bigdatatrendz.com/2013/10/introduction-to-apache-hive-and-pig.html

Pig and Hive both bundle client side application libraries, and deployed daemon components that bolt on additional functionality for the data scientist working with the data in HDFS or SQL tables. This allows a powerful combination for end-users to write map/reduce applications rapidly.

While they are not stand-alone entities in the Hadoop bundle, they do provide a lower barrier to entry for end-users looking to get into the ecosystem without learning all the intricacies of Map/Reduce programming with just the core Hadoop stack.

Both of these applications communicate with the yarn-master to load a JIT compiled map/reduce application. Hive and Pig both have their own syntax, and translate the queries to a respective Map/Reduce jar that is then distributed to do the queries.

With the core components broken down - we're ready to take a look at the new kid on the block in the next post in the series: Spark for the layman.

18 November, 2014 04:05PM

hackergotchi for Cumulus Linux

Cumulus Linux

Using New Relic Server Monitoring With Cumulus Linux

One of the most visible trends on the Web today is the “SaaS-ification” of the enterprise. Major productivity functions like email and calendaring, customer relationship management (CRM), and IT systems management are gaining greater value by being deployed as cloud-based services. IT and systems monitoring companies like New Relic are thriving in the cloud as well.

Another major trend, one that Cumulus Networks is at the forefront of, is the transformation of the “switch as a server.” If you aren’t familiar, check out Cumulus Networks engineer Leslie Carr’s excellent PuppetConf 2014 presentation. Since Cumulus Linux supports Debian-based packages out of the box, we decided to take New Relic’s Server Monitoring product for a spin. We wanted to see how Cumulus Linux extends Server Monitoring’s functionality to monitoring switches.

Once logged into Cumulus Linux, installing the server agent takes just a few minutes, as expected. Leveraging the documentation and installation guide allowed us to get up and running in minutes.
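For reference, the install boils down to New Relic's standard Debian steps run on the switch itself; the repository, key and package names below are taken from their install guide as I read it, so verify them against the current documentation (YOUR_LICENSE_KEY is a placeholder):

echo 'deb http://apt.newrelic.com/debian/ newrelic non-free' | \
    sudo tee /etc/apt/sources.list.d/newrelic.list
wget -O- https://download.newrelic.com/548C16BF.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install newrelic-sysmond
sudo nrsysmond-config --set license_key=YOUR_LICENSE_KEY
sudo /etc/init.d/newrelic-sysmond start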

Since it’s SaaS, there is obviously no server deployment required, so all you have to do is to log in to your New Relic account and start looking at the performance data that is automagically pushed to your dashboard. Here’s a snapshot of the New Relic Servers console that was configured to receive data from a 2-leaf configuration, provisioned on the Cumulus Workbench:

Snapshot of the New Relic Servers console

Digging into the network tab, we see all the ports that have been configured. These include eth0 on the management network and the four links between the two switches: swp1, swp2, swp3 and swp4. We generated some traffic between the two switches, which is visible on the highlighted port swp2.

New Relic Console

A common concern with SaaS-based cloud services is security. New Relic has implemented HTTPS-based communications as a default setting for all their agents since 2013, providing users the comfort of knowing that all communications are secure.

You can configure the agent to accommodate your networking and firewall setup, and if you are curious about the data being collected by the server agent and how to optimize your use of the Server UI, read New Relic’s documentation.
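
As a hedged example of what that tuning looks like, the agent reads a plain config file; the path and option names below are assumptions based on the nrsysmond agent of the time and may differ in your version:

# /etc/newrelic/nrsysmond.cfg (assumed location); illustrative entries only
license_key=YOUR_LICENSE_KEY
ssl=true
proxy=proxy.example.com:3128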

Another important security aspect worth highlighting is the separation between the control and data plane that Cumulus Linux offers. The server agent is installed to run on the switch CPU and is only able to access control plane data. Since it isn’t able to view the data plane, application and user data running through the switch is untouched. Read this Cumulus Networks knowledge base article to learn more.

One of the greatest benefits for customers is that New Relic offers Server Monitoring for free. In the future, we will explore how to enhance the data being collected, and look at plug-ins, to get greater value from this service.

The post Using New Relic Server Monitoring With Cumulus Linux appeared first on Cumulus Networks Blog.

18 November, 2014 04:00PM by Shishir Garg

hackergotchi for Ubuntu developers

Ubuntu developers

Stuart Langridge: Making a static build of sox

This is super-technical. Caveat lector.

Sox, “the Swiss Army knife of sound processing programs”, has a jolly useful feature where it can read an audio file and dump out the contents as data; exactly what you need to create a waveform image of an mp3. I needed to do this for a particular client project; there are a whole bunch of pages explaining how to create a waveform of an mp3 in PHP, and they pretty much all rely on using lame to convert the mp3 to a wav file and then reading and decoding the wav file with PHP. That seems like a waste to me, since I don’t care about the wav file, and sox does precisely what I want.

However, if you’re writing PHP apps, it’s quite possible that you (as I am, for this project) are hosting the web app in some sort of shared hosting environment, which is fairly sharply limited. So what I wanted was a static build of sox; that is, a single binary with the dependencies I need (mp3-reading, primarily) compiled into it. I managed to put together a static Linux binary by (roughly) following the ggkarman instructions (which are for OS X, but that doesn’t matter here). However, once I had said binary, it didn’t run on the hosting environment; I got the error ./sox: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.14 not found (required by ./sox). What this means is that the binary was built against the version of glibc, the system C library on which basically everything depends, that was on the machine I built it on — my Ubuntu 14.04 desktop. The hosting service has an older version of glibc, and so things didn’t work.

At this point I went to ask questions on #ubuntu-uk, and popey said: can’t you just build it in an older version of Linux — say, a Debian wheezy chroot? I said, dunno, can I? And popey, hero of the revolution that he is, had a wheezy chroot lying around the place and so tried it, and it works.

Below is how we built the static binary. There are probably a bunch of complexities in here that will make it not work for you; in particular, the web hosting machine was 64-bit and so was popey’s build environment, and if that’s not the case for you then epic fail lies ahead. But it may help.


# First, create a debian wheezy chroot
sudo debootstrap --arch=amd64 wheezy ./wheezy/ \
    http://ftp.de.debian.org/debian/
# enter it, thus
sudo  mount -o bind /dev wheezy/dev
sudo mount -t proc none wheezy/proc
sudo chroot wheezy
cd /tmp # somewhere to do this

# update our debian system, and install libmad, 
# which is what we need for mp3 support in sox
apt-get update
apt-get install build-essential
apt-get install libmad0-dev
apt-get install realpath

# now grab sox and its dependencies
mkdir -p deps
mkdir -p deps/unpacked
mkdir -p deps/built
mkdir -p deps/built/libmad
mkdir -p deps/built/sox
mkdir -p deps/built/lame
wget -O deps/sox-14.4.1.tar.bz2 "http://downloads.sourceforge.net/project/sox/sox/14.4.1/sox-14.4.1.tar.bz2?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fsox%2Ffiles%2Fsox%2F14.4.1%2F&ts=1416316415&use_mirror=heanet"
wget -O deps/libmad-0.15.1b.tar.gz "http://downloads.sourceforge.net/project/mad/libmad/0.15.1b/libmad-0.15.1b.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fmad%2Ffiles%2Flibmad%2F0.15.1b%2F&ts=1416316482&use_mirror=heanet"
wget -O deps/lame-3.99.5.tar.gz "http://downloads.sourceforge.net/project/lame/lame/3.99/lame-3.99.5.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flame%2Ffiles%2Flame%2F3.99%2F&ts=1416316457&use_mirror=kent"

# unpack the dependencies
pushd deps/unpacked
tar xvfpj ../sox-14.4.1.tar.bz2
tar xvfpz ../libmad-0.15.1b.tar.gz
tar xvfpz ../lame-3.99.5.tar.gz
popd

# build libmad, statically
pushd deps/unpacked/libmad-0.15.1b
./configure --disable-shared --enable-static --prefix=$(realpath ../../built/libmad)
# Patch makefile to remove -fforce-mem
sed s/-fforce-mem//g < Makefile > Makefile.patched
cp Makefile.patched Makefile
make
make install
popd

# build lame, statically
pushd deps/unpacked/lame-3.99.5
./configure --disable-shared --enable-static --prefix=$(realpath ../../built/lame)
make
make install
popd

# build sox, statically
pushd deps/unpacked/sox-14.4.1
./configure --disable-shared --enable-static --prefix=$(realpath ../../built/sox) \
    LDFLAGS="-L$(realpath ../../built/libmad/lib) -L$(realpath ../../built/lame/lib)" \
    CPPFLAGS="-I$(realpath ../../built/libmad/include) -I$(realpath ../../built/lame/include)" \
    --with-mad --with-lame --without-oggvorbis --without-oss --without-sndfile --without-flac  --without-gomp
make -s
make install
popd

cp deps/built/sox/bin/sox .
rm -rf deps/built
rm -rf deps/unpacked

And now you have a sox binary, which can be used to convert an mp3 to numeric data with sox whatever.mp3 output.dat, and doesn’t need other libraries so you can just drop it into your web hosting and run it with PHP exec(). Hooray.
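
If the raw .dat turns out to be more data than PHP comfortably chews through, sox can also downmix and resample on the way out; a small sketch (the channel count and rate are just illustrative choices for a waveform, not magic values):

# shrink the data before PHP ever sees it: mono output at a low sample rate
./sox whatever.mp3 output.dat channels 1 rate 8k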

18 November, 2014 03:35PM

hackergotchi for SPACEflight

SPACEflight

Hallo Welt! (Hello World!)

Welcome to the German version of WordPress. This is the first post. You can edit or delete it. To avoid spam, head straight into the plugin area and activate the appropriate plugins. Right, enough rambling. Now get down to blogging!

18 November, 2014 12:48PM by Effingo: Near Duplicate Detection

hackergotchi for SolydXK

SolydXK

ISOs 201411

The Home Editions were upgraded to the latest Upgrade Pack and the Business Editions were upgraded with the latest security updates. This time I will not list the version changes of the major applications, but limit myself to the most important changes.

The Live Installer now supports multiple drives, which gives you the ability to install the home directory to a separate drive rather than another partition.

 

You can find more information, and download the ISOs on our product pages:
SolydX Business Edition: http://solydxk.com/business/solydxbe/
SolydK Business Edition: http://solydxk.com/business/solydkbe/
SolydK Back Office: http://solydxk.com/business/solydkbo/
SolydX: http://solydxk.com/homeedition/solydx/
SolydK: http://solydxk.com/homeedition/solydk/

For any questions or issues, please visit our forum: http://forums.solydxk.com/

 

18 November, 2014 10:18AM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Michael Hall: Economic warfare in FOSS

Or, How to destroy a project rather than compete with it.

Whether they are conscious of it or not, many parts of our community have engaged in it, and it’s hurting us all. When a project comes along that some people don’t like, but they can’t (or won’t) compete with it, they will too often resort to a series of attacks that systematically tear that project down.

Those steps, as I have observed them, are recorded below. I do this not to instruct people on how to do it (nobody ever needed it described to them in detail in order to participate in it) but rather in the hope that it will help the rest of us identify it when it starts to happen, and call it out for what it is.

Step 1: Demonize the project

These things always start with an attack on the character of the project itself. The groups that strongly oppose it are, at this point, always too small for their opinion to change anything. But they are almost always vocal enough to be heard, and that’s all that is needed. As post after post, comment after comment, starts to saturate communication channels, they set up the meme that this project is not just technically bad, but morally bad. It shouldn’t exist, the people who built it shouldn’t have built it, and people that use it shouldn’t be using it. They will use emotionally charged language, hyperbole or outright lies to turn people against the very idea of the project existing. They won’t turn everybody, in fact they won’t even turn a majority, but if they can convince enough people that they are right, they can direct this new following towards the next step.

Step 2: Intimidate their supporters

Having raised their quasi-army of opponents through persistent attacks on the project, they will turn them loose on the supporters of that project. They can’t attack them directly, at this point they’re still a relatively small movement and dependent on the acquiescence of the rest of the community to continue in this behavior. Instead they will playfully mock those supporters, making a point to embarrass them or question their intelligence because of their support. The point of this isn’t to change anybody’s mind, it’s to drive those supporters underground, make them hesitant to show their support, and make it look like the project is losing support even if it isn’t. Without a tangible community of users and supporters, the project’s contributors become entirely dependent on each other for the support and recognition that is essential in motivating volunteer contributions.

Step 3: Undermine their contributors

With the loss of a protective support community, and an increasingly emboldened number of attackers, they will at this point begin to attack the people actually contributing to the project itself. By now those involved in the project will have begun to close themselves off from the community, a reaction to try and insulate themselves from what is happening to them. Because of this, there isn’t much the opponents can do to them directly, so instead they begin to corner them in that project by cutting them off from any other project. Calls for boycotts will go out, project contributors will become persona-non-grata in other communities, and it will be at least strongly implied (if not explicitly declared) that any project or community that collaborates with them will suffer the same fate. With no supporters giving them recognition for work on the project, and being increasingly unable to have any of their work recognized and accepted by other projects, contributors will start to drop out, both of the targeted project itself and quite often the entire FOSS community as well.

Step 4: Attack the person

Quite often the previous steps are enough to destroy a project. With supporters not showing support, and contributors stopping their contribution, it would take a very strongly-willed person to carry on. If the project manages to continue with any measure of success, the attacks on the people at the core of it will become intense. Not many projects have survived to this point, so it’s difficult to give general descriptions of what will happen, but things will start to get very ugly, even to threats of violence. If the opposition group hasn’t begun to experience their own negative blow-back from what they’ve been doing, they will be able to continue these attacks until the people driving the project either give up, or are driven completely away from working in the community.

Prevention

All of this happens only when it’s allowed to happen. It isn’t inevitable, nor is it unstoppable. It can be stopped at any point along this path, if enough of us decide that it ought to be stopped. I hope I don’t have to write another post convincing anybody that it ought to be stopped.

Prevention starts with identifying that this is happening, which is the reason I detailed its progress above. Once we know that it is happening, and how far along it has gotten, we can start to roll it back and undo some of the damage already done.

Defend the person

If it has gotten all the way to the final attack on the head of the project we must, as a community, be outspoken in our defence of them as people and as members of our community. You don’t have to like their project, or even support it, to honestly give support to the person. Ugly attacks, threats of violence, and any other attack on a person or their character should never be tolerated, and we need to make sure everybody knows that it won’t be tolerated.

Support the contributors

When the project’s contributors are under attack, and people are trying to isolate them from the rest of the community, it is important for them to be welcomed by other groups and projects, to continue to give recognition and value to the contributions they make elsewhere. At this point we are in danger of losing them as FOSS contributors, not just to one project but to any project, current or future. Even if you don’t see value in their current work, the lost potential for future contributions should be enough to motivate you to give them your support.

Speak up

If you support a project, don’t be afraid or ashamed to let people know it. When the supporters are being mocked or insulted, they need to hear from each other, or else feel alone in their support. Let them know they are not alone, let them know that you are not afraid to be seen supporting that project. It only takes a few voices, a few brave voices who refuse to be quieted, to make the others feel confident enough to do likewise.

Don’t be afraid

The best, and easiest, time to prevent this from happening is at the very start, when that initial opposition tries to turn you against a project. Whenever somebody starts to tell you that a project shouldn’t exist, or that its existence is going to be bad for you, be immediately skeptical. When they want you to be angry or afraid of it, you should be questioning their intention. Especially when the code is open, it’s nearly impossible that it’s going to be able to cause you any real or lasting harm. Don’t let your emotions be hijacked by those who want to use you for their own purposes. Keep calm, use what works for you, make something better if you don’t like what’s available.

18 November, 2014 10:00AM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 392

Welcome to the Ubuntu Weekly Newsletter. This is issue #392 for the week November 10 – 16, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

18 November, 2014 02:08AM by José Antonio Rey

November 17, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S07E33 – The One with the Late Thanksgiving

Join Laura Cowen and Mark Johnson in Studio L for season seven, episode thirty-three of the Ubuntu Podcast! We have some rather iffy sound problems that get a bit better if you stick with it past the introduction. Mark explains a bit more at the start. Thanks for listening!

In this week’s show:-

We’ll be back next week, when we’ll be talking about the ind.ie (https://ind.ie/) crowdfunding project and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

17 November, 2014 09:45PM

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2014.11 available

We just released Grml 2014.11 - Gschistigschasti.

This Grml release provides fresh software packages from Debian testing (AKA jessie). As usual it also incorporates up2date hardware support and fixes known bugs from the previous Grml release.

More information is available in the release notes of Grml 2014.11.

Special thanks to Christian Hofstaedtler for helping us in solving a big release stopper caused by udev's net.agent on systems without systemd (AKA #754987).

Grab the latest Grml ISO(s) and spread the word!

Thanks everyone and happy grml-ing!

17 November, 2014 09:25PM by Michael Prokop (nospam@example.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Svetlana Belkin: Adventures With Conky

For some reason, I wanted a desktop wallpaper with a calendar, but I quickly figured out that if the calendar is part of the wallpaper, I need to change it every month. I’m also on Linux, so I don’t have Rainmeter; instead I have Conky:

Screenshot from 2014-11-17 15:32:21

I used the Seamod widget from Conky Manager, and the calendar is part of my conkyrc file HERE. Wallpaper is HERE.

Please note that there are some issues with the calendar displaying correctly; it’s an Ubuntu issue. I still have a bit to tweak, mainly making the calendar a bit wider.
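
For anyone trying the same thing, the calendar part of a conkyrc usually comes down to running the cal command through a monospace font so the columns line up; a rough sketch using the pre-1.10 conky syntax (the font and refresh interval are just examples, and this line belongs in the TEXT section):

${font DejaVu Sans Mono:size=11}${execi 3600 cal}${font}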


17 November, 2014 09:20PM

hackergotchi for Whonix

Whonix

Whonix 9.4 Maintenance Release

Existing users can upgrade the usual way using apt-get, see also: https://www.whonix.org/wiki/Security_Guide#Updates

The post Whonix 9.4 Maintenance Release appeared first on Whonix.

17 November, 2014 02:52PM by Patrick Schleizer

hackergotchi for Ubuntu developers

Ubuntu developers

Michael Hall: The Ubuntu Online Summit: A Community Success

Last week was our second ever Ubuntu Online Summit, and it couldn’t have gone better. Not only was it a great chance for us in Canonical to talk about what we’re working on and get community members involved in the ongoing work, it was also an opportunity for the community to show us what they have been working on and give us an opportunity to get involved with them.

Community Track leads

This was also the second time we’ve recruited track leads from among the community. Traditionally leading a track was a responsibility given to one of the engineering managers within Canonical, and it was up to them to decide what sessions to put on the UDS schedule. We kept the same basic approach when we went to online vUDS. But starting with UOS 14.06, we asked leaders in the community to help us with that, and they’ve done a phenomenal job. This time we had Nekhelesh Ramananthan, José Antonio Rey, Svetlana Belkin, Rohan Garg, Elfy, and Scarlett Clark take up that call, and they were instrumental in getting even more of the community involved.

Community Session Hosts

More than a third of those who created sessions for this UOS were from the community, not Canonical. For comparison, in the last in-person UDS, less than a quarter of session creators were non-Canonical. The shift online has been disruptive, and we’ve tried many variations to try and find what works, but this metric shows that those efforts are starting to pay off. Community involvement, indeed community direction, is higher in these Online Summits than it was in UDS. This is becoming a true community event: community focused, community organized, and community run.

Community Initiatives

The Ubuntu Online Summit wasn’t just about the projects driven by Canonical, such as the Ubuntu desktop and phone; there were many sessions about projects started and driven by members of the community. Last week we were shown the latest development on Ubuntu MATE and KDE Plasma 5 from non-Canonical-led flavors. We saw a whole set of planning sessions for community developed Core Apps and an exciting new Component Store for app developers to share bits of code with each other. For outreach there were sessions for providing localized ISOs for loco teams and expanding the scope of the community-led Start Ubuntu project. Finally we had someone from the community kick off a serious discussion about getting Ubuntu running on cars. Cars! All of these exciting sessions were thought up by, proposed by, and run by members of the community.

Community Improvements

This was a great Ubuntu Online Summit, and I was certainly happy with the increased level of community involvement in it, but we still have room to make it better. And we are going to make it better with help from the community. We will be sending out a survey to everyone who registered as attending for this UOS to gather feedback and ideas, please take the time to fill it out when you get the link. If you attended but didn’t register there’s still time, go to the link above, log in and save your attendance record. Finally, it’s never too early to start thinking about the next UOS and what sessions you might want to lead for it, so that you’re prepared when those track leads come knocking at your door.

17 November, 2014 10:00AM

Ronnie Tucker: Ubuntu Touch Music App Is Proof That Total Ubuntu Convergence Is Getting Closer – Gallery

While other platforms like Windows or iOS are still working towards their convergence goal, Canonical is already there and the developers now have applications that work both on the mobile and on the desktop platform without any major modifications. One such example is the Ubuntu Touch Music App, which looks and feels native on both operating systems.

For now, Canonical is working on Ubuntu for phones and Ubuntu for desktop. Before long, however, the projects will be folded into a single one, probably in a couple of years. Until then, the biggest change that we’re seeing due to this convergence policy is the fact that applications for Ubuntu Touch don’t really have a problem running on the desktop.

The Ubuntu Touch Music App 2.0 is the same as the one you can find on the mobile platform, but there are some perks if you run it on the desktop. Users can resize it and work much more easily with the playlist, which is a nice thing to have. In any case, it only runs on Ubuntu 14.10 (Utopic Unicorn), so that’s the only way to test it.

Source:

http://news.softpedia.com/news/Ubuntu-Touch-Music-App-Is-Proof-that-Total-Ubuntu-Convergence-Is-Getting-Closer-464595.shtml

Submitted by: Silviu Stahie

17 November, 2014 07:17AM

hackergotchi for TurnKey Linux

TurnKey Linux

Vim smartopen plugin adds Vim support for CDPATH

Overview

Ever since I discovered CDPATH last year I've been thinking wouldn't it be great if Vim could access files using CDPATH without having to chdir anywhere first. In other words, why can't Vim understand that it has to look for tklbam/restore.py in the CDPATH instead of telling me that it doesn't exist in the current working directory?

I started experimenting with Vim hooks and eventually figured out how to implement this as a generic mechanism that defines a more useful way for Vim to access the filesystem.

Throw in a Python-implemented autocomplete algorithm and, in a nutshell, that's how my Vim smartopen plugin works.

Configuration

In your bashrc file:

export _CDPATH=$CDPATH
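
CDPATH itself still has to be set for any of this to be useful; a minimal, hypothetical setup (the directories are just examples):

# hypothetical CDPATH: current directory first, then a couple of project trees
export CDPATH=.:~/projects:~/turnkey
export _CDPATH=$CDPATH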

Path lookup algorithm

  1. Try to lookup path in cdpath

  2. If no such file exists, try looking up path as a tag

  3. If no such tag exists, assume it's a new file

    Lookup the path of this new file by trying to find its parent directory in the cdpath. If it doesn't have a parent directory, assume the new file should be created in the current working directory.

Special cases:

/path   path is absolute (not looked up)
./path  path is relative to the current working directory (not looked up)
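
To make that concrete, here is a walk-through of how a few lookups resolve with the :O[pen] command described under Usage below; the CDPATH value and file locations are assumptions for illustration:

# assuming CDPATH=.:~/turnkey and the current working directory is /tmp:
#   :Open tklbam/restore.py   -> ~/turnkey/tklbam/restore.py  (found via cdpath, step 1)
#   :Open /etc/hosts          -> /etc/hosts                   (absolute path, not looked up)
#   :Open ./notes.txt         -> /tmp/notes.txt               (relative path, not looked up)
#   :Open tklbam/newfile.py   -> ~/turnkey/tklbam/newfile.py  (new file; parent found via cdpath, step 3)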

Usage

commands:

:O[pen]         [ <path> ]
:Ta[bOpen]      [ <path> ]
:Sp[litOpen]    [ <path> ]

if no <path> argument provided:
    defaults to taking <path> from word under cursor

Note: shell-style autocomplete is supported, but only for filesystem paths, not tags.

key bindings:

gf      open file or tag (under cursor)
CTRL-]  open file or tag (under cursor)

<C-W>f  open file (under cursor) in split window
<C-T>   go back

Note: overloaded vim native keybindings, with new enhanced functionality.

mouse bindings (browser-inspired):

left double click to open link

    doubleclick         open
    shift-doubleclick   split open
    ctrl-doubleclick    tab open (like in a browser)

ctrl-rightclick         go back

17 November, 2014 05:10AM by Liraz Siri