July 30, 2014

hackergotchi for SolydXK

SolydXK

New ISOs!

The Home Editions were upgraded to the latest Upgrade Pack and the Business Editions were upgraded with the latest security updates. This time I will not list the version changes of the major applications, but limit myself to the most important changes.

  • Debian has started to move testing to systemd. The Home Editions use systemd, while the Business Editions continue to use sysvinit. For the Home Editions you will notice the difference during boot, but especially during shutdown, which now takes a lot less time. We could still use your help to improve boot time, though: Samba is on by default, and that does not help boot time.

    If you’re really into optimizing boot time, you can start by analyzing the output of these commands (a short follow-up example appears after this list):

      systemd-analyze blame
      systemd-analyze critical-chain

  • As of the last update, kdenext has been removed from SolydK. We are now tracking Debian KDE.

  • The multimedia repository (deb-multimedia) has been removed from the ISOs. Some of the multimedia packages and codecs were added to our own repository. These packages might or might not be legal to use in your country; you can install them by checking the Multimedia check box during installation. This option is selected by default.

    The current multimedia repository is still available, and you can continue to use it. If you want to remove the multimedia repository from your current system and replace its packages with Debian equivalents, you can use Grizzler’s script as described in this tutorial: http://forums.solydxk.com/viewtopic.php?f=9&t=4367

  • The KDE Display Manager (KDM) has been replaced with LightDM. If you want to replace KDM with LightDM on your current system, you can follow this tutorial: http://forums.solydxk.com/viewtopic.php?f=9&t=4368.
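Following up on the boot-time tip above: if systemd-analyze blame shows a service you do not need near the top of the list, disabling that unit is usually the quickest win. A minimal sketch (the smbd.service name is only a hypothetical example; substitute whatever blame actually reports on your system):

      systemd-analyze blame | head -n 5
      # hypothetical unit name - only disable services you are sure you don't need
      sudo systemctl disable smbd.service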

 

You can find more information and download the ISOs on our product pages:
SolydX Business Edition: http://solydxk.com/business/solydxbe/
SolydK Business Edition: http://solydxk.com/business/solydkbe/
SolydK Back Office: http://solydxk.com/business/solydkbo/
SolydX: http://solydxk.com/homeedition/solydx/
SolydK: http://solydxk.com/homeedition/solydk/

For any questions or issues, please visit our forum: http://forums.solydxk.com/

 

Note: Not all mirrors are updated yet.

 

30 July, 2014 07:31PM by Arjen Balfoort (Schoelje)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu App Developer Blog: A faster Ubuntu positioning system with Nokia HERE

OSM GPS dump

We’re very excited to announce an agreement with Nokia HERE to provide A-GPS support on Ubuntu. The new platform service will enable developers to obtain accurate positioning data for their location-based apps in under two minutes, a significantly shorter Time To First Fix (TTFF) than the average for raw GPS technologies.

Faster positioning

While Ubuntu already features GPS-based location, it has always been a key requirement for the OS to provide application developers with rapid and efficient location positioning capabilities.

The new positioning service will be a hybrid solution integrating A-GPS and WiFi positioning, a powerful combo to help obtain a very fast and accurate TTFF. The system is to be functional by the Release To Manufacturer (RTM) milestone, and available on the regular Ubuntu builds and for retail phones shipping Ubuntu.

Privacy and security

With the user’s explicit consent, anonymous data related to signal strength of local WiFi signals and radio cells can be contributed to crowd-sourcing location services, with the purpose of improving the overall quality of the positioning service for all users.

In line with Ubuntu’s privacy policy, no personal data of any nature is to be collected and released. Users will also be able to opt-out of this service if they do not wish their mobile handset to collect this type of data.

The positioning system will also be run under strict confinement, so that the service and its data cannot be accessed without the user explicitly granting access. With Ubuntu’s trust model, a confined application has to be granted trust by the user to gain access to security- or privacy-relevant system components.

Mapping capabilities

As the new service is to be focused on positioning, it will be decoupled from any mapping solution. Ubuntu Developers, as before, will have a choice of mapping services to use for their applications, including Nokia HERE, OpenStreetMap and others.

Header image based on “openstreetmap gps coverage” by Steven Kay, CC-BY-SA 2.0.

30 July, 2014 09:09AM

hackergotchi for Cumulus Linux

Cumulus Linux

Dell User Forum: While the Beach Beckons, It’s Far Better Inside!

What a great event Dell put on with its Enterprise User Forum in Hollywood, Florida!!

When I arrived, my first view of the event was a grand entry into the Westin Diplomat Resort & Spa revealing a glistening blue ocean straight through the lobby. On the back deck were umbrellas and chaise loungers around the pool with an awe-inspiring view of the beach. The white sand beach was only a few short steps from there. I thought to myself, “Are you kidding me? How am I going to stay focused on this event?” I was seriously considering the dilemma of having to work inside while paradise was only a few short steps away.

Dell User Forum View #DUF14

Well, Dell took care of that! With hundreds of customers and channel partners there to hear about Dell technology solutions, there was always a technology discussion to be had. Despite the sun-drenched pools and white sand beaches, I found the dialogue inside to be far better than downtime outside.

Dell offered several sessions covering networking. Tom Burns, Dell’s General Manager of the Networking Business, talked about Cumulus Networks during his keynote, where he highlighted our open approach to networking and how we fit into environments where end users aspire to have common deployment and management capabilities for both servers and networking. In another session he walked through the networking roadmap and discussed how he sees a large part of the market moving toward open networking solutions, like those provided by Cumulus Networks. And then in his open networking session, he talked about how Cumulus Networks delivers more than just an open networking operating system. He discussed the concept of lowering operating expense as a result of having an agile data center that allows for rapid innovation using a software-defined network. This approach stands in stark contrast to the closed, proprietary systems others are offering the market today, where APIs stand guard over vendors’ profits while keeping you from the innovation your organization deserves.

Along with these sessions, we had numerous opportunities to discuss Cumulus Linux with our existing customers and others interested in hearing more about us. It was great to hear all the ideas people had for using the technology, spanning from high performance clusters for the energy vertical to small businesses who need a lower-cost 10G switch. What customers valued most is the ability to use a native Linux operating system that can be managed using Linux commands and can be tightly integrated with their other applications. The depth of discussion was great and the interest in our solutions was even better.

Oh, and about that beach. Well, I never made it there. I did, however, get a chance to go through some email out on the umbrella-laden deck for a while. Not exactly living la vida loca, but I had something better going on inside!

The post Dell User Forum: While the Beach Beckons, It’s Far Better Inside! appeared first on Cumulus Networks Blog.

30 July, 2014 08:00AM by Larry Hart

hackergotchi for Ubuntu developers

Ubuntu developers

Duncan McGreggor: OSCON 2014 Theme Song - Andrew Sorensen's Live Coding Keynote

Andrew Sorensen live-coding at OSCON 2014
Keynote

Shortly after Andrew Sorensen began the performance segment of his keynote at OSCON 2014, the #oscon Twitter topic began erupting with posts about the live coding session. Comments, retweets, and additional links persisted for that day and the next. In short, Andrew was a hit :-)

My first encounter with Andrew's work was a few years ago when I was getting back into Lisp. I was playing with generative music with Overtone (and then, a bit later, experimenting with SuperCollider, Hy, and Twisted) and came across his piece A Study in Keith. You might want to take a break from reading this post and watch that now ...

When Andrew started up his presentation, I didn't immediately recognize him. In fact, when the code was displayed on the big screens, I assumed it was Clojure until I looked closely and saw he was using (define ...) and not (defun ...). This seemed very familiar, and then I remembered Impromptu, which ultimately led to my discovery of Extempore (see more links below) and the realization that this is what Andrew was using to live code.

At the end of the performance a bunch of us jumped up and gave a standing ovation. (In fact, you can hear me yell out "YEAH" at the end of his presentation when he says "And there we go.") It was quite a show. It seemed OSCON 2014 had been given a theme song. The next step was getting the source code ...


Andrew's gist (Dark Github Theme)
Sharing the Code

Andrew gave a presentation on Extempore in the ballroom right after the keynote. This too was fantastic and resulted in much tweeting.

Afterwards a bunch of us went up front and chatted with him, enthusing about his work, the recent presentation, the keynote, and his previously published pieces.

I had Andrew's ear for a moment, and asked him if he was interested in sharing his keynote source -- there had been several requests for it on Twitter (that also got retweeted and/or favourited). Without hesitation, he gave an enthusiastic "yes" and we were off and running for the lounge where we could sit down to create a gist (and grab a cappuccino!). The availability of the source was announced immediately, to the delight of many.


Setting Up Extempore

Sublime Text 3 connected to Extempore
Later that night in my hotel room, I had time to download and run Extempore ... and discovered that I couldn't actually play the keynote code, since there was some implicit setup I was missing. However, after some digging around on the docs site and the mail list, music was pouring forth from my laptop -- to my great joy :-D

To ensure anyone else who is not familiar with Extempore can also have this pleasure, I've put together all the prerequisites and setup necessary in a forked gist, in multiple parts. I will go through those in this blog post. Also: all of my testing and live coding was done using Ben Swift's Extempore Sublime Text plugin.

The first step is getting all the dependencies. You'll want to start the downloads right away, since they are large (the sample files are compressed .wavs). While that's going on, you can install Extempore using Homebrew (this worked for me on Mac OS X with no additional tweaking/configuration necessary):
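For reference, the Homebrew step I ran was roughly the following (the formula name is an assumption on my part; check the Extempore docs or your tap if it differs):

    brew install extempore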

With Extempore running, let's do some setup. We're going to need to:

  • load some libraries (this takes a while for them to compile),
  • define some samples, and then
  • define some musical note aliases for convenience (and visual clarity).
The easiest way to use the files below is to clone the gist repo and load them up in Sublime Text, executing blocks of text by highlighting them and then pressing ^x^x.

Here is the file for the first two bullets mentioned above:


You will need to edit this file to point to the locations where your samples were downloaded. Also, at the very end there are some lines of code you can execute to make sure that your samples are working.

Now let's define the note aliases. You can just highlight the entire contents of this file in Sublime Text and then ^x^x:
At this point, we're ready to play!


Playing the Music

To get started on the music, open up the fourth file from the clone of the gist and ^x^x the root, scale, and left-hand-notes-* constants.

Here is the evolution of the left hand part:
Go ahead and start that first one playing (^x^x the definition as well as the call). Wait for a bit, and then execute the next one, etc. Once you've started playing the final left hand form, you can switch to the wider range of notes defined/updated at the bottom.

Next, you'll want to bring in the right hand ... then bassline ... then the higher fmsynth sparkles for the right hand:

Then you'll increase the energy with the drum section:

Finally, you'll bring it to the climax, and then start the gentle fade out:

A slightly modified code listing for the final keynote form is here:


Variation on a Theme

I have recorded a variation of Andrew's keynote based on the code above, for your listening pleasure :-) You can listen to it in your browser or download it.

This version plays part of the left hand piano an octave lower. There's a tiny bit of clipping in places, and I accidentally jazzed it up (and for too long!) with a hi-hat change in the middle. There are also some awkward transitions and volume oddities. However, these should be inspiration for you to make your own variation of the OSCON 2014 Theme Song :-)

The "script" used for the recording can be found here.


Links of Note

Some of these were mentioned above, some haven't been. All relate to Extempore :-)


30 July, 2014 06:14AM by Duncan McGreggor (noreply@blogger.com)

hackergotchi for TurnKey Linux

TurnKey Linux

Audio video codecs: a video editor's tutorial introduction to codecs

What's a codec?

Raw uncompressed digital video contains a huge amount of information: 3 bytes per pixel translates into roughly 240Mbit/s for standard definition video, 504Mbit/s for standard HD video (720p), and 1136Mbit/s for full HD video (1080p).

Even at just standard definition (DVD) you'd need over 200GB to store a two-hour uncompressed movie, which isn't practical for most applications.
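To sanity-check those figures, here is the back-of-the-envelope arithmetic as a shell sketch (assuming PAL-resolution SD, 720x576 pixels at 25 frames per second; exact frame size and rate vary by standard):

    # raw bitrate: width x height x 3 bytes x 8 bits x 25 fps
    echo $((720 * 576 * 3 * 8 * 25 / 1000000))       # ~248 Mbit/s
    # storage for a two-hour (7200 second) movie
    echo $((720 * 576 * 3 * 25 * 7200 / 1000000000)) # ~223 GB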

General purpose compression algorithms (e.g., zip, gzip, bzip2) don't work well on video/audio, but fortunately we've invented these marvelous domain-specific CODEC (short for COmpression and DECompression) algorithms which can achieve a 100:1 compression ratio (and higher) without a visible loss of quality. I've even seen examples of videos that achieve an almost magical 1000:1 compression ratio while still maintaining very good quality.

One of the things that is likely to confuse a newcomer (it sure confused me) is the huge variety of codecs available and their configuration options.

How do I choose a codec?

It depends on your purpose. Codecs are not created equal. They're often different because they are designed to be well suited for different purposes (e.g., real-time video conferencing vs streaming vs video editing). Each Codec has its own limitations.

In general, you always want to use the best tool for the job and codecs are no exception.

Codecs can generally be divided into two categories: lossless and lossy.

Lossless vs lossy codecs

  1. Lossless (e.g., FFV1, HuffYuv): Like zip for general files or PNG for images, there is no loss of information (i.e., quality), so the decompressed video is bit-for-bit identical to the original video.

    Lossless codecs are good for editing and archiving "master" copies.

  2. Lossy codecs (e.g., H.264, MP4 ASP): can achieve much better bitrates, but the decompressed video is not identical to the original video, though it may look identical (or very close) to the naked eye.

For example, to simulate editing with a lossy codec I re-encoded the same video a few dozen times to see what the quality degradation would look like; after a few passes the degradation is definitely visible to the naked eye. You don't want to do that.

In general codecs are designed to make trade-offs between:

  1. CPU requirements (how many FPS you can encode/decode)
  2. bitrate for a given resolution/quality
  3. implementation complexity

So you can optimize a codec for streaming to squeeze out the best possible video quality for a given bitrate, but that means you need to throw much more computational resources at it to encode the video and maybe to decode it as well.

Best lossless codecs for editing and storage

Audio: WAV (PCM) or FLAC

Video editing: FF HuffYuv (supported by Avidemux). Poor compression (about 112 Mbit/s), but it's fast and it is an intra-frame-only format (every frame is a key frame), which makes it easier to work with in a video editor (e.g., you can quickly access each frame individually without having to compute it in reference to a key frame).

Video archiving: FFV1. An intra-frame lossless format with much better compression than HuffYuv (around 20 Mbit/s), but much slower to encode/decode and harder to work with.
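If you use FFmpeg, producing an editing copy and an archive copy might look roughly like this (a hedged sketch: the file names are placeholders and the container choices are just one reasonable option):

    # editing copy: HuffYuv video + uncompressed PCM audio in an AVI container
    ffmpeg -i input.mp4 -c:v huffyuv -c:a pcm_s16le editing-copy.avi
    # archive copy: FFV1 video + FLAC audio in a Matroska container
    ffmpeg -i input.mp4 -c:v ffv1 -c:a flac archive-copy.mkv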

Best lossy codec for streaming

Audio: AAC (open format, better quality for a given bitrate than MP3). For simple speech audio I personally can't hear the difference between high bitrates and low bitrates.

Video: H.264 (AKA MP4 AVC). Beats everything else hands down in terms of quality for a given bitrate but much more computationally intensive than algorithms such as MP4 ASP.

H.264 is highly configurable so your results may vary depending on the configuration settings and the type of video. I find it much easier to use Avidemux to configure H.264 encoding than FFmpeg.

It's an amazing codec. With clever configuration it seems to be possible to get some 720p HD videos down to under 550 kbps (over 1000:1 compression!) while still providing very good quality.
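For a streaming-oriented encode, a hedged FFmpeg starting point (the quality settings are illustrative rather than tuned, and older FFmpeg builds need -strict experimental to use the native AAC encoder):

    ffmpeg -i master.avi -c:v libx264 -preset slow -crf 23 \
           -c:a aac -strict experimental -b:a 128k output.mp4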

30 July, 2014 05:15AM by Liraz Siri

July 29, 2014

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Pruning Syslog entries from MongoDB

I previously announced the availability of rsyslog+MongoDB+LogAnalyzer in Debian wheezy-backports. This latest rsyslog with MongoDB storage support is also available for Ubuntu and Fedora users in one way or another.

Just one thing was missing: a flexible way to prune the database. LogAnalyzer provides a very basic pruning script that simply purges all records over a certain age. The script hasn't been adapted to work within the package layout. It is written in PHP, which may not be ideal for people who don't actually want LogAnalyzer on their Syslog/MongoDB host.

Now there is a convenient solution: I've just contributed a very trivial Python script for selectively pruning the records.

Thanks to Python syntax and the PyMongo client, it is extremely concise: in fact, here is the full script:

#!/usr/bin/python

import syslog
import datetime
from pymongo import Connection

# It assumes we use the default database name 'logs' and collection 'syslog'
# in the rsyslog configuration.

with Connection() as client:
    db = client.logs
    table = db.syslog
    #print "Initial count: %d" % table.count()
    today = datetime.datetime.today()

    # remove ANY record older than 5 weeks except mail.info
    t = today - datetime.timedelta(weeks=5)
    table.remove({"time":{ "$lt": t }, "syslog_fac": { "$ne" : syslog.LOG_MAIL }})

    # remove any debug record older than 7 days
    t = today - datetime.timedelta(days=7)
    table.remove({"time":{ "$lt": t }, "syslog_sever": syslog.LOG_DEBUG})

    #print "Final count: %d" % table.count()

Just put it in /usr/local/bin and run it daily from cron.

Customization

Just adapt the table.remove statements as required. See the PyMongo tutorial for a very basic introduction to the query syntax and full details in the MongoDB query operator reference for creating more elaborate pruning rules.

Potential improvements

  • Indexing the columns used in the queries
  • Logging progress and stats to Syslog
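For the first item, a hedged sketch of creating the indexes from the mongo shell (assuming the default 'logs' database and 'syslog' collection names used above):

    mongo logs --eval 'db.syslog.ensureIndex({time: 1})'
    mongo logs --eval 'db.syslog.ensureIndex({syslog_fac: 1})'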


LogAnalyzer using a database backend such as MongoDB is very easy to set up and much faster than working with text-based log files.

29 July, 2014 06:27PM

Ubuntu Server blog: Server team meeting minutes: 2014-07-29

Meeting Actions

None

U Development

The discussion about “U Development” started at 16:00.

  • Feature freeze is August 21. Note Debian Import Freeze is coming up as well.
  • The mysql /var/lib/mysql discussion is proceeding, but it seems unlikely that this will happen by feature freeze now. Nevertheless, we expect to land 5.6 in main in the same manner as 5.5 is currently on schedule.
  • http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html – please remember to keep your blueprints updated with work item progress and re-plan milestones if things slip.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:03.

  • No updates

Weekly Updates & Questions for the QA Team (psivaa)

The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:05.

  • No updates

Weekly Updates & Questions for the Kernel Team (smb, sforshee)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee)” started at 16:05.

  • James Page reports that iscsitarget 12.04 DKMS updates for HWE kernels are ready and uploaded to trusty-proposed awaiting SRU team review (bug 1262712)
  • The KSM on NUMA + KVM bug (1346917) is making great progress, driven by Chris Arges. Brad Figg reports that an upload to trusty-proposed is imminent, and it should land on August 8th (the day after 12.04.5). 12.04.5 (for the HWE kernel) won’t include the update, but one will be available for it the next day.
  • For kernel SRU cadence updates, see

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:17.

  • rbasak noted that the Canonical Server Team have been sprinting in #ubuntu-server on Fridays to complete merges, including mentoring and sponsoring, and that all are welcome to join them.

Open Discussion

The discussion about “Open Discussion” started at 16:18.

  • James Page reported that there are plans to SRU docker 1.0.x to 14.04 in bug 1338768. The proposed upload is in a PPA and awaiting review from the SRU team. Testers are encouraged to try it out.

Agree on next meeting date and time

Next meeting will be on Tuesday, August 4th at 16:00 UTC in #ubuntu-meeting. Note that this was stated incorrectly in the meeting itself. The chair will be Liam Young.

29 July, 2014 05:46PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 29, 2014

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140729 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc7 and uploaded to the archive, i.e. linux-3.16.0-6.11. Please test and let us know your results. I also want to mention that 14.04.1 was released last Thursday, July 24, and 12.04.5 is scheduled to release next Thursday, Aug 7.
—–
Important upcoming dates:
Thurs Aug 07 – 12.04.5 (~1 week away)
Thurs Aug 21 – Utopic Feature Freeze (~3 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

29 July, 2014 05:18PM

Kubuntu Wire: Rohan on ubuntuonair.com

Kubuntu Ninja Rohan was on today’s ubuntuonair talking about Plasma 5 and what is happening in Kubuntu.  Watch it now to hear the news.

 

29 July, 2014 04:07PM

Svetlana Belkin: Ubuntu Leadership Team: Team Leaders Wanted

On behalf of the Ubuntu Leadership team, I’m putting out a call for the team leaders of the various teams that make Ubuntu and its flavours possible. The reason for this call is simple: to find out, as a team, what problems exist in each team’s leadership and what works. In turn, these problems can be discussed so that solutions can be found, and those solutions can then be shared via the Ubuntu Leadership wiki for other folks to read.

What teams are needed:

  • The developer, translation, marketing, and other teams that are key to the success of Ubuntu and its flavours
  • LoCo Leaders/Point of Contacts
  • Ubuntu Women Elected Leaders
  • And other team leaders can of course join in!

How to join:

It’s your choice whether you want to join the Ubuntu Leadership team on Launchpad; it’s the mailing list that is important! What you need to put in your message is: who you are, which team you lead, what leadership problems you are facing or what is working for you, and what questions/comments you have about the problems you are facing. For the subject, use your standard new member introduction or perhaps “Leader from [insert team name here]”.

Extra Links:

Ubuntu Leadership mailing-list idea on this


29 July, 2014 03:39PM

hackergotchi for Blankon developers

Blankon developers

Herpiko Dwi Aguno: Extremely Loud & Incredibly Close: Mr. Black’s Hearing

I really like this book.

Instead of writing a review, I want to write about something odd in this book.

He led me to the kitchen table, which was where our kitchen table was, and he sat down and slapped his hand against his knee. “Well!” he said, so loudly that I wanted to cover my ears.

Let's look at another excerpt,

It was only then that I observed that the key was reaching toward the bed. Because it was relatively heavy, the effect was small. The string pulled incredibly gently at the back of my neck, while the key floated just a tiny bit off my chest. I thought about all the metal buried in Central Park. Was it being pulled, even if just a little, to the bed? Mr. Black closed his hand around the floating key and said, “I haven’t left the apartment in twenty-four years!” “What do you mean?” “Sadly, my boy! I mean exactly what I said! I haven’t left the apartment in twenty-four years! My feet haven’t touched the ground!” “Why not?” “There hasn’t been any reason to!” “What about stuff you need?” “What does someone like me need that he can still get!” “Food. Books. Stuff” “I call in an order for food, and they bring it to me! I call the bookstore for books, the video store for movies! Pens, stationery, cleaning supplies, medicine! I even order my clothes over the phone! …

Then,

“I’ve been reading your lips!” “What?” He pointed at his hearing aids, which I hadn’t notice  before, even though I was trying as hard as I could to notice everything. “I turned them off a long time ago!” “You turned them off?” “A long, long time ago!” “On purpose?” “I thought I’d save the batteries!”…

Mr. Black speaks loudly because he is deaf. He reads lips. So how does he order food and his other necessities over the phone if he is deaf? Unless he can say everything at once, completely, so that nobody needs to ask him anything back. But that is hard to imagine.

This book is odd, but I like it. Later I will post about the unusual things in this book. Some of them should make this book difficult to translate into other languages. Strangely enough, a translation already exists.

  • http://www.bukukita.com/Non-Fiksi-Lainnya/Non-Fiksi-Umum/78712-Extremely-Loud-&-Incredibly-Close.html
  • http://sepetaklangitku.blogspot.com/2012/02/extremely-loud-incredibly-close.html

29 July, 2014 01:05PM

hackergotchi for TurnKey Linux

TurnKey Linux

Video editing with avidemux and audacity

Not too long ago I explored free software video editing tools for a video demo production I was working on. I was finding it impossible to shoot the whole video in one take, without major goofs, in a reasonable amount of time while also narrating what I was doing. As usual, I was having trouble because I was trying to do too many things at once without being willing to compromise on quality. When I realized this I decided to be practical and break the production down into bite-sized chunks. I'd shoot each section of the video as a separate "take" and stitch the best cuts together into a single video. Then I'd edit out the goofs, accelerate the boring parts and add narration. That was the plan, anyhow.

After surveying the various free software video editing tools available (most sucked), I selected the following for my video editing toolkit:

  1. avidemux: for video editing and encoding/decoding
  2. audacity: for audio capture and editing
  3. ffmpeg: for complementary transcoding (I used the latest version compiled from source)

Digital video terminology

A few basic video editing terms and definitions before we get started:

  • FPS: frames per second

  • Codec: COmpression DECompression algorithm for audio or video, which can be lossy (some information is lost during compression) or lossless (the decompressed output is bit-for-bit identical to the original input).

    Examples:

    • video codecs:
      • lossy: H.264, MP4 ASP
      • lossless: HuffYuv, FFV1
    • audio codecs:
      • lossy: MPEG 1 audio layer 3 (MP3), AAC
      • lossless: PCM (AKA WAV), FLAC
  • Container format: Often confused with codecs, and sometimes even called by the same name but not the same thing. These are file formats that can contain streams of video, audio and subtitles / data.

    Examples: AVI, ASF (Microsoft), MOV (quicktime), FLV (flash video)

Audio video editing tools

Best free software audio editing tool: Audacity

This was an easy one: just use Audacity. It's an excellent piece of open source software with a rich feature set that was pretty easy to learn how to use - even for an audio novice like me.

If you're just doing basic voice narration and don't want to invest in a good microphone, the next best thing would be to use the noise removal and normalization plugins to clean up voice capture from a simple microphone.

Best free software video editing: Avidemux

For video editing I recommend Avidemux: again, I explored lots of programs to do this on Linux but discovered the state of video editing on Linux to be somewhat dismal at the moment. Avidemux sucked the least. It won't win any usability awards but it is a simple, no-frills video editor that gets the job done.

Features:

  • playback

  • very basic video editing (delete, copy, paste)

    Salient points:

    • There are two markers A and B and all operations work on what's in between (including save!)
    • Watch out: cut doesn't actually work like you would expect. It's equivalent to delete (you need to remember to copy first)
    • When you paste a video segment it is inserted at the current cursor position.
    • You can't copy paste between different video files, but you can simulate this by appending a video and then copy/cut/pasting it into the desired position.
  • encoding (easier to use and configure than ffmpeg)

    • You can save the existing audio track to a separate file
    • You can remove or add an audio track from an external file
  • Video filters: written as modules and sort of Unixish in how they are used. You add one filter after the other in a pipeline and configure how each will transform the video.

A source of frustration for me when I started exploring Avidemux was that it is a low level program that assumes you know exactly what you are doing and does exactly what you say, not necessarily what you mean. It doesn't coddle you.

If you're not a video/audio expert many things don't work as you would expect. For example, if you've loaded a video compressed with an inter-frame codec (i.e., a codec that saves space by calculating frames in reference to a preceding key frame), the video cursor will move between key frames rather than between frames. Converting to a codec more suited for editing (HuffYuv) solves this problem.

By working within the limitations of this simple tool and combining the available primitives cleverly you can get pretty far though.

It's not as easy as it could be, but you still have it pretty good when you consider that in the old days video cutting used to be done by manually cutting and stitching together 35mm film. Or worse - magnetic tape (where you can't even see the frames). If you're interested in this bit of technological history read the Wikipedia article on "linear video editing".

I recommend experimenting extensively on short 30 second clips until you understand how to perform the operations you want.

Encoding your video for streaming over the web

If you're going to host your creation on one of the many popular streaming video sites (e.g., YouTube), upload the highest possible quality the site supports. Most sites will send your video to a transcoding pipeline, which re-encodes the video in whatever selection of formats the site supports. Re-encoding is unfortunately a lossy operation, so the higher the quality of your upload, the better the results.

If you're hosting the video yourself you'll most likely want to encode your file in Flash Video (FLV). Though HTML5 video may one day obsolete Flash, currently Flash Video is still the most commonly supported format.

Newer versions of Flash support H.264 as a video codec, but it's a relatively new development so I encountered trouble getting avidemux to encode an H.264 stream into an FLV file directly. The workaround I used was to save the video in a different container format and then use a relatively new version of FFmpeg to repackage the video into FLV:

ffmpeg -i path/to/h264-input.avi -vcodec copy path/to/h264-output.flv

Unfortunately, H.264 inside Flash Video was new enough a combination that many programs on my system (e.g., vlc, avidemux) didn't support playing it back at the time of writing.

One thing to keep in mind if you're thinking of hosting your own video content is that flash video doesn't play itself. You need a Flash Video player. Video hosting sites such as YouTube have their own flash players. If you're hosting your own FLVs (or testing streaming locally) you'll need to pick a Flash player first. To test the result of my video transcoding voodoo I downloaded the Longtail Flash video player and embedded my video in a local HTML file. There are many other Flash players besides Longtail though, including a few free software ones.

Video streaming sites

Of the major video sites I took a look at (e.g., YouTube, Vimeo) blip.tv seems to be the only one that allows you to bypass their transcoders and upload an FLV you encoded directly, with no loss of quality.

This allows you to encode at much higher quality if you know what you're doing, and to bypass the lengthy waits for your video to pass through the transcoder queue (which can take hours on other sites - cough, Vimeo, cough).

Blip.tv also provides full control over your content, so you can delete it, edit it, monetize with advertisements (50/50 split with blip.tv), etc. They're even generous enough to let you opt your videos out of their advertisement program if you want.

29 July, 2014 05:30AM by Liraz Siri

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 376

Welcome to the Ubuntu Weekly Newsletter. This is issue #376 for the week July 21 – 27, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

29 July, 2014 04:45AM

Duncan McGreggor: The Future of Programming - Adopting The Functional Paradigm?

Series Links

Survivors' Breakfast

The previous post covered some thoughts on the future-looking programming themes present at OSCON 2014.

Following that wonderful conference, long-time Open Source advocate, Pythonista, and instructor Steve Holden was kind enough to host his third annual "OSCON Survivors' Breakfast" with tens of esteemed attendees, speakers, and organizers enjoying great company and conversation, relaxing together after the flurry of conference activity, planning a leisurely day in Portland, and -- most immediately -- having some much-needed breakfast.

The view from the 23rd floor was quite an eyeful, and the conversation ranged across equally panoramic topics. Sitting with Alex Martelli, Anna Ravenscroft, and Katie Miller, the conversation inevitably turned to thoughts programmatical. One thread of the discussion was so compelling that it helped crystallize this series of blog posts. That was kicked off with Katie's question:

Why [have some large companies] not embraced functional programming to the extent that other large ones have?

Multiple points of discussion spawned from this, some of which still continue. The rest of this post explores these. 


Large Companies?

What constitutes a large company? We settled on discussing Fortune 500 companies, which, by definition are:
  • U.S. Companies
  • Ranked by gross revenue (after adjustments for excise taxes).

Afterwards, I looked up the 2013 top 25 tech companies in the Fortune 500. I've listed them below; in parentheses is the Fortune 500 ranking. After the dash are the functional programming languages used on various company projects -- these are listed only if I have talked to someone who has worked on a project (or interviewed for a job that used the language), or if I have read an article by an employee who has stated that they use the listed language(s) [1].
  1. Apple (6) - Swift, Clojure, Scala
  2. AT&T (11) - Haskell
  3. HP (15) - F#, Scala
  4. Verizon Communications (16) - Scala
  5. IBM (20) - Scala
  6. Microsoft (35) - F#, F*
  7. Comcast (46) - Scala
  8. Amazon (49) - Haskell, Scala, Erlang
  9. Dell (51) - Erlang, Scala
  10. Intel (54) - Haskell, SML, PLT Scheme
  11. Google (55) - Haskell [2]
  12. Cisco (60) - Scala
  13. Ingram Micro (76) - ?
  14. Oracle (80) - Scala
  15. Avnet (117) - ?
  16. Tech Data (119) - ?
  17. Emerson Electric (123) - ?
  18. Xerox (131) - Scala
  19. EMC (133) - Scala
  20. Arrow Electronics (141) - ?
  21. Century Link (150) - ?
  22. Computer Sciences Corp. (176) - ?
  23. eBay (196) - Scala 
  24. TI (218) - ?
  25. Western Digital (222) - ?

The companies which have committed to projects guessed to be of significant business value written in FP languages include: Apple, HP, and eBay. Possibly also Oracle and Intel. So, a rough estimate of between 3 and 5 of the top 25 U.S. tech companies have made a significant investment in FP.

Why not Google?

The next two sections offer summaries of some views on this.


Ideal Use Case?

Is an FP language suitable for large organisations? Are smaller companies better served by them? During breakfast, it was postulated that dealing with such things as immutable data, handling I/O in pure FP languages, and creating/using higher order functions is easier for small startups due to the shorter amount of time required to hire or train a critical mass of skilled programmers.

It is certainly true that it will take larger organisations longer to train their personnel, simply due to sheer numbers and, even with enough trainers, logistics. But this argument can be made for any corporate level of instruction; in my book, this cancels out on both sides and is not an argument unique to hard topics, even less specifically pertinent to FP adoption.


Brain Fit?

I've heard this one a bit: "Some people just don't think in FP terms." They need loops and iteration, not higher order functions and recursion. Joel Spolsky makes reference to this in his article The Guerrilla Guide to Interviewing. In particular, he says that "For some reason most people seem to be born without the part of the brain that understands pointers." This has been applied to topics in FP as well as C.

To be fair, Joel's comment was probably made with a bit of lightness and not meant to be a statement on the nature of mind or a theory of cognition. The context of the article is a very practical one: hiring. When trying to identify whether a programmer would be an asset for your team, you're not living in the space of cognitive theory, rather you inhabit the realm of quick approximations, gut instincts, and fast, economical decisions.

Regardless, I find this perspective -- Type Physicalism [3] -- fairly objectionable. This is because I see it as a kind of intellectual "racism." Early social sciences utilized this form of reasoning to justify all sorts of discriminatory thinking in the name of "science", reinforcing a rigid mentality of "us" vs. "them." In my own experience, I've seen this sort of approach used to shut down exploration, to enforce elitism, and to dismiss ideas that threaten the authority of the status quo.

Rather than seeing the problem of comprehending FP as a physical limitation of the individual, I see instructional failure as the obstacle to overcome. If we start with the proposition that certain brains are deficient, we are essentially abandoning education. It is the responsibility of the instructor to engage creatively with each student's learning style. When adhering to the idea that certain brains are limited, one discards creative engagement; one doesn't even consider working with the students and their learning styles. This is a view that, however implicitly, can be used to shun diversity and dismiss potential.

I believe the essence of what Joel was shooting for can be approached in a much kinder fashion (adapted for an FP discussion):

None of us was born knowing GOTO statements, global state, mutable data, or for loops. There are many programmers alive, though, whose first contact with programming involved one or more of these. That's their "home town", as it were; their programmatic birth place. Having utilized -- as well as taught -- imperative, OOP, and functional styles of programming, I do not feel that one is intrinsically any harder than another. However, they are sometimes so vastly different from each other in style or syntax or semantics that once a student has solidified around the concepts of a particular paradigm, it can be a challenge retraining to work easily in another.


Why the Objections?

If both "ideal use case" and "brain fit" are given as arguments against adopting FP (or any other new paradigm) in large organisations, and neither are considered logically or philosophically valid, what's at the root of the resistance?

It is not uncommon for changes in an industry or field of study to be met with resistance. The bigger or more different the change from the status quo, the greater the resistance very often is. I suspect that this is really what we're seeing when companies take a stance against FP. There are very often valid business concerns: "we've made an investment in OOP" or "it will cost too much to train/hire/migrate to FP."

I would remind those company leaders, though, that new sources of revenue, that product innovation and changes in market adoption do not often come from maintaining or enforcing the current state. Instead, that is an identifying characteristic of companies whose relevance is fading.

Even if your company has market dominance or is a monopoly, there is still a good incentive for exploring alternative paradigms. At the very least, one can uncover inefficiencies and apply new knowledge to remove duplication of efforts, increase margins, etc.


Careers

As a manager, I have found that about half of the senior engineers up for promotion have very little to no interest in taking on different (new to them) programmatic paradigms. They consider current burdens sufficient (or too much) and would rather spend what little free time they have available to them in improving existing systems.

Senior engineers who have a more academic or research bent (or are easily bored) are much more likely to embrace this sort of change. Interestingly, senior engineers who have little to no competitive drive will more readily pick up something new if the need arises. This may be due to such things as not perceiving accumulated knowledge as territory to defend, for example.

Younger engineers with less experience (and less of an investment made in a particular school of thought) are much more willing to take on new challenges. I believe there are many reasons for this, one of which may include an interest in becoming more professionally competitive with their peers.

Junior or senior, I have found that programmers who are currently looking to find new employment are nearly invariably not only willing to take on the challenge of learning different paradigms, but are usually going about that proactively and engaging in self-study.

I want to work with programmers who can take on any problem space in any paradigm and find creative solutions, contributing as valued members of a team. This is certainly an ideal set of characteristics, but one that I have seen in the wilds of the workplace on multiple occasions. It has nothing to do with FP or OOP paradigms, but rather with the people themselves.

Even if a company is locked into well-established processes and views on programming, they may find it in their best interests to provide a more open-minded approach with their employees who would enjoy that. Their retention rates could very well increase dramatically.


Do We Need To?

Philosophy and hiring strategies aside, do we -- as programmers, software projects, or organizations that support programming -- need to take on the burden of learning or adopting functional programming? Quite possibly not.

If Google's plans around Go involve building a new operating system (in the spirit of 1970s C and UNIX), the systems programmers may find pure functions too cumbersome to work with. FP may be too burdensome a fit for that type of work.

If one is not tied to a historical analogy with UNIX, as Mozilla is not with Rust, doing something like creating a new browser engine (or running a remote services company) may be a good fit for FP, especially if one has data showing reduced error counts when using type systems.

As we shall see illustrated in the next post, the usual advice continues to apply: the decision of which paradigm to employ for any given project should be dictated by the best fit and not ideological inflexibility. The bearing this has on programming is innovation: it is the early adopters who have the best chance of leading us into the future.

Up next: Retrospective on Programming Paradigms
Previously: Themes at OSCON 2014


Footnotes

[1] If anyone has additional information as to which FP languages are used by these top 25 companies, please let me know, and I will include that information. Bonus points for knowing of business-critical applications.

[2] Google Switzerland are using Haskell.

[3] Type Physicality is a form of reductive materialism, also known as the Mind-Brain Identity Theory that does not allow for mental states to be realized in organisms or computational systems that do not have a brain. See "Criticisms of Type Physicality" at http://en.wikipedia.org/wiki/Identity_theory_of_mind#Multiple_realizability.

29 July, 2014 03:53AM by Duncan McGreggor (noreply@blogger.com)

July 28, 2014

Lubuntu Blog: Lubuntu 14.04.1 LTS

Ubuntu 14.04.1 LTS, the first update to Trusty Tahr and the recommended by-default setup for all flavours including, of course, Lubuntu, is already available. It comes with extended support (until 2019). Various bugs that required updates are rolled into this update:

  • A bug for btrfs has been found; this is scheduled to be fixed in 14.04.2
  • A bug affecting Alt-F2 is still present
  • For PPC the bug

28 July, 2014 08:24PM by Rafael Laguna (noreply@blogger.com)

hackergotchi for Maemo developers

Maemo developers

Android image/kernel building/flashing - A *VERY* short guide :-)

This week, I had to go through the process of building and installing the Android OS and kernel. And it was a lot better than 6 months ago (maybe because I built it for a device and not for the emulator?). I compiled the images on Ubuntu 12.04 and used a Samsung Galaxy Nexus device (maguro, with tuna as the kernel). Therefore, I decided to summarize the steps that I took. This mini-tutorial is a lot shorter and simpler (and really works!!).

1. Android OS

1.0 Setting up the building environment

Check these instructions (here and here) to set up the basic environment and download the code. I used the branch [android-4.3_r1.1].

1.1 Compiling the Android OS

a. Download and unpack the manufacturer drivers from this link. They have to be unpacked into the directory [android_source_code]/vendors -- but don't worry, as the .zip files contain a script that does all the work for you.

b. Once the drivers are in the proper place, run the following commands:

  @desktop:$ cd [android_source_code]
  @desktop:$ make clobber
  @desktop:$ lunch full_maguro-userdebug
  @desktop:$ make -j4

It takes a long time to compile the image.

After these steps, the Android OS is ready.

1.2 Flashing the device with the new Android OS

Now, you need two tools from the Android SDK: adb and fastboot. These tools are located in the folder [android_sdk]/platform-tools.

a. Reboot the device in the bootloader mode -- hold VolumeDown and VolumeUp and then press the PowerUp button.

b. Connect the USB cable.

c. Run the following commands:

  @desktop:$ export PATH=$PATH:[android_sdk]/platform-tools
  @desktop:$ cd [android_source_code]
  @desktop:$ sudo fastboot format cache
  @desktop:$ sudo fastboot format userdata
  @desktop:$ sudo ANDROID_PRODUCT_OUT=[android_source_code]/out/target/product/maguro/ fastboot -w flashall

After these steps, reboot the device. A clean installation will take place. To check the new version of your device, go to "Settings" --> "About Phone" and check "Model number": now, it should be "AOSP on Maguro" (check the attached image).



2. Android Kernel

Ok. Now, we have the AOSP in place and we need to compile a new kernel. But why do you need to compile and install a new kernel? Oh, well, let's say that you want to apply some patches or that you need to change the kernel to enable Linux module support (the default Android Linux Kernel does not support modules).

2.0 Setting up the building environment

If you have built the Android OS before, you don't need anything special for the kernel building. I used the official code from https://android.googlesource.com/kernel/omap.git, branch android-omap-tuna-3.0-jb-mr2.

2.1 Compiling the Kernel

First, you need to set some variables that are important for the building process (ARCH and CROSS_COMPILE):

  @desktop:$ export ARCH=arm
  @desktop:$ export CROSS_COMPILE=[android_source_code]/prebuilts/gcc/linux-x86/arm/arm-eabi-4.7/bin/arm-eabi-

Now, you have to generate a .config file, which contains all the options for the kernel build. By running the following commands you generate a basic .config file for Android.

  @desktop:$ cd [android_kernel_code]
  @desktop:$ make tuna_defconfig

Sometimes, you need to set specific entries in the .config file to enable/disable certain features of the kernel. For this specific example, let's set the option CONFIG_MODULES to y (the entry in the .config file should read CONFIG_MODULES=y). With CONFIG_MODULES set to y, it is possible to insert/remove kernel modules.
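One hedged way to flip that option without editing .config by hand is the kernel's own scripts/config helper (assuming it is present in this tree); running make oldconfig afterwards lets kconfig resolve any dependent options:

  @desktop:$ cd [android_kernel_code]
  @desktop:$ ./scripts/config --enable CONFIG_MODULES
  @desktop:$ make oldconfig

Now, let's build the kernel: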

  @desktop:$ cd [android_kernel_code]
  @desktop:$ make

(it takes some time to compile the kernel)

2.2 Preparing the kernel for installation

The kernel image is almost ready: it's still necessary to wrap it up properly to flash it into the device. The Android source code contains scripts that do the work for us. Consider that the image was generated at [android_kernel_code]/arch/arm/boot/zImage.

  @desktop:$ cd [android_source_code]
  @desktop:$ export TARGET_PREBUILT_KERNEL=[android_kernel_code]/arch/arm/boot/zImage
  @desktop:$ make bootimage

At the end, a custom image is ready for installation at [android_source_code]/out/target/product/maguro/boot.img

2.3 Flashing the device with the new Kernel
 
Now, everything is in place and we can finally flash our kernel image. To do so:

a. You need to boot the device in bootloader mode (hold VolumeDown and VolumeUp and then press the PowerUp button)

b. Connect the USB cable

c. Run the following commands

  @desktop:$ cd [android_source_code]
  @desktop:$ sudo ANDROID_PRODUCT_OUT=[android_source_code]/out/target/product/maguro/ fastboot flash boot [android_source_code]/out/target/product/maguro/boot.img

After these steps, reboot the device. A clean installation will take place. To check the new version of your kernel, go to "Settings" --> "About Phone" and check "Kernel version": you will see a different name for your kernel image (as with the previous image).



28 July, 2014 07:06PM by Raul Herbster (raulherbster@gmail.com)

hackergotchi for Cumulus Linux

Cumulus Linux

7 Reasons Why I Love Working at Cumulus Networks (And Why You Might Too)

1. The Linux Revolution

Linux Revolution

Let’s start with the most obvious reason here. Cumulus Networks developed Cumulus Linux, which is an operating system for networking hardware that is reshaping the industry. Most employees here left their previous jobs behind because they saw what we were doing and could not wait to be part of the movement. Whether you’re a networking guru or a Linux master, the change we’re bringing should be as exciting and intriguing to you as it is to us. Every day I feel so privileged to be right in the middle of such a revolutionary movement as it’s unfolding.

2. Rocket Turtle

Rocket Turtle

Rocket Turtle (#rocketturtle) is our beloved mascot. Created by the brilliant minds of our marketing team, Rocket Turtle represents Cumulus Networks literally all around the world. But, of course, when it (I only call it “it” because the turtle is gender neutral for now, though we’ve had a lot of debate on this topic around the office) is not eating a croissant in front of the Eiffel Tower or something, it’s hanging out at one of our offices.

3. Awesome Culture

Cumulus Networks culture

When I was interviewing for Cumulus, one thing that really made me want to work here was what our VP of Engineering told me when I asked him what he looks for in his candidates. He said, “I believe aptitude can be taught, but not attitude. It doesn’t matter if you’re the most intelligent and talented person. If you can’t get along with people, you’re just not fit to work at Cumulus.” This was not a mere statement but rather a philosophy that is deeply ingrained in the company’s culture. At Cumulus, we strongly reinforce respect for each other and encourage support and growth. We also have a quirky but awesome sense of humor, if the turtle hasn’t given that away yet.

4. Wonderful Benefits

Cumulus benefits

Our extremely positive and encouraging culture is also reflected in our policies, especially the ones concerning benefits. We believe that employees that work as hard as they do here also deserve kick-ass benefits that can help balance their lives. On top of our highly competitive insurance policies, we also offer the Quality of Life benefit that allows employees to get reimbursed for expenses spent on their well-being. Oh, did I mention that we have a very, very “flexible” PTO policy? ;)

5. The Cumulus Networks exec team

Jason Martin zooming through the office on a skateboard and Reza Malekzadeh keeping the office safe.

You can tell so much about a company by looking at the executives. They set the tone for the entire company, and this is true at Cumulus as well. Our exec team not only advocates creativity, innovation, and success, but they also demonstrate humility, kindness, and healthy work-life balance. They’re also extremely interesting people who are serious about music, sports, and food & wine too. There’s never a dull day with them around!

6. Rapidly growing team

cumulus networks growing

It’s a very exciting time at Cumulus right now. We are not only growing in revenue and recognition, but we are also growing in size! That means our team is snowballing with brilliance and success. We currently have our largest team in Mountain View, CA, two smaller teams in San Francisco and Raleigh, NC, and other individuals who are working on their own in different parts of the world. As long as our customers keep wanting us, we will continue to grow, and things will shift quickly.

7. Exciting future ahead

JR Rivers

By now it should be clear to you that our success will not stop here. We aim to change the networking game forever, so we are looking at a bright future ahead. And it’s very exciting being in this period of preparation, growth, and anticipation of that future.

Now that you’ve got the inside scoop on what it’s like to work at Cumulus, you probably find yourself wondering how you can work with us as well. If you are a motivated, fun, creative, and unique individual, we may want to work with you, too! Check out our openings and follow us on LinkedIn for career opportunities!

May peace be with you and your switches.

Cheers,

Susie

The post 7 Reasons Why I Love Working at Cumulus Networks (And Why You Might Too) appeared first on Cumulus Networks Blog.

28 July, 2014 05:54PM by Susie Song

hackergotchi for Xanadu

Xanadu

How to upgrade older versions of Xanadu GNU/Linux

If you are a user of version 0.5.8 or any of the earlier releases, in this post we will show you how to upgrade your system so that you have the latest optimizations and updates of the distribution.

Users of version 0.5.8 (regardless of whether it is i386 or amd64) only need to run the following:

# apt update
# apt -y install xanadu-full
# apt -y full-upgrade

For earlier versions the process is a bit longer, since we first need to add the Xanadu repository to the sources list.

# echo "deb http://ppa.launchpad.net/sinfallas/xanadu/ubuntu trusty main" >> /etc/apt/sources.list
# echo "deb-src http://ppa.launchpad.net/sinfallas/xanadu/ubuntu trusty main" >> /etc/apt/sources.list
# apt update
# apt -y install xanadu-full
# apt -y full-upgrade

After this, whenever you perform a full system upgrade you will get the latest features included in upcoming releases.

Regards…


Tagged: updates, version

28 July, 2014 03:20PM by jose perez

hackergotchi for Xanadu developers

Xanadu developers

Xanadu GNU/Linux releases version 0.5.9

The Xanadu project has released its new version 0.5.9, whose main novelty is the debut of the minimal image for amd64, in addition to multiple optimizations and bug fixes.

Here is the link to the release announcement:

https://xanadulinux.wordpress.com/2014/07/28/lanzamiento-de-la-version-0-5-9/


Tagged: release, xanadu

28 July, 2014 02:50PM by jose perez

hackergotchi for Xanadu

Xanadu

Release of version 0.5.9

Better late than never. After multiple problems with the machine on which the Xanadu images are built, and thanks to our friends at viserproject.com who lent us one of their servers to build the images on, today, as we do every month, we present a new version of our distribution. This release includes the following changes:

* The “gigolo” package is added for managing connections to remote file systems.
* The “autojump” package is added to the list of recommended programs to install.
* The packages from the Xanadu repository are included; their purpose is to let us update the configuration files of various programs in later versions without having to repackage the original Debian installer.
* The post-installation and startup scripts for some Xanadu functions are moved to comply with the FHS standard.
* A bug is fixed in the “revision” script that prevented the kernel version from being detected correctly.
* A bug is fixed in the “variables” script that prevented some functions from being exported correctly.

Apart from these changes, the included packages have been updated, we debut the “minimal” image for amd64, and the image that included LXQT has been removed.

The new links can be found in the download section:

https://xanadulinux.wordpress.com/descarga/

NOTE: There is a bug in the “plymouth” package that prevents the banner from being displayed correctly during boot.


Tagged: download, release, version

28 July, 2014 02:08PM by jose perez

hackergotchi for TurnKey Linux

TurnKey Linux

tmux is a superior alternative to screen

Today was the first day I stopped using screen and started using tmux, a superior alternative that supports a more complex range of splits and has a nicer interface. It's a bit different from screen in that it has this concept of windows and panes. A tmux pane is a window (e.g., shell session) in screen terminology. A tmux window is a layout of panes (e.g., two windows side by side). A tmux window could have only one pane, or it could have an arbitrarily complex configuration of panes.
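To make the distinction concrete, here is a minimal sketch that builds a two-pane window from a plain shell (the session name "work" is just an example):

  tmux new-session -d -s work     # start a detached session with one window and one pane
  tmux split-window -h -t work    # split that window into two panes, side by side
  tmux attach -t work             # attach and see the two-pane layout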

My initial impetus for investigating screen alternatives was that I hated having to run more than one terminal on my screen to get the layout I wanted. I also find it infuriating to have to use the mouse to set up my terminal workspace.

Important commands:

tmux            start new tmux session
tmux attach     attach to existing tmux session

Important default key bindings

Tmux uses C-b instead of C-a as the control character, which is great since C-a actually means something in shell-land (move to the start of the command).

Here is a shorthand listing of keys. Prepend C-b to invoke them.

General:

?                           full list of key bindings
d                           detach from tmux session
C-z                         suspend tmux

Pane keys:

"                           split horizontally
%                           split vertically
x                           kill current pane

up/down/right/left          navigate between window panes
                            (you can also click on the pane with the
                            mouse)

C-up/down/right/left        resize window pane

space                       automatic re-layout of panes
}                           swap pane in layout forward
{                           swap pane in layout backward

Cut and paste / clipboard keys:

[                           copy mode (cut and paste with keyboard)

                            Copy mode has its own key bindings:

                                cursors + page-up/page-down: move around
                                space: start selection
                                enter: copy selection

]                           paste buffer

C-c                         copy buffer to X clipboard
C-v                         paste buffer from X clipboard

Window keys:

c                           create new window
1-9                         switch to window number ...
n                           next window
p                           previous window
,                           rename window

!                           maximize current pane

Advanced:

:                           command line mode
:list-commands              list commands

My tmux configuration file:

# quote the heredoc delimiter so that $(xclip -o) is written to the file
# literally instead of being expanded when the config is created
cat > $HOME/.tmux.conf << 'EOF'

# hide the status bar and number windows from 1
set -g status off
set -g mouse-select-pane on
set -g base-index 1
set-window-option -g mode-keys vi
set-window-option -g mode-mouse on

# wire the tmux paste buffer to the X clipboard via xclip
bind-key C-c run-shell "tmux show-buffer | xclip -i"
bind-key C-v run-shell "tmux set-buffer \"$(xclip -o)\"; tmux paste-buffer"

EOF

28 July, 2014 12:22PM by Liraz Siri

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Kerensa: Until Next Year CLS!


Community Leadership Summit 2014 Group Photo

This past week marked my second year helping out as a co-organizer of the Community Leadership Summit. This Community Leadership Summit was especially important because not only did we introduce a new Community Leadership Forum but we also introduced CLSx events and continued to introduce some new changes to our overall event format.

Like previous years, the attendance was a great mix of community managers and leaders. I was really excited to have an entire group of Mozillians who attended this year. As usual, my most enjoyable conversations took place at the pre-CLS social and in the hallway track. I was excited to briefly chat with the Community Team from Lego and also some folks from Adobe and learn about how they are building community in their respective settings.

I’m always a big advocate for community building, so for me, CLS is an event I try to make it to each and every year because I think it is great to have an event for community managers and builders that isn’t limited to any specific industry. It is really a great opportunity to share best practices and learn from one another so that everyone mutually improves their own toolkit and techniques.

It was apparent to me that this year there were even more women than in previous years and so it was really awesome to see that considering CLS is often times heavily attended by men in the tech industry.

I really look forward to seeing the CLS community continue to grow, and look forward to participating in and co-organizing next year’s event and possibly even kicking off a CLSxPortland.

A big thanks to the rest of the CLS Team for helping make this free event a wonderful experience for all, and to this year’s sponsors O’Reilly, Citrix, Oracle, Linux Fund, Mozilla and Ubuntu!

28 July, 2014 12:00PM

Michael Hall: Who do you contribute to?

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific.  On the one hand you have a nebulous group of people, most of which you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is that you really care about, who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

Owners

The owner of a project has a distinct privilege in a community, they are ultimately the source of all recognition in that community.  Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors.  As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

Leaders

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project, they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grown beyond a very small size it must produce leaders, either through a formal or informal process, otherwise the availability of recognition will suffer.

Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become one still remain with the project, and end up making very significant contributions to it and the community over time.  Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself.  The more and better contributions you make, the more your reputation grows.  Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes.  These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders.  Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example and for keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that your flow of contributions will also dry up.

28 July, 2014 12:00PM

Kubuntu: New Kubuntu Plasma 5 Flavour in Testing

Kubuntu Plasma 5 ISOs have started being built. These are early development builds of what should be a Tech Preview with our 14.10 release in October. Plasma 5 should be the default desktop in a future release.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

28 July, 2014 10:33AM

Jonathan Riddell: Kubuntu Plasma 5 ISOs Rolling

KDE Project:

Your friendly Kubuntu team is hard at work packaging up Plasma 5 and making sure it's ready to take over your desktop sometime in the future. Scarlett has spent many hours packaging it and now Rohan has spent more hours putting it onto some ISO images which you can download to try as a live session or install.

This is the first build of a flavour we hope to call a technical preview at 14.10. Plasma 4 will remain the default in 14.10 proper. As I said earlier it will eat your babies. It has obvious bugs like the kdelibs4 theme not working and mouse themes only sometimes working. But also be excited, and if you want to make it beautiful we're sitting in #kubuntu-devel having a party for you to join.

I recommend downloading by Torrent or failing that zsync, the server it's on has small pipes.

Default login is blank password, just press return to login.

28 July, 2014 10:20AM

Rohan Garg: Plasma5 : Now more awesome as a Kubuntu ISO

Kubuntu Next

The Kubuntu team is proud to announce the immediate availability of the Plasma 5 flavor of the Kubuntu ISO, which can be found here (here’s a mirror to the torrent file in case the server is slow). Unlike its Neon 5 counterpart, this ISO contains packages made from the stock Plasma 5.0 release. The ISO is meant to be a technical preview of what is to come when Kubuntu switches to Plasma 5 by default in a future release of Kubuntu.

A special note of thanks to the Plasma team for making a rocking release. If you enjoy using KDE as much as we do, please consider donating to Kubuntu and KDE :)

NB: When booting the live ISO up, at the login screen, just hit the login button and you’ll be logged into a Plasma 5 session.


28 July, 2014 09:39AM

Duncan McGreggor: The Future of Programming - Themes at OSCON 2014

Series Links


A Qualitative OSCON Debrief

As you might have noticed from the OSCON Twitter-storm this year, the conference was a blast. Even if you weren't physically present, given the 17 tracks, you can imagine that the presentations -- and subsequent conversations -- were deeply varied.

This was the second OSCON I'd attended; the first was in 2008 as a guest of Michael Bernstein, a friend who was speaking there. OSCON 2008 was a zoo - I'm not sure of the actual body count, but I've heard that attendees + vendors + miscellaneous topped 12,000 people over the course of the week (I would love to hear if someone has hard data on that -- googling didn't reveal much). OSCON 2008 was dominated by Big Data, Hadoop, endless buzzword bingo, and business posturing by all sorts. The most interesting bits of that conference were the outlines that formed around the conversations people weren't having. In fact, over the following 6 months, that's what I spent my spare time pondering: what people didn't say at OSCON.

This year's conference seemed like a completely different animal. It felt like easily 1/2 to 1/3rd the number of attendees in 2008. Where that one had all the anonymizing feel of rush-hour in a major metropolitan hub, OSCON 2014 had a distinctly small-town vibe to it -- I was completely charmed. Conversations (overheard as well as participated in) were not littered with examples from the latest bizspeak, but rather focused on essence. The interactions were not continually distracted, but rather steadily focused, allowing people to form, express, and dispute complete thoughts with their peers.


Conversations

So what were people talking about this year? Here are some of the topics I heard covered during lunches, in hallways, and at podiums; at pubs, in restaurants and at parks [1]:
  • What communities are thriving?
  • Which [projects, organisations, companies, etc.] are treating their people right?
  • What successful processes are being followed at [project, organisation, etc.]?
  • Who is hiring and why should someone want to work there?
  • Where can I go to learn X? Who is teaching X? Who shares the most about X?
  • Which [projects, organisations] support X?
  • Why don't more [people, projects, organisations] care about [possible future X]?
  • Why don't more [people, projects, organisations] spend more time investigating the history of X for "lessons learned"?
  • There was so much more X in computing during the 60s and 70s -- what happened? [2]
  • Why are we reinventing X?
  • When is X going to be invented, and who's going to do it?
  • Everything is changing! I can't keep up anymore.
  • I want to keep up, but how?
  • Why can't we stop making so many X?
  • Nobody cares about Y anymore; we're all doing X now.
  • Full stack developers!
  • Haskell!
  • Fault-tolerant systems!

After lots of reflection, here's how I classified most of the conversations I heard:
  • Developing communities,
  • Developing careers and/or personal/professional qualities, and
  • Developing software, 

along lines such as:
  • Effective maintenance, maturity, and health,
  • Focusing on the "art",  eventual mastery, and investments of time,
  • Tempering bare pragmatism with something resembling science or academic excellence,
  • Learning the new to bolster the old,
  • Inspiring innovation from a place of contemplation and analysis,
  • Mining the past for great ideas, and
  • Figuring out how to better share and spread the adoption of good ideas.


Themes

Generalized to such a degree, this could have been pretty much any congregation of interested, engaged minds since the dawn of civilization. So what does it look like if we don't normalize quite so much? Weighing these with what may well be my own bias (and the bias of like-minded peers), I submit to your review these themes:

  • A very strong interest in programming (thinking and creating) vs. integration (assessing and consuming).
  • An express desire to become better at abstraction (higher-order functions, composition, and types) to better deal with growing systems complexities.
  • An interest in building even more complicated systems.
  • A fear of reimplementing past mistakes or of letting dust gather on past intellectual achievements.

As you might have guessed, these number very highly among the reasons why the conference was such an unexpected pleasure for me. But it should also not come as a surprise that these themes are present:

  • We have had several years of companies such as Google and Amazon (AWS) building and deploying some of the most sophisticated examples of logic-made-manifest in human history. This has created perceived value in our industry and many wish to emulate it. Similarly, we have single purpose distributed systems being purchased for nearly 20 billion USD -- a different kind of complexity, with a different kind of perceived reward.
  • In the 70s and 80s, OOP adoption brought with it the ability to create large software systems in ways that people had not dared dream or were impractical to realize. Today's growing adoption of the Functional paradigm is giving early signs of allowing us to better integrate complex systems with more predictability and fewer errors.
  • Case studies of improvements in productivity or the capacity to handle highly complex or previously intractable problems with better abstractions, has ignited the passions of many. Not wanting to limit their scope of knowledge or sources of inspiration, people are not simply limiting themselves to the exploration of such things as Category Theory -- they are opening the vaults of computer science with such projects as Papers We Love.

There's a brave new world in the making. It's a world for programmers and thinkers, for philosophers and makers. There's a lot to learn, but it's really not so different from older worlds: the same passions drive us, the same idealism burns brightly. And it's nice to see that these themes arise not only in small, highly specialized venues such as university doctoral programs and StrangeLoop (or LambdaJam), but also in larger intersections of the industry like OSCON (or more general-audience ones like Meetups).

Up next: Adopting the Functional Paradigm?
Previously: An Overview


Footnotes

[1] It goes without saying that any one attendee couldn't possibly be exposed to enough conversations to form a perfectly accurate sense of the total distribution of conversation topics. No claim to the contrary is being made here :-)

[2] I strongly adhere to the multifaceted hypothesis proposed by Bret Victor
here in the section titled "Why did all these ideas happen during this particular time period?"


28 July, 2014 05:25AM by Duncan McGreggor (noreply@blogger.com)

Duncan McGreggor: The Future of Programming - An Overview

Art by Philip Straub
There's a new series of blog posts coming, inspired by on-going conversations with peers, continuous inspection of the development landscape, habitual navel-gazing, and participation at the catalytic OSCON 2014. As you might have inferred, these will be on the topic of "The Future of Programming."

Not to be confused with Bret Victor's excellent talk last year at DBX, these posts will be less about individual technologies or developer user experience, and more about historic trends and viewing the present (and near future) through such a lens.

In this mini-series, the aim is to present posts on the following topics:

I did a similar set of posts on the future of cloud computing, conceived in late 2008 and published in 2009, entitled After the Cloud. It was a very successful series and the cloud industry seems to be heading towards some of the predictions made in it -- ZeroVM and Docker are an incremental step towards the future of distributed processes/functions outlined in To Atomic Computation and Beyond.

In that post, though, are two quotes from industry greats. These provide an excellent context for this series as well, hinting at an overriding theme:
  • Alan Kay, 1998: A crucial key to growing large systems is effective communications between components.
  • Joe Armstrong, 2004: To effectively model and solve problems in a distributed manner, we need concurrency... this is made easier when we isolate processes and do not share data.

In the decade since these statements were made, we have seen individuals, projects, and companies take that vision to heart -- and succeed as a result. But as an industry, we continue to struggle with the definition of our art; we are still tormented by change -- both from within and externally -- and do not seem to adapt to it well.

These posts will peer into such places ... in the hope that such inspection might guide us better through the tangled forest of our present into the unimagined forest of our future.

28 July, 2014 05:17AM by Duncan McGreggor (noreply@blogger.com)

Benjamin Kerensa: Mozilla at O’Reilly Open Source Convention


Mozilla OSCON 2014 Team

This past week marked my fourth year of attending O’Reilly Open Source Convention (OSCON). It was also my second year speaking at the convention. One new thing that happened this year was I co-led Mozilla’s presence during the convention from our booth to the social events and our social media campaign.

Like each previous year, OSCON 2014 didn’t disappoint and it was great to have Mozilla back at the convention after not having a presence for some years. This year our presence was focused on promoting Firefox OS, Firefox Developer Tools and Firefox for Android.

While the metrics are not yet finished being tracked, I think our presence was a great success. We heard from a lot of developers who are already using our developer tools and from a lot of developers who are not, many of whom we were able to educate about new features and why they should use our tools.


Alex shows an attendee the Firefox Dev Tools

Attendees were very excited about Firefox OS with a majority of those stopping by asking about the different layers of the platform, where they can get a device, and how they can make an app for the platform.

In addition to our booth, we also had members of the team such as Emma Irwin who helped support OSCON’s Children’s Day by hosting a Mozilla Webmaker event which was very popular with the kids and their parents. It really was great to see the future generation tinkering with Open Web technologies.

Finally, we had a social event on Wednesday evening that was so popular the Mozilla Portland office was packed till last call. During the social event, we had a local airbrush artist doing tattoos, with several attendees opting for a Firefox tattoo.

All in all, I think our presence last week was very positive and even the early numbers look positive. I want to give a big thanks to Stormy Peters, Christian Heilmann, Robyn Chau, Shezmeen Prasad, Dave Camp, Dietrich Ayala, Chris Maglione, William Reynolds, Emma Irwin, Majken Connor, Jim Blandy, Alex Lakatos for helping this event be a success.

28 July, 2014 01:48AM

July 27, 2014

hackergotchi for Blankon developers

Blankon developers

Ainul Hakim: Happy Eid al-Fitr 1435H

Ramadan 1435 Hijri will soon leave us; how quickly time passes. A month full of mercy, a month full of Quran recitation, a month full of religious study, a month full of collective good deeds. A month in which children are delighted to go to the mosque.

O Allah, grant us Your strength to make it through the next 11 months, and grant us another chance to taste the sweetness of the blessed month of Ramadan.

Our whole family says “Taqobalallahu minna waminkum”.
We apologize for any mistakes on our part; God willing, we have also forgiven you.

Ainul Hakim and family

Eid al-Fitr 1435 H


27 July, 2014 09:46AM

MDAMT: Information Technology Hopes for the New Government

With a new government about to take office, I got to thinking about what could be improved in it. Regardless of the ongoing controversy, and of who will eventually lead the new government, based on my field and my experience interacting with the general public and with government, there are at least a few aspects that I think should be considered when drawing up the blueprint for the IT (Information Technology) foundation of our country. This post is a bit of a mix between what I hope will happen within government itself and what I hope will happen in society. Those of you who have access to the new government are welcome to use this article for consideration.

F/OSS

A policy of using free/open source software, a.k.a. F/OSS (Free/Open Source Software), in government must be adopted for a number of reasons. Among them are independence, transparency, public involvement and cost.

Independence is one of the pillars of an advanced nation. By reducing our dependence on other countries, we are freer to decide what we want and in which direction we want to go. The vendors or communities engaged for implementation should come from within the country, so that in addition to the IT benefits we also strengthen the economy and the creative industry in this field. The hardware sold in Indonesia should also be accessible and usable from F/OSS-based systems, so that an institution no longer needs to replace its computers' operating system just to be able to print. A legal framework is needed here so that all parties can access information without being blocked by proprietary systems.

Transparency lets the public understand what is being done in the IT systems that affect their daily lives. By involving experts, the public can find out whether there is foul play in the handling of a permit, in a purchasing process, in the selection of a tender winner, and so on. One can of course argue that the running system does not necessarily use the same source code as the published one, but this can still be addressed with software engineering methods, for example by directly running unit tests carried out by an appointed team.

Members of the public who are unhappy with the performance of a system can scrutinize it and even propose improvements to it. Without an open development practice, this kind of public involvement is impossible. Other things covered here include the discovery of programming errors or settings that keep the system from running as it should. There is of course the counter-argument that opening the source code makes it easier for people to break into the system. Here the system's "owner" needs a partner from the community, deliberately brought on board to help monitor and audit any leaks here and there. In this approach, F/OSS is not only used as the basis of the implementation; the whole implementation process follows F/OSS practices.

I put cost last because the issue of being free of charge is in fact not the strength of F/OSS. There is economic value, but the points above are more important than the economic value itself, as long as the money circulates within this country.

I have previously written about three F/OSS myths that may be food for thought.

Environmentally friendly

The IT solutions chosen should still take the environment into account over the long term. Hardware with low energy consumption should be given priority. Renewable energy infrastructure needs to be expanded and used by all parties. Regions in eastern Indonesia with limited access to electricity could then put IT to much better use.

In addition, printing on paper should be avoided. The consequence is that we need other tools that can replace paper. One system that often uses paper is correspondence. Letters between institutions should use Simaya (initiated by KOMINFO), which manages correspondence down to the disposition level and thereby reduces paper use. Granted, we often hear that officials prefer reading letters or writing disposition sheets on paper, but perhaps the officials should simply be trained harder (not replaced just yet!), and a legal framework is needed for this.

Interoperability

In my various interactions with government, interoperability has turned out to be crucial for reducing the headaches of designing systems. Right now almost every institution has its own standards for data exchange, and some even use proprietary data formats. There needs to be a body that works out how an institution can (for example) retrieve information from BKN, or information about someone's taxes, over a data communication channel in an agreed data format. Or, if such a body already exists, there needs to be further legal pressure so that every institution complies. The data exchanged may use existing formats or a new type of data, as long as the format is open. With interoperability, a great deal of time (plus cost plus effort) can be cut whenever one institution needs data from another. It also helps members of the public who need access to government information systems.
Regarding interoperability, it is also crucial to determine whether the data being exchanged is really valid and authentically comes from the right institution. At the moment this is hard to prove because there is no PKI supporting it.

Open data

Data that can be presented to the public should be published under a free license. Since the public is becoming ever more capable, the published data should preferably be raw, yet complete. Creative industries can spring up and make use of this data, increasing its usefulness and value. First-hand government data such as tables, maps, sensor readings and so on is still very scarce in public. Perhaps there needs to be a data publication hub coordinated by a ministry (perhaps KOMINFO) so that there is a single point where the public can look for data.

Internet

Fast internet, for what? A fairly popular question a while back. But if we look more closely, perhaps the question really is valid and needs an answer. One person's internet needs are not necessarily the same as another's, and if we can collect enough data, we can optimize which side of the internet needs improving: the regulatory side, the implementation side, and so on. The collected data could be used by government, operators and the public to deliver good internet service. But about that word "fast": how fast does the internet have to be to be considered "fast"? What if the government issued a regulation so that operators no longer advertise the speed of their services with "up to", but instead use "starting from", stating the minimum speed the operator commits to delivering?


That is all.

There are already many countries we can take as examples of how to carry this out; it is now up to us whether we want to do it or not.

27 July, 2014 07:31AM by Mohammad Anwari (noreply@blogger.com)

hackergotchi for Parsix developers

Parsix developers

New security updates have been released for Parsix GNU/Linux 6.0 (Trev) and 7.0...

New security updates have been released for Parsix GNU/Linux 6.0 (Trev) and 7.0 (Nestor). Please see http://www.parsix.org/wiki/Security for details.

27 July, 2014 07:03AM by Parsix GNU/Linux

hackergotchi for TurnKey Linux

TurnKey Linux

Chromebook Acer C720 review: the browser is the operating system and it doesn't suck

It's 2014 and the present is not my grandfather's future. There's no race to colonize the moon and we are most assuredly not zipping around in jetpacks and flying cars. Most predictions fail, but some ghosts of future past are alive and well.

20 years ago I had my first run-in with a web browser. I was browsing the NASA website at an Internet conference and it was a revelation! The BBS community I had grown up in was dead in the water. This would change everything. A couple of years later, the developer of Mosaic, Marc Andreessen, by then heading Netscape, made a prescient prediction:

In the future - the browser will be the operating system.

Unsurprisingly, this mobilized Microsoft to crush Netscape in what would become known by some as the browser wars, and the start of the end for Microsoft's reign of terror by others.

Netscape didn't stick around long enough to see Andreessen's prediction come to pass, but some technology trends are seemingly inexorable, or perhaps self-fulfilling? 20 years after trying my first browser, I tried my first computer in which Andreessen's "the browser will be the operating system" vision has been fully realized. By Google, who supplanted Microsoft as the world's leading tech empire, in the form of a refurbished (never used) Acer C720 Chromebook which I picked up for a ridiculously low $180.

To my surprise... it's pretty good, and apparently I'm not the only one that thinks so because these things are flying off the shelves. The model I picked up is currently the most popular netbook on Amazon and is a steal even at the original price of $220.

chromebook acer c720

Specs

Intel Celeron 2955U 1.4 GHz (2 MB Cache)
2 GB DDR3L SDRAM
32 GB Solid-State Drive
12-Inch Screen, Intel HD Graphics, HDMI output
8.5-hour battery life
Weight is 1.25 KG

I've been using my new Acer C720 chromebook this week as a backup computer and it is surprisingly good, especially considering the price. This is the first Linux based laptop/netbook I would whole-heartedly recommend to non-tech savvy family members and friends.

What I like

  • Amazing value

    This thing costs about as much as the cheapest Intel NUC yet comes with memory, a hard drive, a screen and a keyboard - all in a very light, tightly integrated package.
     
  • Good hardware, good Linux integration
    • Cool and quiet
    • Amazing battery life
    • Very light
    • You can replace the SSD if you want more storage
    • Uses an open BIOS
    • Best out of the box Linux support of any laptop type device I have ever owned. With Chromebook Linux isn't an option, it's the default. So everything works.
    • Crazy fast boot/suspend time. Chromebook has the fastest Linux boot time I have ever experienced. I plugged in the device and about a second or two later I was configuring it. It only takes about 3-4 seconds to reboot the device.
    • Sleek button-less trackpad with support for gestures (e.g., click with two fingers for right click). The buttons are actually there, they're just hidden by the trackpad itself, so if you click on the left corner of the trackpad that's a left click and clicking on the right corner is a right click. I just click on the trackpad itself unless I need drag and drop.
    • Dedicated search button on the keyboard, doubles as an app launcher


chromebook search launcher

  • Innovative ChromeOS
    • ChromeOS is a lightning-quick Gentoo-based OS that performs extremely well, especially considering there are only 2 GB of RAM in there. Seems very stable.
    • The novelty of having Chrome run everything, including the shell. It's actually not that bad.
    • Very good integration with Google apps (big surprise!)
    • Surprisingly good usability. It's different from what I'm used to but I found it to be a consistent, tightly integrated experience.
    • I'm a big fan of text to speech which is how I get most of my reading done nowadays. I started with the Kindle and moved on to Moon Reader+ on my phone. So I was delighted to discover ChromeOS features the best text-to-speech integration in any OS via the built-in ChromeVox functionality. This is supposedly for vision-impaired users but I've been finding it useful to have it read web content (e.g., Wikipedia) to me over breakfast.

      To toggle ChromeVox on/off: Ctrl + Alt + Z
      Once ChromeVox is enabled, you change the speed with the { } keys. I read pretty quickly.
       
    • Keyboard shortcuts for everything with onboard help


chromebook shortcuts
 

  • High security, low maintenance
    • Auto-updating / self-maintaining, like the Nexus phones
    • Recovery mode resets to known good state.
    • Designed with security in mind from the bottom up (assuming you trust Google of course): in non developer mode it uses tamper resistant keys stored in a TPM chip to verify the boot chain from the bootloader up so that installing malware isn't possible without physical modification of the laptop.
       
  • Hacker friendly
    • No need to jailbreak: You don't have to "jailbreak" it to get full access to your device as it's pretty easy to put into developer mode where you have full root access, can boot any OS from USB. You just run a bunch of cryptic commands and you're set.
       
    • Runs Debian/Ubuntu side-by-side: In developer mode you can run alternate userlands (e.g., Debian/Ubuntu) side-by-side with ChromeOS in a chroot using crouton (see the sketch just after this list). If you want to run an alternate desktop environment besides ChromeOS (e.g., XFCE, GNOME, Unity, etc.) to get a more conventional usage experience you can use crouton to launch an X server in the chroot. If you just need command line access you can use crouton to enter the chroot from within a ChromeOS shell. Crouton supports multiple chroots. Initially, my main impetus for exploring crouton was getting VLC to work since the ChromeOS default player is too basic.
       
    • You don't have to keep ChromeOS: If you don't want to run ChromeOS you can replace it and install any operating system on this thing. Nice to know I have that freedom. Also, since ChromeOS is based on free software there shouldn't be issues with proprietary hardware support so other Linux distributions should run well on it.
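As mentioned under "Hacker friendly" above, a rough crouton sketch (based on my reading of the crouton README at the time; exact flags and release names may differ on your setup):

  sudo sh ~/Downloads/crouton -r trusty -t xfce   # install an Ubuntu 14.04 chroot with XFCE
  sudo startxfce4                                 # launch the XFCE session alongside ChromeOS
  sudo enter-chroot                               # or drop into the chroot's command line only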

What I don't like

  • This is a cloud computer: Out of the box, it's not totally useless offline, but not far from it.
  • It feels like I'm renting a computer from Google: Out of developer mode you are definitely playing 100% by Google's rules.
  • No native apps: the only thing you can install (outside of dev) mode are Chrome extensions.
     
  • Poor visibility of software updates: Chromebooks auto-update themselves silently and invisibly, even in dev mode. You agree to that when you first set up the computer. As far as I can tell there's no device-accessible changelog. Supposedly your system is getting the same updates discussed on the chrome releases blog but it's hard to tell. Maybe I just need to dig deeper in dev mode. I'm guessing the vast majority don't care anyway. At least not yet. They just want their Chromebook to work. It's a feature. Sure, if you don't trust Google or the OEM to auto-update your machine then you can turn off the update engine (in dev mode) but it's an out of the frying pan and into the fire (of exploitable security vulnerabilities) catch-22 type situation. By comparison on a regular Linux distribution you have the option to check the source code of each individual update (which you can build from source) before applying it. Bad stuff can still happen but less so.
     
  • Doesn't support my favorite ebook format - Mobipocket: This might eventually be fixed by a Chrome extension, but right now the only supported ebook format is epub. Converting my ebooks is a hassle.
     
  • Media player sucks: Built in media player is very limited compared with vlc. Not just in terms of support for subtitles, display ratios, but even basic stuff like codecs. The audio didn't work on half my sample videos.
     
  • Small stuff:
    • The keyboard shortcuts are not what I'm used to.
    • No Android style Desktop widgets? Why not?

27 July, 2014 05:55AM by Liraz Siri

July 25, 2014

hackergotchi for Cumulus Linux

Cumulus Linux

Sit Down for some “Coffee with Cumulus” + Other Cool Events

Coffee with Cumulus and other cool events happening.

Cumulus Networks is hosting many exciting webinars over the next few weeks! Let’s take a closer look at what we have coming up, and be sure to check out all of the links below to see what’s in the pipeline.

Beginning on July 31, Cumulus Networks will host a weekly morning intro and product overview, “Coffee with Cumulus”. Read more about the July 31 webinar here. I’m really excited about this and already have my favorite coffee flavors picked out. For the first 30-minute live demo, I’m definitely going to drink a hazelnut latte while I moderate the live demo with one of our awesome Customer Solutions Engineers, Todd Craw. Initially, the sessions will start with a 9am live intro and product overview. Then in a few weeks we will expand into other parts of the world by adding additional introductory product demos in additional time zones. Why? Because I think of coffee as a morning beverage and I don’t believe people in other parts of the world should be drinking coffee at 3am.

Coffee with Cumulus

Next week, join us for a discussion about what it takes to truly integrate a virtual network platform with an underlay physical network for a scalable data center orchestration, automation and multi-tenancy solution over high-capacity IP fabrics. On August 6th we are excited to team up with the VMware NSX team for an engaging webcast. Now that Cumulus Linux integrates with NSX Layer 2 gateway services, just about anyone can now connect virtual workloads to physical workloads with almost zero performance impact.

And, if you happened to miss our most recent discussion about Network Automation and ONIE, you definitely missed out on a few really great live webcasts.

Sound cool? We have a ton of great webinars coming up in the next few months. Also, if there is a webinar topic you are interested in hearing down the road, feel free to send me an email.

Oh, and if you aren’t following us on Twitter yet…. What are you waiting for? Follow us @cumulusnetworks. Share a picture of your pet turtle with #rocketturtle.

The post Sit Down for some “Coffee with Cumulus” + Other Cool Events appeared first on Cumulus Networks Blog.

25 July, 2014 09:10PM by Carrie Smith

hackergotchi for Ubuntu developers

Ubuntu developers

Xubuntu: Xubuntu 14.04.1 released


Xubuntu 14.04 Trusty Tahr

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.04.1. Xubuntu 14.04 is an LTS (Long-Term Support) release and will be supported for 3 years. This is the first point release of its cycle.

The final release images are available as Torrents and direct downloads at http://cdimage.ubuntu.com/xubuntu/releases/trusty/release/

As the main server will be very busy in the first days after the release, we recommend using the Torrents wherever possible.

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Bug fixes for the first point release

  • Black screen after wakeup from suspending by closing the laptop lid. (1303736)
  • Light Locker blanks the screen when playing video. (1309744)
  • Include MenuLibre 2.0.4, which contains many fixes. (1323405)
  • The documentation is now attributed to the Translators.

Highlights, changes and known issues

The highlights of this release include:

  • Light Locker replaces xscreensaver for screen locking, a setting editing GUI is included
  • The panel layout is updated, and now uses Whiskermenu as the default menu
  • Mugshot is included to allow you to easily edit your personal preferences
  • MenuLibre for menu editing, with full Xfce support, replaces Alacarte
  • A community wallpapers package, which includes work from the five winners of the wallpaper contest
  • GTK Theme Config to customize your desktop theme colors
  • Updated artwork, including various enhancements to themes as well as a new default wallpaper

Some of the known issues include:

  • Window manager shortcut keys don’t work after reboot (1292290)
  • Sorting by date or name not working correctly in Ristretto (1270894)
  • Due to the switch from xscreensaver to light-locker, some users might have issues with timing of locking; removing xscreensaver from the system should fix these problems
  • IBus does not support certain keyboard layouts (1284635). Only affects upgrades with certain keyboard layouts. See release notes for a workaround.

To see the complete list of new features, improvements and known and fixed bugs, read the release notes.

25 July, 2014 06:55PM

Ronnie Tucker: Ladies and gentlemen, Full Circle #87 has arrived.


This month:

* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2.
* Graphics : Inkscape.
* Book Review: Puppet
* Security – TrueCrypt Alternatives
* CryptoCurrency: Dualminer and dual-cgminer
* Arduino
plus: Q&A, Linux Labs, Ubuntu Games, and Ubuntu Women.

Get it while it’s hot!
http://fullcirclemagazine.org/issue-87/

25 July, 2014 04:47PM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Attackers abusing Internet Explorer to enumerate software and detect security products

During the last few years we have seen an increase in the number of malicious actors using tricks and browser vulnerabilities to enumerate the software that is running on the victim’s system using Internet Explorer.

In this blog post we will describe some of the techniques that attackers are using to perform reconnaissance that gives them information for future attacks. We have also seen these techniques being used to decide whether or not to exploit the victim based on the antivirus software detected, the versions of potentially vulnerable software, or the presence of certain security features such as the Enhanced Mitigation Experience Toolkit (EMET). EMET is a Microsoft tool that uses security mitigations to prevent vulnerabilities from being successfully exploited. This makes it more difficult for attackers – so they would prefer to avoid it.

1. Abusing res:\\

 The first technique we are describing affects Internet Explorer 8 and earlier. Internet Explorer blocks attempts to access the local file system using “file://” but it used to be possible to access image files within a resource section of a DLL/EXE. In a previous blog post we mentioned how attackers were using this technique as part of a waterhole campaign affecting a Thailand NGO. In that case we found the following code in the HTML of the affected website:

The resList array contains a list of executable files with resource sections containing an image file. An example using explorer.exe:

 {id: 'Windows Explorer', res: 'res://explorer.exe/#2/#143'}
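The full script only appeared as a screenshot in the original post; the probing pattern behind entries like the one above generally looks something like this (an illustrative sketch, not the attackers’ actual code):

  // for each entry, try to load the image stored in the executable's resource
  // section; onload only fires if the file exists, revealing installed software
  var resList = [ {id: 'Windows Explorer', res: 'res://explorer.exe/#2/#143'} ];
  function probe(entry) {
      var img = new Image();
      img.onload = function () { /* entry.id is present on the system */ };
      img.onerror = function () { /* entry.id was not found */ };
      img.src = entry.res;
  }
  for (var i = 0; i < resList.length; i++) probe(resList[i]);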

If we take a look at the resource sections present on explorer.exe we can find a resource named 143:

 

The resList array contains a big list of executable files that is used to detect antivirus software and VMware (probably to check whether it is an analysis machine used by a security researcher):

The complete list of detected software is:

  •  Webroot
  •  Sophos
  •  Microsoft Security Client
  •  F-Secure
  •  BitDefender
  •  Norton Antivirus
  •  McAfee Antivirus
  •  Kingsoft Antivirus
  •  Avira Antivirus
  •  Kaspersky Antivirus
  •  360 AV
  •  ESET NOD32
  •  Trend Micro Internet Security
  •  Rising Antivirus
  • Vmware Player
  • Vmware Tools

We found similar code being used by the Sykipot actors in combination with a phishing scheme. In that case the list of software was much longer and it detected common software along with security products:

The list of detected software:

  • Microsoft Office (all versions)
  • WPS (Kingsoft Office)
  • Winrar
  • Winzip
  • 7z
  • Adobre Reader (all versions)
  • Skype
  • Microsoft Outlook (all versions)
  • Yahoo Messenger (all versions)
  • Flashget
  • Thunder
  • Emule
  • Serv-U
  • RAdmin
  • UltraVNC
  • pcAnywhere
  • RealVNC
  • Fetion
  • Google Talk
  • AliIM
  • POPO
  • ICQLite
  • ICQ
  • Tencent Messenger
  • Sina UC
  • QQ
  • BaihI
  • AIM
  • Microsoft Messenger
  • Windows Live MSN
  • Windows Media Player (all versions)
  • SSReader
  • PPStream
  • Storm Player
  • TTPlayer
  • Haojie SuperPlayer
  • Winamp
  • KuGoo
  • UltraEdit
  • Sylpheed
  • ACDSee
  • Photoshop
  • Foxmail
  • Gmail Notifier
  • Windows Live Mail
  • Adobe Media Player
  • Flash CS
  • Dreamwear
  • Fireworks
  • Delphi
  • Java
  • VMware Tools
  • Tracks Eraser
  • Microsoft Virtual PC
  • VMware
  • Microsoft ActiveSync
  • Microsoft .NET
  • PGP
  • CCClient
  • DriverGenius
  • Daemon Tools
  • MagicSet
  • Baidu Tool
  • Foxit Reader
  • MySQL Server (all versions)
  • SQLyog
  • Firefox
  • World IE
  • TT IE
  • Google Chrome
  • Maxthon
  • 360 IE
  • Opera
  • Safari
  • SaaYaa
  • GreenBrowser

Security software detected:

  • Microsoft Security Essentials
  • AVG
  • 360
  • SSM
  • Keniu
  • ESET
  • NOD32
  • Skynet Firewall
  • Kingsoft
  • Norton
  • Rising AV
  • Kaspersky
  • JingMin kav
  • Mcafee
  • BitDefender
  • AntiVir
  • TrendMicro
  • Avira
  • Dr Web
  • Avast
  • Sophos
  • Zone Alarm
  • Panda Security

They also used this code snippet to detect Adobe Acrobat Reader (English, Chinese and Taiwanese versions).

Finally, they were also able to list the patches installed on the Microsoft platform using a predefined list of patch numbers:

2. Microsoft XMLDOM ActiveX control information disclosure vulnerability

Another technique we found is being used by the Deep Panda actors. They usually use this code in waterholing campaigns to detect specific software installed on the intended victim's system. It exploits the XMLDOM ActiveX control to check for the presence of multiple files and folders:
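Here too, the snippet was reproduced only as an image in the original post; the underlying trick is usually along the following lines (an illustrative sketch – the exact paths probed and the error-code handling varied between campaigns):

  // ask Microsoft.XMLDOM to resolve a DOCTYPE pointing at a local path; the
  // resulting parse error code differs depending on whether the path exists
  function checkPath(path) {
      var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
      xmlDoc.async = false;
      xmlDoc.loadXML('<?xml version="1.0"?><!DOCTYPE x SYSTEM "' + path + '">');
      return xmlDoc.parseError.errorCode;
  }
  // example call (the path is illustrative):
  // checkPath('res://c:\\Program Files\\VMware\\VMware Tools\\TPAutoConnSvc.exe')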

This vulnerability was disclosed last year and it affects Internet Explorer versions 6 through 11 running on Windows through version 8.1.

Software enumerated includes most of the Antivirus and endpoint security products on the market:

  • 7z
  • AhnLab_V3
  • BkavHome
  • Java
  • COMODO
  • Dr.Web
  • ESET-SMART
  • ESTsoft
  • F-PROT
  • F-Secure
  • Fortinet
  • IKARUS
  • Immunet
  • JiangMin
  • Kaspersky_2012
  • Kaspersky_2013
  • Kaspersky_Endpoint_Security_8
  • Mse
  • Norman
  • Norton
  • Nprotect
  • Outpost
  • PC_Tools
  • QuickHeal
  • Rising
  • Rising_firewall
  • SQLServer
  • SUPERAntiSpyware
  • Sunbelt
  • Symantec_Endpoint12
  • Trend2013
  • ViRobot4
  • VirusBuster
  • WinRAR
  • a-squared
  • antiyfx
  • avg2012
  • bitdefender_2013
  • eScan
  • eset_nod32
  • f-secure2011
  • iTunes
  • mcafee-x64
  • mcafee_enterprise
  • north-x64
  • sophos
  • symantec-endpoint
  • systemwaler
  • systemwaler-cap
  • trend
  • trend-x64
  • var justforfunpath
  • vmware-client
  • vmware-server
  • winzip

3. More XMLDOM vulnerabilities

At the beginning of the year we found a different method being used in combination with a zero-day vulnerability affecting Internet Explorer (CVE-2014-0322) targeting the French Aerospace Association. In that case we found the following code snippet.

The attackers were using a similar technique to detect whether EMET was present on the system. If EMET was detected, they didn’t trigger the exploit, since EMET was able to block it, alert the user to the zero-day, and diminish the attacker's effectiveness.

A month after the exploit code was made public we detected the same technique being used in the Angler Exploit Kit. They were using it to detect Kaspersky Antivirus.

In recent samples of the Angler Exploit Kit we have seen an improved version where they added detection for TrendMicro products.

In this blog post we have given an overview of the different techniques attackers are using to enumerate software running on a remote system. These techniques can give attackers information that they can use in future attacks to exploit certain vectors based on the software running (or not running) on a system. In addition, we've illustrated ways in which cybercriminals have adapted and copied techniques used by more advanced attackers for their own purposes.

References:

Vulnerability in Internet Explorer 10.1

XMLDOM vulnerability

URI Use and Abuse

Angler Exploit Kit 

       

25 July, 2014 02:25PM

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Bringing fluid motion to browsing

In the previous Blog Post, we looked at how we use the Recency principle to redesign the experience around bookmarks, tabs and history.
In this blog post, we look at how the new Ubuntu Browser makes the UI fade to the background in favour of the content. The design focuses on physical impulse familiarity – “muscle memory” – by marrying simple gestures to the two key browser tasks, making the experience feel as fluid and simple as flipping through a magazine.

 

Creating a new tab

For all new browsers, the approach to the URI Top Bar that enables searching as well as manual address entry has made the “new tab” function more central to the experience than ever. In addition, evidence suggests that opening a new tab is the third most frequently used action in a browser. To facilitate this, we made opening a new tab effortless and even (we think) a bit fun.
By pulling down anywhere on the current page, you activate a spring-loaded “new tab” feature that appears under the address bar of the page. Keep dragging far enough, and you’ll see a new blank page coming into view. If you release at this stage, a new tab will load, ready with the address bar and keyboard open as well as an easy way to get to your bookmarks. But if you change your mind, just drag the current page back up or release early, and your current page comes back.

http://youtu.be/zaJkNRvZWgw

 

Get to your open tabs and recently visited sites

Pulling the current page downward creates a new blank tab; conversely, dragging the bottom edge upward shows your already open tabs, ordered by recency, echoing the right-edge “open apps” view.

If you keep on dragging upward without releasing, you can dig even further into the past with your most recently visited pages grouped by site in a “history” list. By grouping under the site domain name, it’s easier to find what you’re looking for without thumbing through hundreds of individual page URLs. However, if you want all the detail, tap an item in the list to see your complete history.

It’s not easy to improve upon such a well-worn application as the browser, it’s true. We’re hopeful that by adding new fluidity to creating, opening and switching between tabs, our users will find that this browsing experience is simpler to use, especially with one hand, and feels more seamless and fluid than ever.

 

 

25 July, 2014 11:42AM

Kubuntu: Kubuntu 14.04 LTS Update Out

The first update to our LTS release 14.04 is out now. This contains all the bugfixes added to 14.04 since its first release in April. Users of 14.04 can run the normal update procedure to get these bugfixes.

See the 14.04.1 release announcement.

Download 14.04.1 images.

25 July, 2014 10:35AM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: The Pak Utian I Know

Yesterday, on 24 July 2014, social media was abuzz over a reckless blog full of slander related to the 2014 presidential election. One of my fellow BlankOn developers, Pak Utian Ayuba, became a victim of that blog. From skimming its contents I already doubted its truthfulness, and because I doubted what that blog wrote, what immediately came to mind

25 July, 2014 05:56AM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Edubuntu: Edubuntu 14.04.1 Release Announcement

Edubuntu Long-Term Support

The Edubuntu team is proud to announce Edubuntu 14.04.1 LTS, which is the first Long Term Support (LTS) update in the Edubuntu 14.04 five-year support cycle.

This point release includes all the bug fixes and improvements that have been applied via updates to Edubuntu 14.04 LTS since it was released. It also includes updated hardware support and installer fixes. If you have an Edubuntu 14.04 LTS system and have applied all the available updates, then your system will already be on 14.04.1 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it will be installable on more systems than before and will require fewer updates than installing from the original 14.04 LTS media.

  • Information on where to download the Edubuntu 14.04.1 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are third-party distributors who ship discs at reasonable prices, listed on the Edubuntu Marketplace

To ensure that the Edubuntu 14.04 LTS series will continue to work on the latest hardware as well as keeping quality high right out of the box, further point releases will be made available during its lifecycle. More information will be available on the release schedule page on the Ubuntu wiki.

See also

Thanks for your support and interest in Edubuntu!

25 July, 2014 04:46AM

hackergotchi for Maemo developers

Maemo developers

Firefox for Android: Collecting and Using Telemetry

Firefox for Mobile
Firefox for Android: Collecting and Using Telemetry - http://starkravingfinkle.org/blog...

25 July, 2014 03:08AM by Midgard Administrator (dev@midgard-project.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Kügler: Plasma’s Road to Wayland

Road to Wayland

With the Plasma 5.0 release out the door, we can lift our heads a bit and look forward, instead of just looking at what’s directly ahead of us and making that work by fixing bug after bug. One of the important topics which we have (kind of) excluded from Plasma’s recent 5.0 release is support for Wayland. The reason is that much of the work that has gone into renovating our graphics stack was also needed in preparation for Wayland support in Plasma. In order to support Wayland systems properly, we needed to lift the software stack to Qt 5 and make the X11 dependencies in our underlying libraries, Frameworks 5, optional. This part is pretty much done. We now need to ready support for non-X11 systems in our workspace components: the window manager and compositor, and the workspace shell.

Let’s dig a bit deeper and look at the aspects underlying, and resulting from, this transition.

Why Wayland?

The short answer to this question, from a Plasma perspective, is:

  • Xorg lacks modern interfaces and protocols; instead, it carries a lot of ballast from the past. This makes it complex and hard to work with.
  • Wayland offers much better graphics support than Xorg, especially in terms of rendering correctness. X11’s asynchronous rendering makes it impossible to be sure about the correctness and timeliness of the graphics that end up on screen. Instead, Wayland provides the guarantee that every frame is perfect.
  • Security considerations. It is almost impossible to shield applications properly from each other; X11 allows applications to wiretap each other’s input and output. This makes it a security nightmare.

I could go deeply into the history of Xorg, and add lots of technicalities to that story, but instead of giving you a huge swath of text, hop over to Youtube and watch Daniel Stone’s presentation “The Real Story Behind Wayland and X” from last year’s LinuxConf.au, which gives you all the information you need, in a much more entertaining way than I could present it. H-Online also has an interesting background story “Wayland — Beyond X”.

While Xorg is a huge beast that does everything, like input, printing, graphics (in many different flavours), Wayland is limited by design to the use-cases we currently need X for, without the ballast.
With all that in mind, we need to respect our elders and acknowledge Xorg for its important role in the history of graphical Linux, but we also need to look beyond it.

What is Wayland support?

KDE Frameworks 5 apps under Weston

Without communicating our goal, we might think of entirely different things when talking about Wayland support. Will Wayland retire X? I don’t think it will in the near future; the point where we can stop caring for X11-based setups is likely still a number of years away, and I would not be surprised if X11 was still a pretty common thing to find in enterprise setups ten years down the road from now. Can we stop caring about X11? Surely not, but what does this mean for Wayland? The answer is that support for Wayland will be added, and that X11 will no longer be required to run a Plasma desktop, but that it will be possible to run Plasma (and apps) under both X11 and Wayland systems. This, I believe, is the migration process that serves our users best, as the question “When can I run Plasma on Wayland?” can then be answered on an individual basis, and nobody is going to be thrown in at the deep end (at least not by us; your distro might still decide not to offer support for X11 anymore, but that is not in our hands). To me, while a quick migration to Wayland (once ready) is desirable, realistically people will be running Plasma on X11 for years to come. Wayland can be offered as an alternative at first, and then promoted to the primary platform once the whole stack matures further.

Where are we now?

With the release of KDE Frameworks 5, most of the issues in our underlying libraries have been ironed out; that means X11-dependent codepaths have become optional. Today, it’s possible to run most applications built on top of Frameworks 5 under a Wayland compositor, independent of X11. This means that applications can run under both X11 and Wayland with the same binary. This is already really cool, as without applications, having a workspace (which in a way is the glue between applications) would be a pointless endeavour. This chicken-and-egg situation plays both ways, though: without a workspace environment, just having apps run under Wayland is not all that useful. This video shows some of our apps under the Weston compositor. (This is not a pure Wayland session “on bare metal”, but one running in an X11 window in my Plasma 5 session for the purpose of the screen recording.)

For a full-blown workspace, the porting situation is a bit different, as the workspace interacts much more intimately with the underlying display server than applications do at this point. These interactions are well-hidden behind the Qt platform abstraction. The workspace provides the host for rendering graphics onto the screen (the compositor) and the machinery to start and switch between applications.

We are currently missing a number of important pieces of the full puzzle: Interfaces between the workspace shell, the compositor (KWin) and the display server are not yet well-defined or implemented, some pioneering work is ahead of us. There is also a number of workspace components that need bigger adjustments, global shortcut handling being a good example. Most importantly, KWin needs to take over the role of Wayland compositor. While some support for Wayland has already been added to KWin, the work is not yet complete. Besides KWin, we also need to add support for Wayland to various bits of our workspace. Information about attached screens and their layout has to be made accessible. Global keyboard shortcuts only support X11 right now. The screen locking mechanism needs to be implemented. Information about Windows for the task-manager has to be shared. Dialog positioning and rendering needs to be ported. There are also a few assumptions in startkde and klauncher that currently prevent them from being able to start a session under Wayland, and more bits and pieces which need additional work to offer a full workspace experience under Wayland.

Porting Strategy

The idea is to be able to run the same binaries under both X11 and Wayland. This means that we need to decide at runtime how to interact with the windowing system. The following strategy is useful (in descending order of preference):

  • Use abstract Qt and Frameworks (KF5) APIs
  • Use XCB when there are no suitable Qt and KF5 APIs
  • Decide at runtime whether to call X11-specific functions

In case we have to resort to functions specific to a display server, X11 should be optional both at build-time and at run-time:

  • Make the build of X11-dependent code optional. This can be done through plugins, which are optionally included by the build system, or (less desirably) by #ifdef’ing blocks of code.
  • Even with X11 support built into the binary, calls into X11-specific libraries should be guarded at runtime (QX11Info::isPlatformX11() can be used to check at runtime).

Get your Hands Dirty!

Computer graphics are an exciting thing, and many of us are longing for the day they can remove X11 from their systems. This day will eventually come, but it won’t come by itself. It’s a very exciting time to get involved, and make the migration happen. As you can see, we have a multitude of tasks that need work. An excellent first step is to build the thing on your system and try running, fix issues, and send us patches. Get in touch with us on Freenode’s #plasma IRC channel, or via our mailing list plasma-devel(at)kde.org.

25 July, 2014 02:28AM

The Fridge: Ubuntu 14.04.1 LTS released

The Ubuntu team is pleased to announce the release of Ubuntu 14.04.1 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 14.04 LTS.

Kubuntu 14.04.1 LTS, Edubuntu 14.04.1 LTS, Xubuntu 14.04.1 LTS, Mythbuntu 14.04.1 LTS, Ubuntu GNOME 14.04.1 LTS, Lubuntu 14.04.1 LTS, Ubuntu Kylin 14.04.1 LTS, and Ubuntu Studio 14.04.1 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Core, Ubuntu Kylin, Edubuntu, and Kubuntu. All the remaining flavours will be supported for 3 years.

To get Ubuntu 14.04.1

In order to download Ubuntu 14.04.1, visit:

http://www.ubuntu.com/download

Users of Ubuntu 12.04 will soon be offered an automatic upgrade to 14.04.1 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/TrustyUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 14.04.1 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Jul 25 01:35:00 UTC 2014 by Adam Conrad

25 July, 2014 02:11AM

July 24, 2014

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 0.3.0 is now C10K ready!

“Release early, release often” has never been my strength, especially since I don’t schedule fairly across all the projects I’m involved in…

So, since Cutelyst needed more polish, I took more time on it, and the result is great: around a 100% speed-up and a few new features added.

The Cutelyst uWSGI plugin now has support for --thread, which will create a QThread to process each request. However, I strongly discourage its usage in Cutelyst: the performance is ~7% worse, a crash in your code will break other requests, and as of now ASYNC mode is not supported in threaded mode due to a limitation in the uWSGI request queue.

Thanks to valgrind I managed to take a hello world application from 5K requests per second on 0.2.0 to 10K req/s on 0.3.0 on an Intel Core2Duo 2.4 GHz (or 44K req/s on an AMD Phenom II 965 x4 3.4 GHz). However, if you enable Grantlee templating you get around 600 req/s, so if I happen to have time I will be looking into improving its performance.

Response::body() is now a QIODevice, so you can set a QFile* or something else and have Cutelyst send it back.

Now http://cutelyst.org points to a gitorious wiki which is slowly getting populated, and the API documentation is available at http://api.cutelyst.org.

The 0.3.0 tarball can be downloaded here

Have fun :)


24 July, 2014 11:18PM by dantti

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S07E17 – The One with the Chicken Pox

Tony Whitmore and Laura Cowen are in Studio L, Alan Pope is AWOL, and Mark Johnson Skypes in from his sick bed for Season Seven, Episode Seventeen of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be interviewing Graham Binns about the MAAS project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

24 July, 2014 07:30PM

Arthur Schiwon: In Kazan? Me too, join my ownCloud talk!

Currently I am enjoying my summer vacation. Vacation is when you do non-stop activities that are fun, no matter whether they are more relaxing or challenging, and usually in a different place. So I am going to take the opportunity to visit Kazan, and furthermore I am taking the other opportunity and will give an ownCloud talk (btw, did you hear, ownCloud 7 was released?) at the local hackerspace, FOSS Labs. Due to my lack of Russian I will stick to English, however ;)

So, if you are there and interested in ownCloud please save the date:

Monday, July 28th, 18:00
FOSS Labs
Universitetskaya 22, of. 114
420111 Kazan, Russia

Thank you, FOSS Labs and especially Mansur Ziatdinov, for making this possible. I am very excited not only to share information with you, but also to learn and get to know the local (FOSS) culture!

Picture: Kazan Kremlin, derived from Skyline of Kazan city by TY-214.

24 July, 2014 07:20PM

Oli Warner: Converting an existing Ubuntu Desktop into a Chrome kiosk

You might already have Ubuntu Desktop installed and you might want to just run one application without stripping it down. This article should give you a decent idea how to convert a stock Desktop/Unity install into a single-application computer.

This follows straight on from today's other article on building a kiosk computer with Ubuntu and Chrome [from scratch]. In my mind that's the perfect setup: low fat and speedy... But we don't always get it right first time. You might have already been battling with a full Ubuntu install and not have the time to strip it down.

This tutorial assumes you're starting with an Ubuntu desktop, all installed with working network and graphics. While we're in graphical-land, you might as well go and install Chrome.

I have tested this in a clean 14.04 install but be careful. Back up any important data before you commit.

sudo apt update
sudo apt install --no-install-recommends openbox

sudo install -b -m 755 /dev/stdin /opt/kiosk.sh <<- EOF
  #!/bin/bash

  xset -dpms
  xset s off
  openbox-session &

  while true; do
    rm -rf ~/.{config,cache}/google-chrome/
    google-chrome --kiosk --no-first-run  'http://thepcspy.com'
  done
EOF

sudo install -b -m 644 /dev/stdin /etc/init/kiosk.conf <<- EOF
  start on (filesystem and stopped udevtrigger)
  stop on runlevel [06]

  emits starting-x
  respawn

  exec sudo -u $USER startx /etc/X11/Xsession /opt/kiosk.sh --
EOF

sudo dpkg-reconfigure x11-common  # select Anybody

echo manual | sudo tee /etc/init/lightdm.override  # disable desktop

sudo reboot

This should boot you into a browser looking at my home page (use sudoedit /opt/kiosk.sh to change that), but broadly speaking, we're done.

If you ever need to get back into the desktop you should be able to run sudo start lightdm. It'll probably appear on VT8 (Control+Alt+F8 to switch).

Why wouldn't I always do it this way?

I'll freely admit that I've done farts longer than it took to run the above. Starting from an Ubuntu Desktop base does do a lot of the work for us, however it is demonstrably flabbier:

  • The Server result was 1.6GB, using 117MB RAM with 38 processes.
  • The Desktop result is 3.7GB, using 294MB RAM with 80 processes!

Yeah, the Desktop is still loading a number of udisks mount helpers, PulseAudio, GVFS, Deja Dup, Bluetooth daemons, volume controls, Ubuntu One, CUPS the printer server and all the various Network and Modem Manager things a traditional desktop needs.

This is the reason you base your production model off Ubuntu Server (or even Ubuntu Minimal).

And remember that you aren't done yet. There's a big list of boring jobs to do before it's Martini O'Clock.

Just remember that everything I said about physical and network security last time applies doubly here. Ubuntu-proper ships a ton of software on its 1GB image and quite a lot more of that will be running, even after we've disabled the desktop. You're going to want to spend time stripping some of that out and putting in place any security you need to stop people getting in.
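
If you're looking for a starting point, here's a rough sketch of the kind of trimming I mean; the exact package and job names depend on what your image actually ships, so treat these as examples rather than a recipe:

# Keep a service installed but stop it starting, using the same upstart
# override trick we used for lightdm above (job names are examples)
echo manual | sudo tee /etc/init/cups.override
echo manual | sudo tee /etc/init/bluetooth.override

# Or purge things a kiosk will never need (package names are examples)
sudo apt-get purge deja-dup
sudo apt-get autoremove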

Just be careful and conscientious about how you deploy software.

24 July, 2014 04:16PM

Ubuntu Scientists: Who We Are: Svetlana Belkin, Admin/Founder

Welcome all to the first of many “Who We Are” posts. These posts will introduce you to the members of our team. We will start with Svetlana Belkin, the founder and admin of the team:

I am Svetlana Belkin (A.K.A. belkinsa everywhere in the Ubuntu community and Mechafish on the Ubuntu Forums), and I am getting my BS in biology with molecular sciences as my focus at the University of Cincinnati. I have used Ubuntu since 2009, but the only “scientific” program that I have used is Ugene. But hopefully, I will get to use more in my field.


Filed under: Who We Are Tagged: Svetlana Belkin, Ubuntu Forums, University of Cincinnati

24 July, 2014 03:48PM

Oli Warner: Building a kiosk computer with Ubuntu 14.04 and Chrome

Single-purpose kiosk computing might seem scary and industrial but thanks to cheap hardware and Ubuntu, it's an increasingly popular idea. I'm going to show you how and it's only going to take a few minutes to get to something usable.

Hopefully we'll do better than the image on the right.

We're going to be running a very light stack of X, Openbox and the Google Chrome web browser to load a specified website. The website could be local files on the kiosk or remote. It could be interactive or just an advertising roll. Of course you could load any standalone application. XBMC for a media centre, Steam for a gaming machine, Xibo or Concerto for digital signage. The possibilities are endless.

The whole thing takes less than 2GB of disk space and can run on 512MB of RAM.

Update: If you've already installed, read this companion tutorial if you want to convert an existing Ubuntu Desktop install to a kiosk.

Step 1: Installing Ubuntu Server

I'm picking the Server flavour of Ubuntu for this. It's all the nuts-and-bolts of regular Ubuntu without installing a load of flabby graphical applications that we're never ever going to use.

It's free to download. I would suggest 64-bit if your hardware supports it, and I'm going with the latest LTS (14.04 at the time of writing). Sidebar: if you've never tested your kiosk's hardware in Ubuntu before, it might be worth downloading the Desktop Live USB, burning it and checking everything works.

Just follow the installation instructions. Burn it to a USB stick, boot the kiosk to it and go through. I just accepted the defaults and when asked:

  • Set my username to user and set a hard-to-guess, strong password.
  • Enabled automatic updates
  • At the end when tasksel ran, opted to install the SSH server task so I could SSH in from a client that supported copy and paste!

After you reboot, you should be looking at a Ubuntu 14.04 LTS ubuntu tty1 login prompt. You can either SSH in (assuming you're networked and you installed the SSH server task) or just log in.

The installer auto-configures an ethernet connection (if one exists) so I'm going to assume you already have a network connection. If you don't or want to change to wireless, this is the point where you'd want to use nmcli to add and enable your connection. It'll go something like this:

sudo apt install network-manager
sudo nmcli dev wifi con <SSID> password <password>

Later releases should have nmtui which will make this easier but until then you always have man nmcli :)

Step 2: Install all the things

We obviously need a bit of extra software to get up and running but we can keep this fairly compact. We need to install:

  • X (the display server) and some scripts to launch it
  • A lightweight window manager to enable Chrome to go fullscreen
  • Google Chrome

We'll start by adding the Google-maintained repository for Chrome:

sudo add-apt-repository 'deb http://dl.google.com/linux/chrome/deb/ stable main'
wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Then update our packages list and install:

sudo apt update
sudo apt install --no-install-recommends xorg openbox google-chrome-stable

If you omit --no-install-recommends you will pull in hundreds of megabytes of extra packages that would normally make life easier but in a kiosk scenario, only serve as bloat.

Step 3: Loading the browser on boot

I know we've only been going for about five minutes but we're almost done. We just need two little scripts.

Run sudoedit /opt/kiosk.sh first. This is going to be what loads Chrome once X has started. It also needs to wipe the Chrome profile so that between loads you aren't persisting stuff. This is incredibly important for kiosk computing because you never want a user to be able to affect the next user. We want them to start with a clean environment every time. Here's where I've got to:

#!/bin/bash

xset -dpms
xset s off
openbox-session &

while true; do
  rm -rf ~/.{config,cache}/google-chrome/
  google-chrome --kiosk --no-first-run  'http://thepcspy.com'
done

When you're done there, Control+X to exit and run sudo chmod +x /opt/kiosk.sh to make the script executable. Then we can move onto starting X (and loading kiosk.sh).

Run sudoedit /etc/init/kiosk.conf and this time fill it with:

start on (filesystem and stopped udevtrigger)
stop on runlevel [06]

console output
emits starting-x

respawn

exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --

Replace user with your username. Exit, Control+X, save.

X still needs some root privileges to start. These are locked down by default but we can allow anybody to start an X server by running sudo dpkg-reconfigure x11-common and selecting "Anybody".

After that we should be able to test. Run sudo start kiosk (or reboot) and it should all come up.

One last problem to fix is the amount of garbage it prints to screen on boot. Ideally your users will never see it boot but when it does, it's probably better that it doesn't look like the Matrix. A fairly simple fix, just run sudoedit /etc/default/grub and edit so the corresponding lines look like this:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

Save and exit that and run sudo update-grub before rebooting.
The monitor should remain on indefinitely.

Final step: The boring things...

Technically speaking we're done; we have a kiosk and we're probably sipping on a Martini. I know, I know, it's not even midday, we're just that good... But there are extra things to consider before we let a grubby member of the public play with this machine:

  • Can users break it? Open keyboard access is generally a no-no. If they need a keyboard, physically disable keys so they only have what they need. I would disable all the F* keys along with Control, Alt, Super... If they have a standard mouse, right click will let them open links in new windows and tabs and OMG this is a nightmare. You need to limit user-input.

  • Can it break itself? Does the website you're loading have anything that's going to try and open new windows/tabs/etc? Does it ask for any sort of input that you aren't allowing users? Perhaps a better question to ask is Can it fix itself? Consider a mechanism for rebooting that doesn't involve a phone call to you.

  • Is it physically secure? Hide and secure the computer. Lock the BIOS. Ensure no access to USB ports (fill them if you have to). Disable recovery mode. Password protect Grub and make sure it stays hidden (especially with open keyboard access).

  • Is it network secure? SSH is the major ingress vector here, so follow some basic tips: at the very least move it to another port, only allow key-based authentication, install fail2ban and make sure fail2ban is telling you about failed logins (a quick sketch follows this list).

  • What if Chrome is hacked directly? What if somebody exploited Chrome and had command-level access as user? Well first of all, you can try to stop that happening with AppArmor (should still apply) but you might also want to change things around so that the user running X and the browser doesn't have sudo access. I'd do that by adding a new user and changing the two scripts accordingly.

  • How are you maintaining it? Automatic updates are great but what if that breaks everything? How will you access it in the field to maintain it if (for example) the network dies or there's a hardware failure? This is aimed more at the digital signage people than simple kiosks but it's something to consider.
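
Here's a minimal sketch of that SSH hardening, assuming stock OpenSSH on 14.04; the port number and settings are examples, so adapt them to your environment:

# Edit the SSH daemon config
sudoedit /etc/ssh/sshd_config
#   Port 2222                    # move off the default port
#   PasswordAuthentication no    # key-based logins only
#   PermitRootLogin no

# Apply the change (ssh is an upstart job on 14.04)
sudo restart ssh

# Ban repeated failed logins and get told about them
sudo apt-get install fail2ban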

You can mitigate a lot of the security issues by having no live network (just displaying local files) but this obviously comes at the cost of maintenance. There's no one good answer for that.

Photo credit: allegr0/Candace

24 July, 2014 10:36AM

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺
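
For reference, and outside of vim, the command-line invocations behind this workflow look roughly like the following (file names are placeholders; they mirror the shortcuts in the vim config below):

# Markdown -> PDF, going through an intermediate LaTeX document
pandoc report.md -o report.pdf

# Markdown -> beamer slides as PDF
pandoc -t beamer slides.md -o slides.pdf

# Markdown -> plain LaTeX source, if you want to inspect or post-process it
pandoc report.md -o report.tex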

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

24 July, 2014 09:38AM

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mouses
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (a conceptual sketch follows this list)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
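
To make those bullets concrete, here is a conceptual sketch only; the real pollinate script (see lp:pollinate) adds the challenge/response, certificate pinning and proper error handling, so don't mistake this for the shipped implementation:

# Conceptual sketch, NOT the actual pollinate code
POOL="https://entropy.ubuntu.com/"      # one or more entropy servers
for server in $POOL; do
    # fetch a random seed and stir it into the kernel's entropy pool
    curl -s "$server" > /dev/urandom
done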

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate (see the sketch after this list)
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
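
As a sketch, the pool configuration might look something like this; check the file shipped by the pollinate package for the exact variable name and defaults, and note that the second URL is a made-up internal example:

# /etc/default/pollinate (sketch)
# Space-separated list of entropy servers contacted at boot
POOL="https://entropy.ubuntu.com/ https://entropy.internal.example.com/"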

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affects this quantity estimate (a quick check is sketched after this list)!
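
A quick, low-tech way to convince yourself of that last point (the counter fluctuates on its own, so treat it only as a rough check):

# The kernel's conservative estimate of available entropy, in bits
cat /proc/sys/kernel/random/entropy_avail

# Writing into /dev/urandom stirs the pool but credits no entropy...
echo "some seed material" > /dev/urandom

# ...so the estimate does not jump up as a result
cat /proc/sys/kernel/random/entropy_avail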

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

24 July, 2014 02:15AM by Dustin Kirkland (noreply@blogger.com)

July 23, 2014

hackergotchi for Grml developers

Grml developers

Michael Prokop: Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he’s working for Docker Inc – released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, jey.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command-line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
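
For illustration, those two tricks look roughly like this; the repository path and the timestamp are placeholders, not examples taken from the book:

# Build directly from a git repository that has a Dockerfile at its root
docker build github.com/example/project

# The cache-busting trick: a Dockerfile line such as
#   ENV REFRESHED_AT 2014-07-23
# gets its value bumped whenever you want the cache invalidated from
# that instruction onwards.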

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend to have network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options, …). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, which is another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

23 July, 2014 08:16PM

hackergotchi for Ubuntu developers

Ubuntu developers

Matthew Helmke: Open Source Resources Sale

I don’t usually post sales links, but this sale by InformIT involves my two Ubuntu books along with several others that I know my friends in the open source world would be interested in.

Save 40% on recommended titles in the InformIT OpenSource Resource Center. The sale ends August 8th.

23 July, 2014 05:40PM

Michael Hall: Why do you contribute to open source?

It seems a fairly common, straightforward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around:  Why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?  I don’t mean your intent, I’ll assume that you want to recognize contributions, I mean do you have the processes and people in place to give it?

We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important, letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency; not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

23 July, 2014 12:00PM

Andrew Pollock: [tech] Going solar

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

23 July, 2014 05:36AM

Serge Hallyn: rsync.net feature: subuids

The problem: Some time ago, I had a server “in the wild” from which I
wanted some data backed up to my rsync.net account. I didn’t want to
put sensitive credentials on this server in case it got compromised.

The awesome admins at rsync.net pointed out their subuid feature. For
no extra charge, they’ll give you another uid, which can have its own
ssh keys, whose home directory is symbolically linked under your main
uid’s home directory. So the server can rsync backups to the subuid,
and if it is compromised, attackers cannot get at any info which didn’t
originate from that server anyway.
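
To give an idea of how that might look from the backed-up server's side, here is
a hedged sketch; the key path, subuid account name and rsync.net hostname are all
placeholders, and rsync.net's own instructions take precedence:

# A dedicated, passphrase-less key used only for pushing backups
ssh-keygen -t rsa -f ~/.ssh/backup_subuid -N ""

# After the public key has been registered with the subuid account:
rsync -az -e "ssh -i ~/.ssh/backup_subuid" \
    /srv/data/ subuid1234@usw-s001.rsync.net:backups/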

Very nice.


23 July, 2014 04:02AM

July 22, 2014

hackergotchi for Tails

Tails

Tails 1.1 is out

Tails, The Amnesic Incognito Live System, version 1.1, is out.

All users must upgrade as soon as possible: this release fixes numerous security issues.

Changes

Notable user-visible changes include:

  • Rebase on Debian Wheezy

    • Upgrade literally thousands of packages.
    • Migrate to GNOME3 fallback mode.
    • Install LibreOffice instead of OpenOffice.
  • Major new features

    • UEFI boot support, which should make Tails boot on modern hardware and Mac computers.
    • Replace the Windows XP camouflage with a Windows 8 camouflage.
    • Bring back VirtualBox guest modules, installed from Wheezy backports. Full functionality is only available when using the 32-bit kernel.
  • Security fixes

    • Fix write access to boot medium via udisks (ticket #6172).
    • Upgrade the web browser to 24.7.0esr-0+tails1~bpo70+1 (Firefox 24.7.0esr + Iceweasel patches + Torbrowser patches).
    • Upgrade to Linux 3.14.12-1 (fixes CVE-2014-4699).
    • Make persistent file permissions safer (ticket #7443).
  • Bugfixes

    • Fix quick search in Tails Greeter's Other languages window (Closes: ticket #5387)
  • Minor improvements

    • Don't install Gobby 0.4 anymore. Gobby 0.5 has been available in Debian since Squeeze, now is a good time to drop the obsolete 0.4 implementation.
    • Require a bit less free memory before checking for upgrades with Tails Upgrader. The general goal is to avoid displaying "Not enough memory available to check for upgrades" too often due to over-cautious memory requirements checked in the wrapper.
    • Whisperback now sanitizes attached logs better with respect to DMI data, IPv6 addresses, and serial numbers (ticket #6797, ticket #6798, ticket #6804).
    • Install the BookletImposer PDF imposition toolkit.

See the online Changelog for technical details.

Known issues

  • Users of persistence must log in at least once with persistence enabled read-write after upgrading to 1.1 to see their settings updated.

  • Upgrading from ISO from Tails 1.1~rc1, Tails 1.0.1, or earlier is a bit more complicated than usual. Either follow the instructions to upgrade from ISO, or burn a DVD, start Tails from it, and use "Clone and Upgrade".

  • The automatic upgrade from Tails 1.1~rc1 is a bit more complicated than usual. Either follow the instructions to apply the automatic upgrade, or do a full upgrade.

  • A persistent volume created with Tails 1.1~beta1 cannot be used with Tails 1.1 or later. Worse, trying this may freeze Tails Greeter.

  • Tails 1.1 does not start in some virtualization environments, such as QEMU 0.11.1 and VirtualBox 4.2. This can be corrected by upgrading to QEMU 1.0 or VirtualBox 4.3, or newer (ticket #7232).

  • The web browser's JavaScript performance may be severely degraded (ticket #7127). Please let us know if you are experiencing this to a level where it is problematic.

  • Longstanding known issues.

I want to try it or to upgrade!

Go to the download page.

As no software is ever perfect, we maintain a list of problems that affect the latest release of Tails.

What's coming up?

The next Tails release is scheduled for September 2.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If so, come talk to us!

How to upgrade from ISO?

These steps allow you to upgrade a device installed with Tails Installer from Tails 1.0.1, Tails 1.1~beta1 or earlier, to Tails 1.1.

  1. Start Tails from a DVD, USB stick, or SD card other than the device that you want to upgrade.

  2. Set an administration password.

  3. Run this command in a Root Terminal to install the latest version of Tails Installer:

    echo "deb http://deb.tails.boum.org/ 1.1 main" \
        > /etc/apt/sources.list.d/tails-upgrade.list && \
        apt-get update && \
        apt-get install liveusb-creator
    
  4. Follow the usual instructions to upgrade from ISO, except the first step.

How to automatically upgrade from Tails 1.1~rc1?

These steps allow you to automatically upgrade a device installed with Tails Installer from Tails 1.1~rc1 to Tails 1.1.

  1. Start Tails 1.1~rc1 from the device you want to upgrade.

  2. Set an administration password.

  3. Run this command in a Terminal to apply the automatic upgrade:

    echo 'TAILS_CHANNEL="stable-fixed"' | sudo tee --append /etc/os-release && \
      cd / && tails-upgrade-frontend-wrapper
    

22 July, 2014 07:45PM

On 0days, exploits and disclosure

A recent tweet from Exodus Intel (a company based in Austin, Texas) generated quite some noise on the Internet:

"We're happy to see that TAILS 1.1 is being released tomorrow. Our multiple RCE/de-anonymization zero-days are still effective. #tails #tor"

Tails ships a lot of software, from the Linux kernel to a fully functional desktop, including a web browser and a lot of other programs. Tails also adds a bit of custom software on top of this.

Security issues are discovered every month in a few of these programs. Some people report such vulnerabilities, and then they get fixed: This is the power of free and open source software. Others don't disclose them, but run lucrative businesses by weaponizing and selling them instead. This is not new and comes as no surprise.

We were not contacted by Exodus Intel prior to their tweet. In fact, a more irritated version of this text was ready when we finally received an email from them. They informed us that they would provide us with a report within a week. We're told they won't disclose these vulnerabilities publicly before we have corrected them and Tails users have had a chance to upgrade. We think that this is the right process for responsible disclosure, and we're really looking forward to reading this report.

Being fully aware of this kind of threat, we're continuously working on improving Tails' security in depth. Among other tasks, we're working on a tight integration of AppArmor in Tails, kernel and web browser hardening, and sandboxing, just to name a few examples.
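As a generic illustration (not something specific to Tails), on a Debian-based system you can check which AppArmor profiles are currently loaded and whether they are enforced with the apparmor userspace tools:

    # List loaded AppArmor profiles and the processes they confine;
    # aa-status ships with the apparmor packages and needs root.
    sudo aa-status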

We are happy about every contribution which protects our users further from de-anonymization and helps them to protect their private data, investigations, and their lives. If you are a security researcher, please audit Tails, Debian, Tor or any other piece of software we ship. To report or discuss vulnerabilities you discover, please get in touch with us by sending email to tails@boum.org.

If you would like to contribute to Tails and help defend privacy, please join us!

22 July, 2014 07:40PM