October 20, 2014

Ubuntu

Happy 10th Birthday Ubuntu!

10 years ago today, Mark Shuttleworth made the 4th post ever to the ubuntu-announce mailing list when he wrote: Announcing Ubuntu 4.10 “The Warty Warthog Release”

In this announcement, Mark wrote:

Ubuntu is a new Linux distribution that brings together the extraordinary breadth of Debian with a fast and easy install, regular releases (every six months), a tight selection of excellent packages installed by default and a commitment to security updates with 18 months of security and technical support for every release.

So it is with much excitement that the Ubuntu News team wishes Ubuntu a happy 10th Birthday!

Ubuntu cake

Over the years we’ve had several cakes celebrating releases; here is a sampling we found on Flickr, first from the 8.04 release party in London:

ubuntu cake

And an amazing trio from Kitchener-Waterloo, Ontario, Canada for 9.10, 10.10 and 11.04:

Ubuntu 9.10: Karmic Koala Release Party

And dozens of strictly Ubuntu logo cakes over the years (this one from 2006):

Ubuntu cake!!

With the release of 14.10 just days away, enjoy your release parties and perhaps take some time to reflect upon how far we’ve come in these 10 years!

Posted by Elizabeth K. Joseph, on behalf of the Ubuntu News Team

20 October, 2014 07:26PM by lyz

Ubuntu developers

Jono Bacon: Happy Birthday Ubuntu!

Today is Ubuntu’s tenth anniversary. Scott did a wonderful job summarizing many of those early years and his own experience, and while I won’t be as articulate as him, I wanted to share a few thoughts on my experience too.

I first heard of this super-secret Debian startup from Scott James Remnant. When I worked at OpenAdvantage we would often grab lunch in Birmingham, and he filled me in on what he was working on, leaving plenty of blanks due to confidentiality.

I was excited about this new mystery distribution. For many years I had been advocating at conferences for a consumer-facing desktop, and felt that Debian and GNOME, complete with the exciting Project Utopia work from Robert Love and David Zeuthen, made sense. This was precisely what this new distro would be shipping.

When Warty was released I installed it and immediately became an Ubuntu user. Sure, it was simple, but the level of integration was a great step forward. More importantly though, what really struck me was how community-focused Ubuntu was. There was open governance, a Code of Conduct, fully transparent mailing lists and IRC channels, and they had the Ocean’s Eleven of rock-star developers involved from Debian, GNOME, and elsewhere.

I knew I wanted to be part of this.

While at GUADEC in Stuttgart I met Mark Shuttleworth and had a short meeting with him. He seemed a pretty cool guy, and I invited him to speak at our very first LugRadio Live in Wolverhampton.

Mark at LugRadio Live.

I am not sure how many multi-millionaires would consider speaking to 250 sweaty geeks in a football stadium sports bar in Wolverhampton, but Mark did it, not once but twice. In fact, one time he took a helicopter to Wolverhampton and landed at the dog racing stadium. We had a debate in the LugRadio team about who had the nicest car to pick him up in. It was not me.

This second LugRadio Live appearance was memorable because two weeks earlier I had emailed Mark to see if he had a spot for me at Canonical. OpenAdvantage was a three-year funded project that was wrapping up, and I was looking at other options.

Mark’s response was:

“Well, we are opening up an Ubuntu Community Manager position, but I am not sure it is for you.”

I asked him if he could send over the job description. When I read it I knew I wanted to do it.

Fast forward four interviews, the last of which was in his kitchen (which didn’t feel awkward at all), and I got the job.

The day I got that job was one of the greatest days of my life. I felt like I had won the lottery: I would be working on a project with mission and meaning, one that could grow my career and skill set.

Canonical team in 2007

The day I got the job was not without worry though.

I was going to be working with people like Colin Watson, Scott James Remnant, Martin Pitt, Matt Zimmerman, Robert Collins, and Ben Collins. How on earth was I going to measure up?

A few months later I flew out to my first Ubuntu Developer Summit in Mountain View, California. Knowing little about California in November, I packed nothing but shorts and t-shirts. Idiot.

I will always remember the day I arrived: going to a bar with Scott and some others, meeting the team, and understanding absolutely nothing of what they were saying. It sounded like gibberish, and I had thought I was a fairly technical guy at that point. Obviously not.

What struck me though was how kind, patient, and friendly everyone was. The delta in technical knowledge was narrowed with kindness and mentoring. I met some of my heroes, and they were just normal people wanting to make an awesome Linux distro, and wanting to help others get in on the ride too.

What followed was an incredible seven and a half years. I travelled to Ubuntu Developer Summits, sprints, and conferences in more than 30 countries, helped create a global community enthused by a passion for openness and collaboration, experimented with different methods of getting people to work together, and met some of the smartest and kindest people walking on this planet.

The awesome Ubuntu community

Ubuntu helped to define my career, but more importantly, it helped to define my perspective and outlook on life. My experience in Ubuntu helped me learn how to think, to manage, and to process and execute ideas. It helped me to be a better version of me, and to fill my world with good people doing great things, all of which inspired my own efforts.

This is the reason why Ubuntu has always been much more than just software to me. It is a philosophy, an ethos, and most importantly, a family. While some of us have moved on from Canonical, and some others have moved on from Ubuntu, one thing we will always share is this remarkable experience and a special connection that makes us Ubuntu people.

20 October, 2014 05:52PM

Blankon developers

Herpiko Dwi Aguno: Don’t Praise Me, Just Hate Me

Many of my friends like to heap on excessive praise without ever drawing a lesson from why they ended up praising in the first place.

Being flustered when praised is genuinely unpleasant and I don’t like it at all, all the more so with the awareness that I am still nothing compared to some of the people I know.

If I know someone well who is, in certain respects, more capable than me, has achieved more than me, is further ahead than me, then I will try to hate him. I hate being beaten. I find it hard to concede. So I make him a rival, in the positive sense of the word. At most I’ll just say, “Whoa, cool!”, and then, “Damn it!”.

If he discusses something I don’t understand and it makes me feel stupid, I will hate him too, and strive to reach the same level of knowledge until I finally understand what was being discussed.

In the end I hate too many people, and my life is like a very long footpath along which I keep running as if chased by a dog.

Woosaaaaah!

 

20 October, 2014 02:38PM

Herpiko Dwi Aguno: Smaug’s best overheating achievement yet

118 degrees Celsius, woohoooooooooo!

I will admit that, on average, computer products from HP (not Compaq, mind you) are fairly rugged. Design, however, is a different matter. My HP Pavilion dm3 is one of HP’s widely acknowledged defective products. Visually, this computer is very attractive and feels like holding a MacBook. All metal. But you know what? That metal case isn’t there for elegance: it is this laptop’s processor heatsink.

Yep, that’s not a typo. The heatsink attached to the processor is fanned by a tiny fan that has no air intake, so it is completely useless. Under this metal case there is plenty of empty air space, a lovely place for hot air to settle, which then transfers its heat to the case, top and bottom. Try typing on 50-degree-Celsius embers. Your lap gets just as hot.

If the rumor is true that resting a laptop on your lap causes infertility because of the heat, then there is no hope left for me. #eh

So yesterday I had no choice but to drill holes in the bottom of the case for extra ventilation, putting defects into this defective product. And the result today is..

Yep, no difference at all.

Hello, anyone selling a used MacBook?

20 October, 2014 02:21PM

Ubuntu developers

David Tomaschik: PSA: Typos in mkfs commands are painful

TL;DR: I apparently typed mkfs.vfat /dev/sda1 at some point. Oops.

So I rarely reboot my machines, and last night, when I rebooted my laptop (for graphics card weirdness) Grub just came up with:

Error: unknown filesystem.
grub rescue>

WTF, I wonder how I borked my grub config? Let's see what happens when we ls my /boot partition.

grub rescue>ls (hd0,msdos1)
unknown filesystem

Hrrm, that's no good. An ls on my other partition isn't going to be very useful, it's a LUKS-encrypted LVM PV. Alright, time for a live system. I grab a Kali live USB (not because Kali is necessarily the best option here, it's just what I happen to have handy) and put it in the system and boot from that. file tells me its an x86 boot sector, which is not at all what I'm expecting from an ext4 boot partition. It slowly dawns on me that at some point, intending to format a flash drive or SD card, I must've run mkfs.vfat /dev/sda1 instead of mkfs.vfat /dev/sdb1. That one letter makes all the difference. Of course, it turns out it's not even a valid FAT filesystem... since the device was mounted, the OS had kept writing to it like an ext4 filesystem, so it was basically a mangled mess. fsck wasn't able to restore it, even pointing to backup superblocks: it seems as though, among other things, the root inode was destroyed.

So, at this point, I basically have a completely useless /boot partition. I have approximately two options: reinstall and reconfigure the entire OS, or try to fix it manually. Since it didn't seem I had much to lose and it would probably be faster to fix manually (if I could), I decided to give door #2 a try.

First step: recreate a valid filesystem. mkfs.ext4 -L boot /dev/sda1 takes care of that, but you better believe I checked the device name about a dozen times. Now I need to get all the partitions and filesystems mounted for a chroot and then get into it:

% mkdir /target
% cryptsetup luksOpen /dev/sda5 sda5_crypt
% vgchange -a y
% mount /dev/mapper/ubuntu-root /target
% mount /dev/sda1 /target/boot
% mount -o bind /proc /target/proc
% mount -o bind /sys /target/sys
% mount -o bind /dev /target/dev
% chroot /target /bin/bash

Now I'm in my system and it's time to replace my missing files, but how to figure out what goes there? I know there are at least files for grub, kernels, initrds. I wonder if dpkg-query can be useful here?

# dpkg-query -S /boot
linux-image-3.13.0-36-generic, linux-image-3.13.0-37-generic, memtest86+, base-files: /boot

Well, there's a handful of packages. Let's reinstall them:

# apt-get install --reinstall linux-image-3.13.0-36-generic linux-image-3.13.0-37-generic memtest86+ base-files

That's gotten our kernel and initrd replace, but no grub files. Those can be copied by grub-install /dev/sda. Just to be on the safe side, let's also make sure our grub config and initrd images are up to date.

# grub-install /dev/sda
# update-grub2
# update-initramfs -k all -u

At this point, I've run out of things to double check, so I decide it's time to find out if this was actually good for anything. Exit the chroot and unmount all the filesystems, then reboot from the hard drive.

...

It worked! Fortunately for me, /boot is such a predictable skeleton that it's relatively easy to rebuild when destroyed. Here's hoping you never find yourself in this situation, but if you do, maybe this will help you get back to normal without a full reinstall.

20 October, 2014 02:19PM

Mark Shuttleworth: V is for Vivid

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

20 October, 2014 01:22PM

Mattia Migliorini: Pinit 1.0: Pinterest for WordPress rewritten

Pinit, Pinterest for WordPress, is a handy plugin that lets you add Pinterest badges to your website quickly and with no effort.

Today I released the first complete version of this plugin, which has been around since October 30, 2013. Although it had only a few widgets and was not very powerful, it has been appreciated by more than 800 people in its first year of life. But now it’s time to change! With this new 1.0 release you can leverage the simplicity, lightness and power of Pinit.

 

Download Pinit

Features

Pinit 1.0, or Pinterest for WordPress, includes only one widget to let you add three different Pinterest badges to your website’s sidebar:

  • Pin Widget
  • Profile Widget
  • Board Widget

Interested in adding badges to your posts and pages too? New in this version are three shortcodes:

  • Pin Shortcode [pit-pin]
  • Profile Shortcode [pit-profile]
  • Board Shortcode [pit-board]

 

Pinit Shortcodes Usage

Here is a little reference for the shortcodes.

 

Pin Shortcode

The Pin Shortcode [pit-pin] lets you add the badge of a single pin to your posts and pages and accepts only one argument:

  • url: the URL to the pin (e.g. http://www.pinterest.com/pin/99360735500167749/)

Example:

[pit-pin url="http://www.pinterest.com/pin/99360735500167749/"]

 

Profile Shortcode

With the Profile Shortcode [pit-profile] you can add a Pinterest profile’s badge to your WordPress site. It accepts up to four arguments:

  • url: the URL to the profile (e.g. http://www.pinterest.com/pinterest/)
  • imgWidth: width of the badge’s images. Must be an integer. Defaults to 92.
  • boxHeight: height of the badge. Must be an integer. Defaults to 175.
  • boxWidth: width of the badge. Defaults to auto.

Example:

[pit-profile url="http://www.pinterest.com/pinterest/" imgWidth="100" boxHeight="300" boxWidth="200"]

 

Board Shortcode

The Board Shortcode [pit-board] lets you add a Board badge to your pages and posts. It accepts the same arguments as the Profile Shortcode:

  • url: the URL to the profile (e.g. http://www.pinterest.com/pinterest/pin-pets/)
  • imgWidth: width of the badge’s images. Must be an integer. Defaults to 92.
  • boxHeight: height of the badge. Must be an integer. Defaults to 175.
  • boxWidth: width of the badge. Defaults to auto.

Example:

[pit-board url="http://www.pinterest.com/pinterest/pin-pets/" imgWidth="100" boxHeight="300" boxWidth="200"]

 

Languages

Pinterest for WordPress is currently available in 3 different languages:

You can submit new translations with a pull request to the GitHub repository or by email to deshack AT ubuntu DOT com.

 

Conclusion

Feel free to submit issues to the GitHub repository or the official support forum. If you like this plugin, you can contribute back to it simply by leaving a review.

The post Pinit 1.0: Pinterest for WordPress rewritten appeared first on deshack.

20 October, 2014 12:23PM

Kubuntu Wire: Forthcoming Kubuntu Interviews

Kubuntu 14.10 is due out this week, bringing a choice of rock-solid Plasma 4 or the tech preview of Kubuntu Plasma 5. The team has a couple of interviews lined up to talk about it.

At 21:00 UTC tomorrow (Tuesday) Valorie will be talking with Jupiter Broadcasting’s Linux Unplugged about what’s new and what’s cool.
Watch it live at 21:00 UTC Tuesday or watch it recorded.

Then on Thursday, fresh from 14.10 being released into the wild, Scarlett and I will be on the AtRandom video podcast starting at 20:30 UTC. Watch it live at 20:30 UTC Thursday or watch it recorded.

And feel free to send in questions to either if there is anything you want to know.

 

20 October, 2014 11:36AM

Ronnie Tucker: Amazon Web Services Aims for More Open Source Involvement

In 2006, Amazon was an e-commerce site building out its own IT infrastructure in order to sell more books. Now, AWS and EC2 are well-known acronyms to system administrators and developers across the globe who look to the public cloud to build and deploy web-scale applications. But how exactly did a bookseller become a major cloud vendor?

Amazon’s web services business was devised in order to cut data center costs, a feat accomplished largely through the use of Linux and open source software, said Chris Schlaeger, director of kernel and operating systems at Amazon Web Services, in his keynote talk at LinuxCon and CloudOpen Europe today in Düsseldorf.

Founder Jeff Bezos “quickly realized that in order to be successful in the online business, he needed a sophisticated IT infrastructure,” Schlaeger said. But that required expensive proprietary infrastructure with enough capacity to handle peak holiday demand, while most of the time the machines sat idle. By building its infrastructure with open source software and charging other sellers to use the idle capacity, Amazon could cover the up-front cost of data center development.

Source:

http://www.linux.com/news/featured-blogs/200-libby-clark/791472-amazon-web-services-aims-for-more-open-source-involvement

Submitted by: Libby Clark

20 October, 2014 07:58AM

Valorie Zimmerman: Start your Season of KDE engines!

Season of KDE (#SoK2014) was delayed a bit, but we're in business now:

http://heenamahour.blogspot.in/2014/10/season-of-kde-2014.html

Please stop by the ideas page if you need an idea. Otherwise, contact a KDE devel you've worked with before, and propose a project idea.

Once you have something, please head over to the Season of KDE website: https://season.kde.org and jump in. You can begin work as soon as you have a mentor sign off on your plan.

Student application deadline: Oct 31 2014, 12:00 am UTC - so spread the word! #SoK2014

Go go go!

20 October, 2014 06:28AM by Valorie Zimmerman (noreply@blogger.com)

TurnKey Linux

Scaling web sites: a brief overview of tools and strategies

Monolithic server architecture

The easiest way to support additional capacity is simply to use a bigger server (the big iron approach), but the disadvantage is that you pay for that capacity even when you're not using it, and you can't change the amount of capacity of a single server without suffering some downtime (in the traditional monolithic server architecture).

Monolithic server architectures can in fact scale (up to a point), but at a prohibitive price. At the extreme end you get mainframe-type big iron machines.

Distributed server architecture

In a nutshell, by using a bunch of open source tools it's possible to build a distributed website whose capacity is added and removed on the fly, on a modular basis.

Load balancer

The keystone of such an arrangement is the load balancer, which can operate either at the layer 4 IP level (e.g., the Linux Virtual Server project) or at the layer 7 HTTP level (e.g., pound, squid, varnish).

An L4 (layer 4) load balancer is more generic and potentially more powerful than an L7 (layer 7) load balancer in that it can scale further (in some configurations) and supports multiple TCP applications (i.e., not just HTTP). On the other hand, an L7 load balancer can benefit from peeking inside the application layer and routing requests based on more flexible criteria.

Note that sometimes you don't need a load balancer to scale. In principle, if your application is light enough not to create CPU bottlenecks (e.g., mostly cached content served anonymously) or IO bottlenecks (e.g., your dataset is small enough to fit in memory), you'll be able to max out the network connection with a single machine and a load balancer won't provide much benefit.

The general idea behind using an L7 load balancer / reverse proxy program such as pound is that it allows you to offload CPU- or IO-intensive tasks to configurable back-ends.

Pound is a simple dedicated L7 load balancer. It has no caching capabilities (unlike varnish). Pound detects back-ends that have stopped functioning and stops routing requests to them. This lets you gracefully scale down capacity: you can remove back-ends and the system continues to run with no interruption.
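
To make that concrete, here is a minimal sketch of what a pound configuration with two back-ends might look like. This is illustrative only (the addresses and ports are placeholders); check the pound man page for real deployment details:

```
# Minimal pound sketch -- illustrative addresses only.
ListenHTTP
    Address 0.0.0.0
    Port    80
End

Service
    BackEnd             # back-end web server 1
        Address 10.0.0.11
        Port    80
    End
    BackEnd             # back-end web server 2
        Address 10.0.0.12
        Port    80
    End
End
```

Pound health-checks these back-ends and drops a dead one from the rotation, which is the graceful scale-down behaviour described above.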

Since an L7 load balancer still routes all of the network requests, you can scale this configuration as far as your network connection will allow (e.g., around 20 servers), but if you continue to grow you may eventually max out the connection (which supposedly runs at 250MB/s on EC2).

LVS offers a few configurations (IP tunneling and direct routing) that get around this limitation: the LVS director routes incoming requests but lets the back-end nodes respond directly to the originator of the request. In this case the load balancer could be limited to 100MB/s while the output of the cluster goes beyond 1GB/s.

Of course the load balancer itself can become a central point of failure, and in really high-availability scenarios where that matters you can set up the "heartbeat" system to implement automatic failover. I'm not sure heartbeat would work in EC2 (it assumes a local LAN), but since EC2 supports reassignment of an elastic IP to a given machine, I bet a similar arrangement could be made to work.

Example architecture (e.g., large zope site):

pound -> bunch of varnish caches -> real web servers with zope -> mysql cluster (LVS offers instructions on how to build one)

Varnish caches increase the effective capacity of the web servers they use as back-ends because they can serve static content and cached dynamic content much more quickly than a typical Apache (assuming you hit the cache, of course).

Anyhow, the end result is a sort of virtual server sitting behind one IP address that can scale from one machine (e.g., an L7 load balancer with the web site and MySQL database all integrated) to a cluster (behind that same IP address) that maxes out the network connection. How many machines the cluster can have depends on the application's bottlenecks (i.e., CPU or IO) and economics (i.e., the price/performance of a few big machines vs. a larger number of small machines).

Scaling further: DNS

When a single cluster isn't enough (e.g., you've maxed out the full network capacity) you can use DNS scaling tricks (e.g., round-robin across multiple A records) to spread your load between multiple clusters.
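
In zone-file terms, round-robin is nothing more than several A records for the same name with a short TTL (the name and addresses below are illustrative):

```
; each cluster's load balancer gets one A record;
; resolvers rotate through them, spreading the load
www   300   IN   A   192.0.2.10
www   300   IN   A   192.0.2.11
www   300   IN   A   192.0.2.12
```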

For extra performance you can point users to a geographically close cluster using the geolocation features of DNS servers such as PowerDNS (which is what Wikipedia uses).

BTW, PowerDNS is an open source DNS server that maintains compatibility with BIND resource zone formats while offering a slew of new features such as geolocation.

You'll want to keep DNS TTLs low so that you can update the records quickly, but at least some DNS clients misbehave and ignore your TTL, which limits your flexibility:

  1. old clients with cached entries won't see the new load balancing cluster

  2. when you remove a load balancing cluster you have to take into account worst-case expiration times for the cluster's DNS records before you can take it offline without letting clients suffer performance degradation.

    Alternatively you can update the records and take the machine offline once your monitoring indicates it's no longer being used at any significant level.

In conclusion: after a certain point you're going to have to use DNS-based load scaling techniques.

I confirmed this by looking at a few examples of high-scale websites (Google, Yahoo and Wikipedia).

Google in particular seems to like using DNS to scale. I looked up www.google.com from 4 different servers around the world and received a different set of IP addresses every time, all of which seemed to be pretty close to the origin of the query (i.e., in terms of ping times). Google's DNS servers themselves are a consistent set of IPs, though; your location doesn't seem to matter.

Yahoo also serves you a different IP depending on where you are, and they use Akamai's DNS services (probably similar to PowerDNS geolocation) to do it.

20 October, 2014 05:05AM by Liraz Siri

Parsix developers

Iceweasel (Firefox) 33.0 is now available for Nestor (7.0) and Trev (6.0). Updat...

Iceweasel (Firefox) 33.0 is now available for Nestor (7.0) and Trev (6.0). Update your systems to install it.

20 October, 2014 01:06AM by Parsix GNU/Linux

October 19, 2014

New security updates are available for Parsix GNU/Linux 6.0 (Trev) and 7.0 (Nest...

New security updates are available for Parsix GNU/Linux 6.0 (Trev) and 7.0 (Nestor). Please see http://www.parsix.org/wiki/Security for details.

19 October, 2014 10:46PM by Parsix GNU/Linux

rescatux

Rescatux 0.32 beta 2 released

Rescatux 0.32 beta 2 has been released.

Rescatux 0.32 Beta 1 Extended menu

Downloads:

Rescatux 0.32b2 size is about 444 Megabytes.

Rescatux 0.32b2 updated options: Restore Windows MBR, Blank Windows password, Promote Windows user to Admin and Unlock Windows user.

Some thoughts:

I had a hard time getting the winunlock command (for unlocking Windows passwords from the command line) to work, because the original winpasswd command was not working correctly! I have sent an email to chntpw upstream so that it can be fixed. Hopefully there will be a new upstream release and we can enjoy both fixes in Debian soon. Here is my fork: chntpw-ng.

As you might imagine, the biggest improvement in this release is that resetting a Windows password, promoting a Windows user to Administrator and unlocking a Windows user now use the latest version of chntpw, which makes adding users to the admin group easier and safer. It also fixes a bug that prevented a promoted admin user from being demoted from Windows.

The other big improvement is that lilo is used instead of syslinux, so that you can finally solve the

grub rescue>

( grub rescue> ) problem that appears when you remove the GNU/Linux partition from Windows itself. Unfortunately this only works if the Windows boot partition is on the first hard disk. This is the Restore Windows MBR option, which is going to remain BETA until enough of you report that it works correctly. The difference is that the old version broke a working Windows 7 (and probably other versions') boot when used; that is now fixed.

There is no Super Grub2 Disk available from the boot menu but, as you can see from the pending bugs, there will be one.

Finally, you can boot Rescatux from Super Grub2 Disk thanks to its loopback.cfg file, which I hope will be accepted upstream in Debian Live soon, although they seem to be busy with the Jessie freeze.

In the development arena I have removed old scripts and added new build folders, so that everything is easier to understand when developing Rescatux.

This release is much needed so that we can all test the new 140201 chntpw version before I release the stable version, probably in less than three months. Please report any bugs you find. Contrary to other versions, I encourage you to download this one so that we can debug it.

I almost forgot: we have a new background for you to enjoy!

This is the first time that I have recycled the older changelog, so that you can see the full changes as a whole (well, actually only since Rescatux 0.32b1).

There have been other improvements in this release too, so I encourage you to click on the Rescatux 0.32-freeze roadmap link to get more detailed information about them.

Roadmap for Rescatux 0.32 stable release:

You can check the complete changelog with link to each one of the issues at: Rescatux 0.32-freeze roadmap.

  • [#1323]    GPT support
  • [#1364]    Review Copyright notice
  • (Fixed in: 0.32b2) [#2188]    install-mbr : Windows 7 seems not to be fixed with it
  • (Fixed in: 0.32b2) [#2190]    debian-live. Include cpu detection and loopback cfg patches
  • [#2191]    Change Keyboard layout
  • [#2192]    UEFI boot support
  • (Fixed in: 0.32b2) [#2193]    bootinfoscript: Use it as a package
  • (Fixed in: 0.32b2) [#2199]    Btrfs support
  • [#2205]    Handle different default sh script
  • [#2216]    Verify separated /usr support
  • (Fixed in: 0.32b2) [#2217]    chown root root on sudoers
  • [#2220]    Make sure all the source code is available
  • (Fixed in: 0.32b2) [#2221]    Detect SAM file algorithm fails with directories which have spaces on them
  • (Fixed in: 0.32b2) [#2227]    Use chntpw 1.0-1 from Jessie
  • [#2231]    SElinux support on chroot options
  • [#2233]    Disable USB automount
  • [#2236]    chntpw based options need to be rewritten for reusing code
  • [#2239]    Update doc: Put Rescatux into a media for Isolinux based cd
  • (Fixed in: 0.32b2) [#2259]    Update bootinfoscript to the latest GIT version
  • [#2264]    chntpw – Save prior registry files
  • [#2234]    New option: Easy Grub fix
  • [#2235]    New option: Easy Windows Admin

Other fixed bugs (0.32b2):

  • Rescatux logo is not shown at boot
  • Boot entries are named “Live xxxx” instead of “Rescatux xxxx”

Fixed bugs (0.32b1):

  • Networking detection improved (fallback to network-manager-gnome)
  • Bottom bar does not have a shortcut to a file manager, as is common practice in modern desktops. Fixed when falling back to LXDE.
  • Double-clicking on directories on the desktop opens Iceweasel (a Firefox fork) instead of a file manager. Fixed when falling back to LXDE.

Improvements (0.32b1):

  • Super Grub2 Disk is no longer included. That makes it easier to put the ISO onto USB devices thanks to standard multiboot tools which support Debian Live CDs.
  • Rescapp UI has been redesigned
    • Every option is at hand on the first screen.
    • Rescapp options can be scrolled. That makes it easier to add new options without worrying about the final design.
    • Run option screen buttons have been rearranged to make them easier to read.
  • RazorQT has been replaced by LXDE, which seems more mature. LXQT will have to wait.
  • WICD has been replaced by network-manager-gnome. That makes it easier to connect to wired and wireless networks.
  • It is no longer based on the Debian Unstable (sid) branch.

Distro facts:

Feedback welcome:
I have tried the distro myself in my dev environment: the new options (not the old ones) seem to start OK. A different matter is a full test of their complete functionality. Please test the ISO and report back if something that worked in previous stable versions no longer works in this beta.

Don’t forget that you can use:

Help Rescatux project if you cannot wait:

I think we can expect three months maximum until the new stable Rescatux is ready, probably half that, because I have been fixing bugs very quickly lately. These are some of the fun tasks that anyone can easily contribute to:

  • Making a youtube video for the new options.
  • Make sure documentation for the new options is right.
  • Translate the documentation of new options to Spanish.
  • Take screenshots for the new options' documentation so that it doesn't lack images.

If you want to help please contact us here:

Thank you and happy download!

19 October, 2014 06:49PM by adrian15

hackergotchi for Ubuntu developers

Ubuntu developers

Ronnie Tucker: Pushbullet + FCM = WIN!

Pushbullet

If you’d like to know the very second FCM is out, on all of your devices, then install Pushbullet and subscribe to the Full Circle Magazine channel: https://www.pushbullet.com/channel?tag=fcm

I’m not sure if I can push a 15MB PDF through Pushbullet, but I’ll give it a first try when FCM#90 is out (31st).

There’s also a Pushbullet subscribe button on the site.

19 October, 2014 05:45PM

Randall Ross: Ubuntu Contributors' Guide

I spent a few minutes this morning writing the comprehensive Ubuntu Contributors' Guide.

Here it is in all its glory:

Yes, that's really all there is to it. It's simple.

As obvious as this seems, there are people (names withheld) that will want you to believe otherwise. I'll elaborate in a future post.

When you encounter them, please forward a copy of this flow chart. Tell them Randall sent you.

19 October, 2014 04:38PM

Ronnie Tucker: VirtualBox 4.3.18 Has Been Released With Lots Of Fixes

VirtualBox 4.3.18 has been released, bringing many fixes for major operating systems such as Ubuntu Linux, Windows and Mac OS X. The potential misbehavior after restoring the A20 state from a saved state has been fixed, VirtualBox no longer crashes on Linux hosts running old versions of the Linux kernel, a few remaining warnings in the kernel log when memory allocation fails have been fixed, and GNOME Shell on Fedora 21 is no longer prevented from starting when handling video driver display properties.

Thanks to this maintenance release, Ubuntu users can now use legacy full-screen mode under Unity without experiencing multi-screen issues. Another important Unity-related issue fixed in 4.3.18 is the quirk in full-screen mode Unity panels caused by the mini-toolbar code changes in the last release.

Source:

http://www.unixmen.com/virtualbox-4-3-18-released-lots-fixes/

Submitted by: Oltjano Terpollari

19 October, 2014 06:57AM

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: Another Round of Community Data Science Workshops in Seattle

Pictures from the CDSW sessions in Spring 2014

I am helping coordinate three and a half day-long workshops in November for anyone interested in learning how to use programming and data science tools to ask and answer questions about online communities like Wikipedia, free and open source software, Twitter, civic media, etc. This will be a new and improved version of the workshops run successfully earlier this year.

The workshops are for people with no previous programming experience and will be free of charge and open to anyone.

Our goal is that, after the three workshops, participants will be able to use data to produce numbers, hypothesis tests, tables, and graphical visualizations to answer questions like:

  • Are new contributors to an article in Wikipedia sticking around longer or contributing more than people who joined last year?
  • Who are the most active or influential users of a particular Twitter hashtag?
  • Are people who participated in a Wikipedia outreach event staying involved? How do they compare to people that joined the project outside of the event?

If you are interested in participating, fill out our registration form here before October 30th. We were heavily oversubscribed last time so registering may help.

If you already know how to program in Python, it would be really awesome if you would volunteer as a mentor! Being a mentor will involve working with participants and talking them through the challenges they encounter in programming. No special preparation is required. If you’re interested, send me an email.

19 October, 2014 01:19AM

October 18, 2014

hackergotchi for Maemo developers

Maemo developers

2014-10-14 Meeting Minutes

Meeting held 2014-10-14 on FreeNode, channel #maemo-meeting (logs)

Attending: Gido Griese (Win7Mac), Paul Healey (sixwheeledbeast),
Jussi Ohenoja (juiceme), Philippe Coval (RzR), Peter Leinchen (peterleinchen)

Partial: (xes), Ruediger Schiller (chem|st)

Absent: Niel Nielsen (nieldk), Joerg Reisenweber (DocScrutinizer05)

Summary of topics (ordered by discussion):

  • Swear filter on TMO (smartwatch)
  • DocScrutinizer/joerg_rw stepped down from Council!
  • Current Council members
  • Transition to Maemo e.V., referendum
  • Code of Conduct
  • Karma calculation

Topic (Swear filter on TMO (smartwatch)):

  • A lengthy discussion about the forum swear filter and how it might be improved,
    at least enough to allow the upcoming topic 'smartwatch'.
  • Chemist stepped in and explained the difficulties and possibilities of this add-on module.
  • In the end, the module is now set to match only full words and will allow the word 'smartwatch'.

Topic (DocScrutinizer/joerg_rw stepped down from Council!):

  • Joerg kept quiet and did not see the need to announce his resignation to the community on an official channel.
  • The council decided unanimously to do so for him on the community mailing list.

Topic (Current Council members):

  • After the resignation of Niel and Joerg, the current council now consists of three people:
    Jussi Ohenoja (juiceme),
    Philippe Coval (RzR),
    Peter Leinchen (peterleinchen).

Topics (Referendum, Karma, Code of Conduct):

  • Short discussion about upcoming referendum, the karma system and Code of Conduct.
  • These topics were shifted to be discussed in next week's meeting.

Action Items:
  • -- old items:
    • Check if karma calculation/evaluation is fixed. - Karma calculation should work, only wiki entries (according to Doc) not considered. To be cross-checked ...
    • NielDK to prepare a draft for letter to Jolla. - Obsolete
    • Sixwheeledbeast to clarify the CSS issue on wiki.maemo.org with techstaff. - Done
    • juiceme to create a wording draft for the referendum (to be counterchecked by council members). - See
    • Everybody to make up their own minds about referendum and give feedback.
  • -- new items:
    • Peterleinchen to announce resignation of DocScrutinizer*/joerg_rw from council.
    • Next week's tasks: referendum, karma check, voting for Code of Conduct, sub pages on m.o for e.V.

18 October, 2014 06:35PM by Peter Leinchen (peterleinchen@t-online.de)

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: Folder Color and the power of the community

As a developer, every once in a while something as special as yesterday happens to me...

A Folder Color user sent me an email asking for the icons to follow the theme, more particularly the Numix icon set.

Something I initially thought was not technically feasible (or at least not without manually remapping a great many icons) was solved thanks to the community. The user pointed me to his question upstream, and there the invaluable help of Joshua Fogg from Numix let me learn how themes work in Ubuntu; after a few hours of development and testing, voilà! A new version, more functional and prettier than ever :D Thank you, folks!

And so, in this Linux world: project × project = project³
Yes, cubed ;) that was no mistake.

18 October, 2014 02:12PM by Marcos Costales (noreply@blogger.com)

hackergotchi for Whonix

Whonix

Whonix 9.3 Maintenance Release

Download:
https://www.whonix.org/wiki/Download

Upgrading:
Existing users can upgrade the usual way using apt-get, see also: https://www.whonix.org/wiki/Security_Guide#Updates

Changelog between 9 and 9.3:
anon-gw-anonymizer-config: Fixed startup of Tor due to an AppArmor conflict, as per bug reports in the forums https://www.whonix.org/forum/index.php/topic,559.0.html. Needed to comment out “/usr/bin/obfsproxy rix,” in file “/etc/apparmor.d/local/system_tor.anondist” because The Tor Project added “/usr/bin/obfsproxy PUx,” to file “/etc/apparmor.d/abstractions/tor”. Therefore users of obfsproxy will now end up running obfsproxy unconfined, because we would now require a standalone obfsproxy AppArmor profile. Note that this is not a Whonix-specific issue; even on plain Debian, no one redistributes an obfsproxy AppArmor profile at the time of writing.
– updated frozen sources (contains apt-get and bash security fixes)
– updated frozen sources (contains bash shellshock #2 fixes)
– anon-ws-disable-stacked-tor: Tor Browser 4.x compatibility fix
– tb-starter: Tor Browser 4.x compatibility fix

Update:
Removed “testers-wanted” from title. Blessed stable.

The post Whonix 9.3 Maintenance Release appeared first on Whonix.

18 October, 2014 01:30PM by Patrick Schleizer

hackergotchi for Ubuntu developers

Ubuntu developers

Lydia Pintscher: One thing that would make KDE better


I went to Akademy with two notebooks and a plan. They should both be filled by KDE contributors with writing and sketching about one thing they think would make KDE better. Have a look at the result:

The complete set is in this Flickr album. Check it out! What’s your favorite? What’s your one thing – big or small – that would make KDE better?

(Thanks to Fabrice for the idea.)

18 October, 2014 12:14PM

Rhonda D'Vine: Trans Gender Moves

Yesterday I managed to get the last ticket from the waiting list for the premiere of Trans Gender Moves. It is a play about the lives of three people: a transman, a transwoman and an intersexual person. They tell stories from their lives and their process of finding their own identity over time. With anecdotes that are in parts amusing and in parts thought-provoking, I can only wholeheartedly encourage you to watch it if you have the chance. It will still be shown over the next few days, potentially extended depending on the demand for tickets, from what I've been told by one of the actors.

The funniest moment for me, though, was talking with one of the actors and learning that one of them will be moving into the same building I will be moving into in two years' time; that really touched me. Unfortunately that will be delayed a bit because they found, I think, field hamsters or the like in the ground and have to wait until spring for them to move. :/

/personal | permanent link | Comments: 4 | Flattr this

18 October, 2014 10:14AM

Costales: Folder Color is themable now

Folder Color has a new improvement: It's themable now! :)

If your custom theme has the "folder-color" icons (read how to create those icons), you'll see them! For example, this is a screenshot with the awesome Numix icons (still a WIP):


Numix icon set

You can watch it in action in this video.


How to install: Here.

I want to thank Joshua Fogg from the Numix Project for his help & knowledge!! Really, thank you ;)

Enjoy it! :)

18 October, 2014 06:20AM by Marcos Costales (noreply@blogger.com)

Ronnie Tucker: KDE Plasma 5 Now Available for Ubuntu 14.10 (Utopic Unicorn)

The new KDE Plasma and KDE Frameworks packages are now out of Beta and users can test them in various systems, including Ubuntu. In fact, installing the latest KDE is quite easy now because there is a PPA available.

A lot of users are anxious to use the latest Plasma desktop because it’s quite different from the old one. We can call it “the old one” even if the latest branch, 4.14.x, is still maintained until November.

The KDE developers split the project into three major components: Plasma, Frameworks, and Applications. Plasma is actually the desktop and everything that goes with it, Frameworks is made up of all the libraries and other components, and Applications gathers all the regular apps that are usually KDE-specific.

Source:

http://news.softpedia.com/news/KDE-Plasma-5-Now-Available-for-Ubuntu-14-10-Utopic-Unicorn–462042.shtml

Submitted by: Silviu Stahie

18 October, 2014 04:56AM

October 17, 2014

Sam Hewitt: Turkey Soup with Fluffy Dumplings

It was Turkey Day (more commonly called Thanksgiving) this weekend past in Canada which always means there's an abundance of food and leftovers. As such, I feel there's no better use of your turkey carcass and extra meat than making turkey soup.

Part 1. The Soup

    Ingredients

  • 1 leftover turkey carcass (the body, with most of the meat removed), plus any leftover limbs of the bird, if still available
  • 1 onion, cut into large chunks
  • 2 cups water
  • 2 cups chicken stock
  • 1 kg of cooked turkey meat (or whatever you have left), any skin removed & shredded
  • 2 large carrots, cut into even chunks
  • 1 clove garlic, minced
  • 1/2 teaspoon dried marjoram
  • 1/2 teaspoon dried thyme
  • salt & pepper
  • dumplings, recipe follows.

    Directions

  1. Put the turkey corpse & chopped onion into a pot and cover with stock and water. Bring to boil, then reduce heat and simmer for at least an hour (up to a few hours).
  2. Drain the resulting broth into a large bowl through a large colander to remove the bones & such.
  3. Pour the broth back into the pot through a mesh strainer, to remove the smaller bits from it.
  4. Add the chopped carrot & garlic along with the dried thyme & marjoram and season with salt & pepper, to your taste.
  5. Bring soup to a boil, then reduce heat and simmer until the carrots are soft (which may be up to an hour).
  6. Finish soup with dumplings before serving.

Part 2. Fluffy Dumplings

    Ingredients

  • 1 cup all purpose flour
  • 2 teaspoons baking powder
  • 1/2 teaspoon salt
  • 1/2 cup milk
  • 2 tablespoons olive oil or other vegetable oil
  • 3 tablespoons finely chopped green onion and/or parsley (optional)

    Directions

  1. Combine the dry ingredients in a large bowl plus the chopped herbs, if using.
  2. Add the milk & oil and bring it all together into a sticky mass.
  3. Dump out the dough onto a well-floured surface and knead for a few minutes.
  4. Divide the dough in half and roll into long ~1 inch diameter "logs".
  5. Cut the dough logs into evenly-sized dumplings.
  6. To eat, add to a pot of hot, simmering broth or soup and let cook for at least 15 minutes.

I favour dumplings as the starch element in a soup like this, but you are free to leave them out and use rice, noodles or even chunks of potato.

17 October, 2014 06:00PM

Martin Pitt: Ramblings from LinuxCon/Plumbers 2014

I’m on my way home from Düsseldorf, where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was; there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where these weren’t mentioned at all, and (what felt like) half of the presentations were either how to improve these, or how to use these technologies to solve problems. For example, some people/companies really take LXC to the max and try to do everything in them including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. For example there was an interesting talk how to secure networking for containers, and pretty much everyone uses docker or LXC now to deploy workloads, run CI tests. There are projects like “fleet” which manage systemd jobs across an entire cluster of containers (distributed task scheduler) or like project-builder.org which auto-build packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images” which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is far too limited and static yet, it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

On Plumbers my main occupations were mostly the highly interesting LXC track to see what’s coming in the container world, and the systemd hackfest. On the latter I was again mostly listening (after all, I’m still learning most of the internals there..) and was able to work on some cleanups and improvements like getting rid of some of Debian’s patches and properly run the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review/include/clean up some much requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!

17 October, 2014 04:54PM

hackergotchi for Xanadu developers

Xanadu developers

Installing BOINC on Debian and derivatives

BOINC (Berkeley Open Infrastructure for Network Computing) is a distributed-computing infrastructure, originally developed for the SETI@home project but nowadays used in diverse fields such as physics, nuclear medicine, climatology, etc. The aim of the project is to harness enormous computing capacity from personal computers around the world. The projects this software works on share one common trait: they require a great deal of computing power.

The platform runs on several operating systems, including Microsoft Windows and various Unix-like systems such as Mac OS X, Linux and FreeBSD. BOINC is free software, available under the GNU LGPL.

To install it on our system, we just have to follow these simple steps:

  • Install libxss1:
apt -y install libxss1
  • Download the package from here.
  • Run the file we downloaded; it will create a folder called BOINC containing everything needed to run it.
  • From a terminal, go to the BOINC folder and run run_manager.
  • Now click on “Add project” and, in the window that opens, select “add project” and click next.
  • A list of projects we can take part in will appear; select one and click next.
  • It will then ask for a username and password; if we do NOT have one, we can create it right there.
  • On pressing finish, a web page will open where we can fill in some additional details for the account we just created.
  • If we are interested in joining a team we can look for one, or select “i’m not interested”.
  • After this we return to the main screen and can watch the download for the chosen project begin.

If, when adding a project, none appear in the list, we will need to close the program and make a small modification:

  • Open a text editor and paste the following content:
<cc_config>
  <options>
    <http_1_0>1</http_1_0>
  </options>
</cc_config>
  • Save it in the BOINC folder as cc_config.xml, and that's it; we can now see the list of projects.
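For the terminal-inclined, the same fix can be scripted. This is just a sketch; it assumes the BOINC folder created by the installer lives in your home directory (adjust BOINC_DIR if yours differs):

```shell
# Write the cc_config.xml workaround from the command line.
# Assumption: the BOINC folder is ~/BOINC (adjust if yours differs).
BOINC_DIR="$HOME/BOINC"
mkdir -p "$BOINC_DIR"
cat > "$BOINC_DIR/cc_config.xml" <<'EOF'
<cc_config>
  <options>
    <http_1_0>1</http_1_0>
  </options>
</cc_config>
EOF
echo "Wrote $BOINC_DIR/cc_config.xml"
```

Restart the BOINC manager afterwards so it re-reads the file.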

References:


Tagged: boinc, colaborar

17 October, 2014 04:01PM by sinfallas

How to protect Firefox / Iceweasel against the POODLE attack

POODLE (Padding Oracle On Downgraded Legacy Encryption) is a security flaw that can be used to intercept data that should be encrypted between client and server. What the exploit does is convince the client that the server does not support the TLS protocol, forcing it to connect over SSL 3.0.

In this situation, an attacker using a man-in-the-middle attack can decrypt secure HTTP cookies and obtain information.

While we wait for version 34 of Firefox / Iceweasel (planned for November 25th), which will disable SSLv3 by default, we can use this simple method to protect our current version of the browser and avoid unpleasant surprises while browsing.

All it takes is installing the SSL Version Control add-on developed by Mozilla. This add-on also works with other Mozilla products, for example: Firefox for Android, Thunderbird and Seamonkey.

It is also advisable to enable automatic updates under Preferences > Advanced > Update. That way we make sure we always have the latest version of the browser, which includes improvements and bug fixes.
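If you would rather not install an add-on, the same protection can be applied by hand. The preference name below is my own assumption based on Firefox's TLS settings, not something from this post: setting the minimum TLS version to 1.0 keeps the browser from falling back to SSLv3.

```shell
# Sketch: append the pref to a Firefox/Iceweasel profile's user.js.
# The profile path is a placeholder -- substitute your real profile directory.
PROFILE="$HOME/.mozilla/firefox/example.default"
mkdir -p "$PROFILE"
echo 'user_pref("security.tls.version.min", 1);' >> "$PROFILE/user.js"
```

The change takes effect the next time the browser starts.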

References:


Tagged: poodle, vulnerabilidad

17 October, 2014 02:08PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Harald Sitter: Plasma 5 Weekly ISO Revisited

I am proud to announce that Plasma 5 weekly ISOs have returned today.

http://files.kde.org/snapshots/unstable-i386-latest.iso.mirrorlist

Grab today’s ISO while it is hot. And don’t forget to report the bugs you might notice.

Plasma 5 weekly ISOs bring you the latest and greatest Plasma right from the tip of development.

As some of you might have noticed, the previous Plasma 5 weekly ISOs stopped updating a while ago. This was because we at Blue Systems were migrating to a new system for distribution-level integration. More on this to follow soon. Until then you’ll have to believe me that it is 300% more awesome :)

17 October, 2014 01:38PM

Lucas Nussbaum: Debian Package of the Day revival (quite)

TL;DR: a static version of http://debaday.debian.net/, as it was when it was shut down in 2009, is available again!

A long time ago, between 2006 and 2009, there was a blog called Debian Package of the Day. About once per week, it featured an article about one of the gems available in the Debian archive: one of those many great packages that you had never heard about.

At some point in November 2009, after 181 articles, the blog was hacked and never brought up again. Last week I retrieved the old database, generated a static version, and put it online with the help of DSA. It is now available again at http://debaday.debian.net/. Some of the articles are clearly outdated, but many of them are about packages that are still available in Debian, and still very relevant today.

17 October, 2014 01:05PM

Rhonda D'Vine: New Irssi

After a long time, a new irssi upstream release hit the archive. While the most notable change in 0.8.16 was DNSSEC DANE support, which is enabled (on Linux; src:dnsval has issues compiling on kFreeBSD), the most visible change in 0.8.17 was the addition of support for both 256 colors and truecolor. While the former can be used directly, for the latter you have to explicitly switch the setting colors_ansi_24bit to on. A terminal that supports it is needed, though. To test the 256 color support, your terminal has to support it, your TERM environment variable has to be properly set, and you can test it with the newly added /cubes alias. If you have an existing configuration, look at the Testing new Irssi wiki page, which helps you get that alias, amongst other useful tips.
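Before flipping any irssi settings, it can help to check what your terminal actually advertises. This quick check via terminfo is my own suggestion, not something from the release notes:

```shell
# Ask terminfo how many colors the current TERM claims to support.
colors=$(tput colors 2>/dev/null || echo 8)
if [ "$colors" -ge 256 ]; then
  echo "256-color capable"
else
  echo "only $colors colors; try TERM=xterm-256color"
fi
```

If this reports fewer than 256 colors, fix TERM (or your terminal emulator) before blaming irssi.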

The package currently only lives in unstable, but once it flows over to testing I will update it in wheezy-backports, too.

Enjoy!

/debian | permanent link | Comments: 0 | Flattr this

17 October, 2014 12:39PM

Jussi Kekkonen: Notes about Dell XPS 13 developer edition and Kubuntu

Got a new tool, a Dell XPS 13 developer edition, running Ubuntu 12.04. Here are some experiences using it, and also a note to my future self on what needed to be done to make everything work.

After making a restore disc from the pre-installed Ubuntu using the tool Dell provided, I proceeded to clean-install Kubuntu 14.04. I have to say that for its size and price this piece of hardware is rather amazing; the only nitpick would be the RAM being capped at 8 GiB. Having a modern Linux distribution running smoothly in any circumstances is simply a nice experience. I haven't hit the limitations of the integrated Intel GPU yet either, which is surprising; or maybe that just tells you how I use these things. (:

The touch screen is maybe the most interesting bit of this laptop. Unfortunately I have to say its use is limited by UIs not working well with touch interaction in many cases. Maybe if I chose apps differently I would get a better experience. At least some websites work just fine in the Chromium browser.

Note on hardware support

Everything else works like a charm out of the box in Kubuntu 14.04, except cooling. After some searching I found out that some Dell laptops need separate tools for managing the cooling. I figured out the following:

I needed to install i8kutils, which can be found in Ubuntu repositories.

Then I put the following contents into /etc/i8kmon.conf:

# Run as daemon, override with --daemon option
set config(daemon)      0

# Automatic fan control, override with --auto option
set config(auto)        1

# Report status on stdout, override with --verbose option
set config(verbose) 1

# Status check timeout (seconds), override with --timeout option
set config(timeout) 12

# Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
set config(0)   {{-1 0}  -1  48  -1  48}
set config(1)   {{-1 1}  45  60  45  60}
set config(2)   {{-1 2}  50  128  50  128}

# end of file

Note that some options are overridden in the init script; for example, it sets i8kmon to daemon mode. The timeout of 12 seconds is there because I noticed that every time the fan speed is set, the speed begins to fall after ~10 seconds, so that at the half-minute mark you clearly notice the accumulated change in fan speed. 12 seconds is just a compromise I found works well for me; YMMV, etc.

Also, to have i8kmon control cooling without human interaction, I needed to enable it in /etc/default/i8kmon:

ENABLED=1

That’s it for now, I might end up updating the post if something new comes up regarding hardware support.

17 October, 2014 08:14AM

Ronnie Tucker: Canonical Details Plans for Unity 8 Integration in Ubuntu Desktop

Ubuntu users now know for certain when Unity 8 will officially arrive on the desktop flavor of the distribution.

The Ubuntu desktop flavor hasn’t been the developers’ focus for some time now, but that is going to change very soon. The new Desktop Team Manager at Canonical, Will Cooke, has talked about the future of the Unity desktop and laid out the plans for the next few Ubuntu versions.

Users might have noticed that Ubuntu developers have been putting much of their efforts into the mobile version of their operating system and the desktop has received less attention than usual. They had to focus on that version because most of the things that are changed and improved for Ubuntu Touch will eventually land on the desktop as well.

Not all users know that the desktop environment that is now on Ubuntu Touch will also power the desktop version in the future, and that future is not very far ahead. In fact, it’s a lot closer than users imagine.

Source:

http://news.softpedia.com/news/Canonical-Details-Plans-for-Unity-8-Integration-in-Ubuntu-Desktop-462117.shtml

Submitted by: Silviu Stahie

17 October, 2014 05:55AM

Joe Liau: Documenting the Death of the Dumb Telephone – Part 2: Balderdash

Sometimes we need text so that we can document history, such as the death of our beloved smart phones. But our phones are not smart; smart things do not fill themselves with nonsense. For some reason, the number of chatting, texting, mailing, and talking channels is constantly increasing, which also increases the amount of “garbage information” entering our brains. Sometimes there is so much that I have to cut myself off from the channels. Maybe my phone shouldn’t have a text function at all! It needs to be saved.

In a future post, I will discuss how we might mitigate this by adjusting our habits, but considering that all of these messages contain text, my smart phone should be able to consolidate, cross-reference, reply in-line, or find a way to reduce the number of channels and the number of taps required to explain something.

A smart phone does not walk itself into traffic because it needs to reply to so many messages. Poor phones.


17 October, 2014 03:57AM

Eric Hammond: Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

The aws-cli software is being actively developed as an open source project on Github, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws/config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws/config
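The two configuration steps above can be combined into one small, idempotent script. This is just a convenience sketch; the key values below are placeholders, not real credentials, and you must substitute your own:

```shell
# Create $HOME/.aws/config with placeholder credentials, but only if it
# does not already exist, then lock down its permissions.
CONFIG="$HOME/.aws/config"
if [ ! -e "$CONFIG" ]; then
  mkdir -p "$HOME/.aws"
  cat > "$CONFIG" <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
EOF
  chmod 600 "$CONFIG"
fi
```

The `if` guard ensures an existing config file is never overwritten.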

Optionally set an environment variable pointing to the config file, especially if you put it in a non-standard location. For future convenience, also add this line to your $HOME/.bashrc:

export AWS_CONFIG_FILE=$HOME/.aws/config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming it so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

[Update 2014-10-16: Use new .aws/config filename standard.]

Original article: http://alestic.com/2013/08/awscli

17 October, 2014 01:54AM

October 16, 2014

Ubuntu Podcast from the UK LoCo: S07E29 – The One with the Baby on the Bus

Join Laura Cowen, Tony Whitmore and Alan Pope in Studio L for Season Seven, Episode Twenty-Nine of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be talking about diversity at events like OggCamp and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

16 October, 2014 07:30PM

hackergotchi for Xanadu developers

Xanadu developers

Checking whether your system is vulnerable to Shellshock

Shellshock is the name of a security flaw that is more than 20 years old, but which was only disclosed in September 2014.

This vulnerability affects the Bourne-Again Shell (Bash), a software component that interprets commands on Unix systems, the foundation of Linux and of Apple’s Mac OS. The danger of this flaw lies in the fact that an attacker could remotely control any computer or system that uses Bash, such as servers running Linux or devices running Apple’s operating systems.

To find out whether your machine is vulnerable, just open a terminal and paste the following code:

env x='() { :;}; echo vulnerable' bash -c "echo a shellshock"

If the output includes the word “vulnerable”, you must update your system to protect yourself from this bug; most Linux distributions have already published fixed versions of this package. If, on the other hand, the output is only “a shellshock”, your system is safe.
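The one-line test above can be wrapped in a small convenience script that prints a plain verdict. This is just a sketch around the same probe, not part of the original advisory:

```shell
#!/bin/sh
# Shellshock probe: on a vulnerable bash, the function definition stored in
# the environment variable executes its trailing "echo vulnerable" at startup.
if env x='() { :;}; echo vulnerable' bash -c "true" 2>/dev/null | grep -q vulnerable; then
  echo "Bash is VULNERABLE to Shellshock - update your system now"
else
  echo "Bash appears to be patched"
fi
```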

References:


Tagged: bash, shellshock, vulnerability

16 October, 2014 07:30PM by sinfallas


Ubuntu developers

Nicholas Skaggs: Final testing for Utopic

The final images of what will become utopic are here! Yes, in just one short week utopic unicorn will be released into the world. Celebrate this exciting release and be among the first to run utopic by helping us test!

We need your help and test results, both positive and negative. Please head over to the milestone on the isotracker, select your favorite flavor, and perform the needed tests against the images.

If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help.

Thank you for helping to make ubuntu better! Happy Testing!

16 October, 2014 04:44PM by Nicholas Skaggs (noreply@blogger.com)

hackergotchi for Tails

Tails

Tails 1.2 is out

Tails, The Amnesic Incognito Live System, version 1.2, is out.

This release fixes numerous security issues and all users must upgrade as soon as possible.

Changes

Notable user-visible changes include:

  • Major new features

    • Install (most of) the Tor Browser, replacing our previous Iceweasel-based browser. The version installed is from TBB 4.0 and is based on Firefox 31.2.0esr. This fixes the POODLE vulnerability.
    • Upgrade Tor to 0.2.5.8-rc.
    • Confine several important applications with AppArmor.
  • Bugfixes

    • Install Linux 3.16-3 (version 3.16.5-1).
  • Minor improvements

    • Upgrade I2P to 0.9.15, and isolate I2P traffic from the Tor Browser by adding a dedicated I2P Browser. Also, start I2P automatically upon network connection, when the i2p boot option is added.
    • Make it clear that TrueCrypt will be removed in Tails 1.2.1 (ticket #7739), and document how to open TrueCrypt volumes with cryptsetup.
    • Enable VirtualBox guest additions by default (ticket #5730). In particular this enables VirtualBox's display management service.
    • Make the OTR status in Pidgin clearer thanks to the formatting toolbar (ticket #7356).
    • Upgrade syslinux to 6.03-pre20, which should fix UEFI boot on some hardware.

See the online Changelog for technical details.

Known issues

As no software is ever perfect, we maintain a list of problems that affect the latest release of Tails.

I want to try it or to upgrade!

Go to the download page.

What's coming up?

The next Tails release is scheduled for November 25.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

16 October, 2014 10:34AM

hackergotchi for SolydXK

SolydXK

SSL 3.0 vulnerability a.k.a. “POODLE”

A vulnerability has been found in the SSL protocol version 3.0.

One of the moderators on the Linux Mint forum has created a very good write-up about it and we decided it would be appropriate to link to it here: http://forums.linuxmint.com/viewtopic.php?f=17&t=180418

A fix for the OpenSSL package, disabling the SSLv3 protocol, is expected to become available soon.

16 October, 2014 05:58AM by Arjen Balfoort (Schoelje)

October 15, 2014


Ubuntu developers

Mythbuntu: Actions required by Nov 1st due to Schedules Direct change

The following announcement will affect users using the Schedules Direct service to get guide data, including but not limited to USA and Canada.

On November 1st, 2014, the existing SD service is changing. 

We have been informed that Gracenote (formerly Tribune Media Services) will be ending the guide data service currently used by most users of Schedules Direct. Their plan is to end support for this service on November 1, 2014.

A service is being developed to mimic the DataDirect feed. It has most, but not all, of the data currently in the DataDirect feed and will be updated daily.

What does this mean for Schedules Direct?

The guide data provider (Gracenote) that Schedules Direct uses is changing how it presents guide data to users. Schedules Direct has taken it upon themselves to write a server-side compatibility layer so existing applications will continue to get guide data. This does require a change in the URL that applications use to download guide data, which is why an update to MythTV is necessary.

What does this mean to you as a user?

If you have a paid subscription to Schedules Direct, it will continue to work the way it has previously. A simple update to MythTV will be required for users on a supported version of MythTV.

Users that have enabled the MythTV Updates repo and are on a current version of MythTV and a supported version of Ubuntu will receive the fix for this via regular updates. The Mythbuntu team has always recommended enabling the MythTV Updates repo in the Mythbuntu Control Centre and staying up to date on fixes builds. The fix for this issue was added to our packages in the versions in the below table. More information on the Mythbuntu provided MythTV Update repo can be found here

Users on builds prior to 0.27 (e.g. 0.26, 0.25) will need to either upgrade to a supported build version (see Mythbuntu Repos) or use one of the workarounds (see MythTV Wiki).

MythTV Version      Fixed in version
0.28 (development)  2:0.28.0~master.20141013.4cb10e5-0ubuntu0mythbuntu#
0.27.X              2:0.27.4+fixes.20141015.e4f65c8-0ubuntu0mythbuntu#
Prior to 0.27.X     WILL NOT BE FIXED, please either update or see the MythTV Wiki for a workaround


For more information on this issue, please see the writeup on the MythTV wiki. Questions can be directed to the MythTV-Users mailing list

15 October, 2014 09:24PM by Thomas Mashos (thomas@mashos.com)

Randall Ross: Writing About Ubuntu? Own Your Own Content

A friend of mine sent me a link from her "+" account last night, publicizing a fundraising effort...

Admittedly, I've never been impressed with "+", so I rarely (if ever) look at it. Because she was a friend, and I like to help friends, I decided to go in and see what the link was about. I ended up staying longer than I originally planned and took a look around.

What did I see? I saw a lot of people who used to make Planet Ubuntu a lively, exciting, and vibrant place writing prolifically on "+" instead. Sadly and disappointingly, they rarely post on Planet these days.

Are you one of these people?

Friends, do consider the effect of the following:

When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services ...
(Source: http://www.google.com/intl/en/policies/terms/)

Something smells wrong with this.

Friends, it's really not that difficult to host a blog and to use a more respectful service. I hope you'll consider that one small step in the spirit of not becoming the product, or even better, in the spirit of making Planet Ubuntu *the* place for Ubuntu happenings.

--
image by Terry O'Fee
https://www.flickr.com/photos/tmofee/

15 October, 2014 02:15PM

Jonathan Riddell: Ubuntu's Linux Scheduler or Why Baloo Might be Slowing Your System in 14.04

KDE Project:

Last month I posted about packaging and why it takes time. I commented that the Stable Release Update process could not be rushed because a regression is worse than a known bug. Then last week I was pointed to a problem where Baloo was causing a user's system to run slow. Baloo, KDE's new indexer and the faster replacement for Nepomuk, indexes all your files so that you can easily search for them. Baloo has been written to be as lightweight as these things can be by using IONice, a feature of Linux which allows processes to say "this isn't very important, let everyone else go first".

Except IONice wasn't working. It turns out Ubuntu changed the default Linux I/O scheduler from CFQ to Deadline, which doesn't support IONice. Kubuntu devs, who had been looking at this for some time, had already worked out how to change it back to the upstream default in our development version Utopic and in the backports packages we put on Launchpad. Last week we uploaded it as a proposed Stable Release Update and, as expected, the SRU team was sceptical. We should have been faster with the SRU, which is our fault; they're there to be sceptical, but the only change here is to go back to the upstream default. After much wondering why it was changed in the first place, it seems that Unity was having problems with the CFQ scheduler, so it was changed. Now we have suggestions that Baloo should be changed to adapt to that, which is crazy. Nobody seems to have considered fixing Unity, or that changing the scheduler in the first place would affect software outside of Unity. We tried taking the issue to the Ubuntu Technical Board, but their meeting didn't happen this week.

So alas no fix in the immediate future, if it bothers you best use Kubuntu Backports. When someone on the SRU team is brave enough to approve it into -proposed we'll put out a call for testers and it'll get into -updates eventually. It's what happens when you have a large project like Ubuntu with many competing demands, but it would be nice if the expectation was on Unity to get fixed rather than on Kubuntu to deal with the bureaucracy to workaround their workarounds.
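If you want to check which I/O scheduler your own machine is using, sysfs shows the available schedulers per block device, with the active one in square brackets. Device names vary; `sda` below is only an example:

```shell
# List the available I/O schedulers for each block device; the one shown in
# square brackets (e.g. "noop [deadline] cfq") is the currently active one.
for f in /sys/block/*/queue/scheduler; do
  [ -r "$f" ] && echo "$f: $(cat "$f")"
done

# To switch a single device back to CFQ until the next reboot (assuming the
# cfq scheduler is available in your kernel), you would run something like:
#   echo cfq | sudo tee /sys/block/sda/queue/scheduler
```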

15 October, 2014 11:35AM

Ubuntu App Developer Blog: How to customize and brand your scope

Scopes come with a very flexible customization system. From picking the text color to rearranging how results are laid out, a scope can easily look like a generic RSS reader, a music library or even a store front.

In this new article, you will learn how to make your scope shine by customizing its results, changing its colors, adding a logo and adapting its layout to present your data in the best possible way. Read…


15 October, 2014 11:14AM

Raphaël Hertzog: Freexian’s second report about Debian Long Term Support

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors have been paid for 11h each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors, which is great. We’re now at almost 25% of a full-time position, but we’re not done yet. We believe that we would need at least twice as many sponsored hours to do reasonable work on at least the most used packages, and possibly four times as many to be able to cover the full archive.

We’re now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian did handle 11 during last month (this gives an approximate rate of 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort?

The list of sponsors contains almost exclusively companies from Europe. It’s true that Freexian’s offer is in Euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked whether having an offer in dollars would help convince other companies, we got zero feedback.

What are the main obstacles that you face when you try to convince your managers to get the company to contribute?

By the way, we prefer that companies make small sponsorship commitments they can afford over multiple years, rather than granting lots of money now and then not being able to afford it the following year.

Thanks to our sponsors

Let me thank our main sponsors:

15 October, 2014 07:45AM


SolydXK

UP 2014.10.15

On 15 October 2014 we have synchronized the production repositories.

If you haven’t upgraded from the testing repositories, you will have between 500MB and 1GB of packages to download, depending on whether you’re a SolydX or SolydK user.

After the go-live, I’ll start building the new iso’s.

Downloads may be slow if many people upgrade at the same time; if so, try again later.

If you are following our mirrors: they will follow later, and I will keep you posted on their progress.

If you encounter any problems, post your findings here: http://forums.solydxk.com/viewtopic.php?f=32&t=4853

Below you find the Update Manager’s UP information page.
For users running BE/BO: package updates are few. You can install these package as a regular update.


Update Pack 2014.10.15

Changes

  • Kernel
    The kernel has been updated to version 3.16.3-2.
  • LibreOffice
    LibreOffice has been updated to version 4.3.1-2.
  • KDE
    KDE has been updated to version 4.14.1-1.


Update information

Read this whether you update with the Update Manager or with the terminal.

  • After the upgrade
    After the upgrade you won’t be able to reboot or shut down the system the usual way; you need to do that from a terminal. After that, everything will function as expected.
    sudo reboot

    or

    sudo shutdown -h now

  • These are some warnings you can safely ignore:
    GdkPixbuf-WARNING **: Cannot open pixbuf loader module file ‘/usr/lib/x86_64-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory


Updating with the Update Manager

The Update Manager can be opened from the system tray by clicking on its icon.
Most user choices are being done for you by the Update Manager (see “Update information” above).


Updating with the terminal

We recommend using the Update Manager to update your system, but if you prefer using the terminal, please read the below steps carefully.

  • Pre upgrade
    There are some things that need to be cleaned up first (leftovers from the LMDE period), so it’s best to download and run the pre-UP script:

    wget http://repository.solydxk.com/umfiles/prd/pre-up-2014.10.15; chmod +x pre-up-2014.10.15; sudo ./pre-up-2014.10.15

  • Configuration files
    You might be asked to replace certain configuration files. It is recommended to keep the currently installed file (default selected).
  • Post upgrade
    If you’re using Nvidia, you must run the following command before rebooting:
    sudo apt-get install --reinstall nvidia-kernel-dkms

    If you have systemd-sysv installed, you need to purge systemd-shim:

    sudo apt-get purge systemd-shim

    and finally, remove some unneeded symbolic links:

    sudo update-rc.d -f samba remove

 

15 October, 2014 06:57AM by Arjen Balfoort (Schoelje)


Ubuntu developers

Bryan Quigley: Want to disable SSL 3.0 for non-technical users…

I just created an add-on that literally just changes the one bit* needed to disable SSL 3.0 support in Firefox.

You can get it here: https://addons.mozilla.org/en-US/firefox/addon/disable-ssl-30/

*It’s trivial to do in about:config, yet I don’t really want to recommend that to anyone.
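For reference, the preference in question (reachable manually via about:config) appears to be the one below; the comments reflect my understanding of the Firefox 31-era values for this setting:

```ini
; security.tls.version.min controls the minimum protocol version Firefox
; will negotiate: 0 = SSL 3.0, 1 = TLS 1.0, 2 = TLS 1.1, 3 = TLS 1.2.
; Raising it from 0 to 1 disables SSL 3.0 and mitigates POODLE.
security.tls.version.min = 1
```

Setting this by hand achieves the same effect as installing the add-on.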

15 October, 2014 04:56AM

Scarlett Clark: Kubuntu: KDE 4.14.2 Release ready in PPA

We have finished packaging KDE 4.14.2 release.
We have also backported to Trusty LTS!
KDE announcement can be found here:

KDE 4.14.2 Release notes
Kubuntu Release with install instructions can be found here:
Kubuntu KDE 4.14.2 Release

15 October, 2014 12:32AM

Kubuntu: KDE Applications and Development Platform 4.14.2

Packages for the release of KDE SC 4.14.2 are available for Kubuntu 14.04LTS and our development release. You can get them from the Kubuntu Backports PPA, and the Kubuntu Utopic Updates PPA

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

15 October, 2014 12:21AM

October 14, 2014

Julian Andres Klode: Key transition

I started transitioning from 1024D to 4096R. The new key is available at:

https://people.debian.org/~jak/pubkey.gpg

and the keys.gnupg.net key server. A very short transition statement is available at:

https://people.debian.org/~jak/transition-statement.txt

and included below (the http version might get extended over time if needed).

The key consists of one master key and 3 sub keys (signing, encryption, authentication). The sub keys are stored on an OpenPGP v2 Smartcard. That’s really cool, isn’t it?

Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.

The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.

the old key was:

pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2

And the new key is:

pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----

Filed under: Uncategorized

14 October, 2014 09:46PM

Randall Ross: Why Smart Phones Aren't - Reason #3

I love movies. I especially love seeing movies in an old-fashioned movie theatre. The smell of popcorn. The immersiveness. The whole sensory experience. Well, almost...

Why oh why must I, my friends, and my family be subjected to nonsense warnings that precede every movie shown in a theatre? You know the ones: "Silence your phone", "Silence is golden", "It only takes one phone call to ruin a movie", etc, etc.

Even with all that preamble, there is inevitably someone at the theatre that ignores it, or is too distracted by their phone to see the warning. So, the messages are largely ineffective. Oh, the irony!

Let's think about this for a minute. According to the MPAA, "More than two thirds of the U.S./Canada population...227.8 million people went to the movies at least once in 2013"
(Source: http://www.mpaa.org/wp-content/uploads/2014/03/MPAA-Theatrical-Market-St...)

Let's take the most conservative view of this statistic. Assume that the total number of person-movies that year was 227.8 million. And, let's also assume that each one of these movies was preceded by a 10-second "Silence your cell phone" message.

That amounts to over 632,000 hours, or 26,365 days, or 72 years of lost time, in one year. "Smart" phone manufacturers, this is a problem you could have solved years ago. For just how many years has this been a solvable problem? My guess is 10.

"Smart" phone manufacturers, you are wasting my time. You are forcing theatres to air useless reminders and distractions. In economic terms, that's called an externality: pushing the costs onto others so you don't have to incur them yourself.

That's right: 72 years wasted each year, for the roughly 10 years this has been solvable. 720 years lost, in North America alone.

Stop this nonsense. Humanity has better things to do.

I'm sorry "smart" phones. You are as dumb as the day you were born. Think about it. It's really not that hard. Don't be fooled by the name. Movie theatres don't move. You know when you're inside one. Maybe it's time to pay attention?!

With the upcoming Ubuntu Phones, perhaps we, the people that believe in our shared humanity, can give back humanity this precious time it needs to get on with life and perhaps the chance to use this time to solve just one problem to make the world a better place...

---
Our best chance at a phone that respects humanity is here:
http://www.ubuntu.com/phone

More reasons "smart phones aren't" are here:
http://blog.josephliau.com/documenting-the-death-of-the-dumb-telephone-p...
http://randall.executiv.es/dumbphones02
http://randall.executiv.es/dumbphones01

---
image by daniel
https://www.flickr.com/photos/number657/

14 October, 2014 05:44PM

hackergotchi for Cumulus Linux

Cumulus Linux

OpenStack and Cumulus Linux: Two Great Tastes that Taste Great Together

OpenStack is a very popular open source technology stack used to build private and public cloud computing platforms. It powers clouds for thousands of companies like Yahoo!, Dreamhost, Rackspace, eBay, and many more.

What drives its popularity? Being open source, it puts cloud builders in charge of their own destiny, whether they choose to work with a partner or deploy it themselves. Because it is Linux based, it is highly amenable to automation, whether you’re building out your network or running it in production. At build time, it’s great for provisioning, installing and configuring the physical resources. In production, it’s just as effective, since provisioning tenants, users, VMs, virtual networks and storage is done via self-service web interfaces or automatable APIs. Finally, it’s always been designed to run well on commodity servers, avoiding reliance on proprietary vendor features.

Cumulus Linux fits naturally into an OpenStack cloud, because it shares a similar design and philosophy. Built on open source, Cumulus Linux is Linux, allowing common management, monitoring and configuration on both servers and switches. The same automation and provisioning tools that you commonly use for OpenStack servers can also be used, unmodified, on Cumulus Linux switches, giving you a single pane of glass for automation and monitoring. And Cumulus Linux runs on a wide variety of hardware from 5 different hardware manufacturers, so you can apply the same multi-vendor, commodity approach to procuring your network that you use for your server hardware.

The Cumulus Linux OpenStack Validated Solution Guide will show you how to build OpenStack clouds ranging from a simple, single rack proof of concept to a full, scalable data center cloud environment. Installing and configuring both the network and the servers is fully automated; once the servers and switches are racked and cabled, simply insert a USB drive with the Cumulus Linux installer image into one of the switches, and power on the cluster. The switches will install Cumulus Linux using ONIE, then configure themselves using zero touch provisioning (ZTP). The switches then PXE install Linux onto the servers, and use Puppet to install and configure the various OpenStack components, such as Nova, Nova-net, Glance, Cinder, Keystone and Horizon. In mere minutes after powering on the cluster, you’ll be starting VMs on your new OpenStack cloud!

The post OpenStack and Cumulus Linux: Two Great Tastes that Taste Great Together appeared first on Cumulus Networks Blog.

14 October, 2014 05:30PM by Nolan Leake


Ubuntu developers

Michael Hall: Unity 8 Desktop

This is a guest post from Will Cooke, the new Desktop Team manager at Canonical. It’s being posted here while we work to get a blog set up on unity.ubuntu.com, which is where you can find out more about Unity 8 and how to get involved with it.

Intro

Understandably, most of the Ubuntu news recently has focused around phones. There is a lot of excitement and anticipation building around the imminent release of the first devices.  However, the Ubuntu Desktop has not been dormant during this time.  A lot of thought and planning has been given to what the desktop will become in the future; who will use it and what they will use it for.  All of the work going into the phone will be directly applicable to the desktop as well, since they will use the same code.  All the apps, the UI tweaks, and everything that makes applications secure and stable will apply directly to the desktop.  The plan is to have the single converged operating system ready for use on the desktop by 16.04.

The plan

We learned some lessons during the early development of Unity 7. Here’s what happened:

  • 11.04: New Unity as default
  • 11.10: New Unity version
  • 12.04: Unity in First LTS

What we’ve decided to do this time is to keep the same stable Unity 7 desktop as the default, while offering users who want to opt in to Unity 8 the option to use that desktop. As development continues, the Unity 8 desktop will get better and better.  It will benefit from many of the advances which have come about through the development of the phone OS, and from continual improvements as releases happen.

  • 14.04 LTS: Unity 7 default / Unity 8 option for the first time
  • 14.10: Unity 7 default / Unity 8 new rev as an option
  • 15.04: Unity 7 default / Unity 8 new rev as an option
  • 15.10: Potentially Unity 8 default / Unity 7 as an option
  • 16.04 LTS: Unity 8 default / Unity 7 as an option

As you can see, this gives us a full 2 cycles (in addition to the one we’ve already done) to really nail Unity 8 with the level of quality that people expect. So what do we have?

How will we deliver Unity 8 with better quality than 7?

Continuous Integration is the best way for us to achieve and maintain the highest quality possible.  We have put a lot of effort in to automating as much of the testing as we can, the best testing is that which is performed easily.  Before every commit the changes get reviewed and approved – this is the first line of defense against bugs.  Every merge request triggers a run of the tests, the second line of defense against bugs and regressions – if a change broke something we find out about it before it gets in to the build.

The CI process builds everything in a “silo”, a self contained & controlled environment where we find out if everything works together before finally landing in the image.

And finally, we have a large number of tests which run against those images. This really is a “belt and braces” approach to software quality, and it all happens automatically.  As you can see, we are taking the quality of our software very seriously.

What about Unity 7?

Unity 7 and Compiz have a team dedicated to maintenance and bug fixes, so their quality continues to improve with every release.  For example: windows switching workspaces when a monitor gets unplugged is fixed; mice with 6 buttons now work; and support for the new version of Metacity (in case you want to use the GNOME 2 desktop) has been added (incidentally, a lot of that work was done by a community contributor - thanks Alberts!)

Unity 7 is the desktop environment for a lot of software developers, devops gurus, cloud platform managers and millions of users who rely on it to help them with their everyday computing.  We don’t want to stop you being able to get work done.  This is why we continue to maintain Unity 7 while we develop Unity 8.  If you want to take Unity 8 for a spin and see how it’s coming along, you can; if you want to get your work done, we’re making that experience better for you every day.  Best of all, both of these options are available to you with no detriment to the other.

Things that we’re getting in the new Ubuntu Desktop

  1. Applications decoupled from the OS updates.  Traditionally a given release of Ubuntu has shipped with the versions of the applications available at the time of release.  Important updates and security fixes are back-ported to older releases where required, but generally you had to wait for the next release to get the latest and greatest set of applications.  The new desktop packaging system means that application developers can push updates out when they are ready and the user can benefit right away.
  2. Application isolation.  Traditionally applications can access anything the user can access; photos, documents, hardware devices, etc.  On other platforms this has led to data being stolen or rendered otherwise unusable.  Isolation means that without explicit permission any Click packaged application is prevented from accessing data you don’t want it to access.
  3. A full SDK for writing Ubuntu apps.  The SDK which many people are already using to write apps for the phone will allow you to write apps for the desktop as well.  In fact, your apps will be write once run anywhere – you don’t need to write a “desktop” app or a “phone” app, just an Ubuntu app.

What we have now

The easiest way to try out the Unity 8 desktop preview is to use the daily Ubuntu Desktop Next live image: http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/  This will allow you to boot into a Unity 8 session without touching your current installation.  An easy ten-step way to write this image to a USB stick is:

  1. Download the ISO
  2. Insert your USB stick in the knowledge that it’s going to get wiped
  3. Open the “Disks” application
  4. Choose your USB stick and click on the cog icon on the right-hand side
  5. Choose “Restore Disk Image”
  6. Browse to and select the ISO you downloaded in #1
  7. Click “Start restoring”
  8. Wait
  9. Boot and select “Try Ubuntu….”
  10. Done *
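For those comfortable on the command line, the ten GUI steps above can be collapsed into a single dd invocation. This is a sketch only: the ISO filename and the /dev/sdX device name below are placeholders you must replace, and writing to the wrong device will destroy its contents, so the script prints the command instead of running it.

```shell
# Sketch only: substitute your actual ISO name and USB device.
ISO="ubuntu-desktop-next-daily-live-amd64.iso"   # placeholder filename
DEV="/dev/sdX"                                   # placeholder device; verify with lsblk first!

# Print the command rather than executing it, so nothing is wiped by accident:
echo "sudo dd if=$ISO of=$DEV bs=4M && sync"
```

Once you have double-checked the device name, run the printed command yourself; the trailing sync ensures all blocks are flushed before you unplug the stick.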

* Please note – there is currently a bug affecting the Unity 8 greeter which means you are not automatically logged in when you boot the live image.  To log in you need to:

  1. Switch to vt1 (ctrl-alt-f1)
  2. type “passwd” and press enter
  3. press enter again to set the current password to blank
  4. enter a new password twice
  5. Check that the password has been successfully changed
  6. Switch back to vt7 (ctrl-alt-f7)
  7. Enter the new password to login
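The workaround above can also be done non-interactively with chpasswd instead of passwd. The username “ubuntu” below is an assumption about the live session’s default user, so adjust it if yours differs; as with the dd example, the script only prints the command so you can review it before running it on vt1.

```shell
LIVE_USER="ubuntu"        # assumption: default live-session username
NEW_PASS="changeme"       # pick your own password

# From vt1 (Ctrl+Alt+F1) you could run this directly; here we just print it:
echo "echo '$LIVE_USER:$NEW_PASS' | sudo chpasswd"
```

After running the printed command, switch back to vt7 and log in with the new password as described in the steps above.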


Here are some screenshots showing what Unity 8 currently looks like on the desktop:


The team

The people working on the new desktop come from a few different disciplines.  We have a team dedicated to Unity 7 maintenance and bug fixes, who are also responsible for Unity 8 on the desktop and feed a lot of support into the main Unity 8 & Mir teams.  We have the Ubuntu Desktop team, who are responsible for many aspects of the underlying technologies used, such as GNOME libraries, settings, and printing, as well as key desktop applications such as LibreOffice and Chromium.  The Ubuntu Desktop team has some of the longest-serving members of the Ubuntu family, with some people having been here for the best part of ten years.

How you can help

We need to log all the bugs which need to be fixed in order to make Unity 8 the best desktop there is.  Firstly, we need people to test the images and log bugs.  If developers want to help fix those bugs, so much the better.  Right now we are focusing on identifying where the work done for the phone doesn’t work as expected on the desktop.  Once those bugs are logged and fixed we can rely on the CI system described above to make sure that they stay fixed.

Link to daily ISOs:  http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/

Bugs:  https://bugs.launchpad.net/ubuntu/+source/unity8-desktop-session

IRC:  #ubuntu-desktop on Freenode

14 October, 2014 04:42PM

Serge Hallyn: Live container migration – on its way

The criu project has been working hard to make application checkpoint/restart feasible. Tycho has implemented lxc-checkpoint and lxc-restart on top of that (as well as of course contributing the needed bits to criu itself), and now shows off first steps toward real live migration: http://tycho.ws/blog/2014/09/container-migration.html

Excellent!


14 October, 2014 12:42PM

hackergotchi for Whonix

Whonix

Qubes + Whonix 9 and more!

Since my original release of Qubes + Whonix back in late August 2014, some interesting developments have happened that I’m excited to share with everyone!

Qubes + Whonix Primary Sources:
=======================

The primary sources of Qubes + Whonix information are located at:

- User Documentation: whonix.org/wiki/Qubes

- Dedicated Forum: whonix.org/forum/Qubes

Qubes + Whonix Summary:
==================

First a summary of what Qubes + Whonix is about…

Whonix (whonix.org) is a Debian-based OS that, like Tails or TorVM, torifies all of your internet traffic at the OS level, preventing remote leaks of unique identifiers such as your IP address, MAC address, and hardware serials. It is designed with hardcore anonymity threat models in mind.

Qubes (qubes-os.org) is a security-focused, user-friendly virtualization platform based on Xen, which offers hardcore isolation of your system-level resources and VM desktops, even helping to prevent serious endpoint attacks such as kernel compromises, BadUSB, and Evil Maid.

Qubes + Whonix is the beautiful marriage of these two hardcore security- and anonymity-focused platforms, with the aim of integrating the best in endpoint security and internet torification. Qubes + Whonix runs as dual VMs inside Qubes, isolating the Whonix-Workstation (user desktop applications) from the Whonix-Gateway (Tor networking proxy), all within a single host machine.

Inside Host: Whonix-Workstation –> Whonix-Gateway –> Torified Internet

You can even establish multiple Whonix-Workstations and Whonix-Gateways for multiple independent and isolated Tor identity environments.

Qubes + Whonix News:
================

Now on to the news…

————————————
Whonix 9 Availability:
————————————

Whonix 9 was recently released, bringing several system-level improvements over the prior Whonix 8.2 and helping us further streamline our Qubes + Whonix implementation.

Qubes + Whonix 9 is now supported and available with step-by-step install guides here:

https://www.whonix.org/wiki/Qubes

——————————————————————-
New Whonix Source Code Install Guide:
——————————————————————-

In addition to our step-by-step install guide for importing the Whonix binary images, we now offer a new step-by-step guide for installing from Whonix source code.

This is a great option for those who would prefer not to trust binary VM images or who would like to customize their build of Whonix.

———————————————-
New Whonix Qubes Forum:
———————————————-

At the personal request of Patrick Schleizer (Whonix founder), I have become the official maintainer of Qubes + Whonix for the Whonix community.

Along with this, we have recently launched a new dedicated forum space for Qubes + Whonix community, support, and development. It is being hosted as part of the Whonix forums at:

https://www.whonix.org/forum/Qubes

Over the past few weeks, several people from around the world have begun learning about Qubes + Whonix, installing it on their computers, and getting excited about the advantages of the newly combined platform.

Feel free to come join us and help improve the Qubes + Whonix platform! :)

—————————————————————-
New ProxyVM + AppVM Development:
—————————————————————-

My initial port of Whonix to Qubes was achieved mere weeks ago, in late August 2014. The initial focus then was just on getting it up and running, so it was a barebones implementation that included a number of compromises. The primary compromise was that I used a dual standalone HVM (HardwareVM) architecture in Qubes for the Whonix-Gateway and Whonix-Workstation.

I’m happy to announce that we have an awesome contributor/developer, nicknamed “nrgaway”, who got inspired after seeing my initial Qubes + Whonix release and is now actively working to take the architecture of Qubes + Whonix to the next level.

The optimal Qubes architecture for Whonix is not to use dual HVMs, but, rather to utilize the native Qubes ProxyVM + AppVM configuration.

Our new hero, nrgaway, is actively working on implementing Qubes + Whonix as a native ProxyVM + AppVM configuration. The Whonix-Workstation will be the desktop AppVM that connects through the Whonix-Gateway as a torifying ProxyVM inside of the ultra secure Qubes virtualization platform.

The big benefits of this new ProxyVM + AppVM architecture will likely be:

  • Easy and fast GUI-based setup of new Whonix VMs from pre-configured templates
  • Native integration with Qubes’ user-friendly desktop features, like:
    • Native Qubes application-isolated desktop windows
    • Application shortcut menus in the launcher
    • Dynamically resizable application windows
    • A secure VM-to-VM file move/copy user interface
    • Easy GUI-based start/stop of Whonix VMs

We are supporting and cheering nrgaway on in his continued awesome work to develop this next paradigm shift for Qubes + Whonix that all of us will greatly benefit from!

You can follow along and join us in furthering this exciting development work in the Whonix Qubes forum here…

ProxyVM + AppVM Development thread:

https://www.whonix.org/forum/index.php/topic,537.0.html

———————————————————————————–
Genuine Interest in Official Qubes OS Integration:
———————————————————————————–

Joanna Rutkowska (Qubes founder) has much appreciated our Qubes + Whonix work.

As recently announced with their Qubes R2 final release, the Qubes team is now officially working with an esteemed board member of the Tor Project, privacy expert Mr. Caspar Bowden, to further the adoption and optimization of Qubes as a strong platform for privacy services and applications.

And, along these same lines, Joanna has expressed interest to me in integrating Whonix and TorVM into the official Qubes OS distro as super simple, clickable user experiences, pre-installed and pre-configured, for easy OS-level torification.

Our above-mentioned ProxyVM + AppVM development work with nrgaway will likely be a big leap forward toward realizing Joanna’s vision of integrating Whonix into the official, user-friendly Qubes GUI installer.

=====================

So there you have it…

- Qubes + Whonix 9 is now available.

- A new step-by-step source code install guide, alongside the binary images.

- A new dedicated community forum for the Qubes + Whonix platform.

- A new paradigm of ProxyVM + AppVM architecture is being developed.

- Hardcore Whonix torification may be coming to a Qubes installer near you.

Very exciting times for Qubes + Whonix as a super secure Tor platform! Join us! :)

WhonixQubes

The post Qubes + Whonix 9 and more! appeared first on Whonix.

14 October, 2014 12:38PM by WhonixQubes

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Server blog: Server team meeting minutes: 2014-10-07

Agenda

  • Review ACTION points from previous meeting

ACTION: all to review blueprint work items before next week’s meeting

  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Final Freeze 9 days out
  • Check on FTBFS packages — seems like there has been good progress
  • Make sure work items are up to date; if resources are needed, now is the time to ask.
  • Release bugs: no high-priority ones; juju MIRs and OpenStack bits are being worked on.
  • kickinz1 brought up two bcache bugs (LP #1377130 and LP #1377142) to the kernel team for help.
Meeting Actions

None

Agree on next meeting date and time

Next meeting will be on Tuesday, Oct 14th at 16:00 UTC in #ubuntu-meeting.

IRC Log

http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-10-07-16.03.html

14 October, 2014 06:51AM

hackergotchi for Cumulus Linux

Cumulus Linux

Dude… cover me, I’m going in…

You know it needs to be done, it could be easy… or it could get messy, and you’re sure that the world will be a better place when you’re finished.

That’s the dilemma that some of our enterprise customers have when grappling with “the cloud.”

We’ve noticed a distinct trend among customers that grew up outside the “cloud era”; they’ve been trying to bolt “cloud” onto their legacy IT blueprint and it has been a struggle. They expected to realize operational and capital efficiencies that approximate high scale Internet businesses. Unfortunately, they are missing by a long shot.

At some point along the way, these customers realize that they need to be willing to drive structural change. They need to create a “cloud blueprint” for their applications and IT infrastructure. In some cases, this means a transition to public/hosted infrastructure; in other cases, it means building new private infrastructure based on cloud principles. In many cases, it’s a mixture of both.

When private cloud is part of the answer, we’ve consistently found design patterns built on infrastructure platforms like VMware vSphere and OpenStack and big data platforms such as Hortonworks.  Customers want to get these services operational quickly so they often stay with legacy practices, with full knowledge that these practices are breaking their backs.

Our job is to be “Dude”; to help provide the cover these customers need to make it out unscathed. To that end, we’ve pulled together features, partnerships, and validated solutions that allow them to be successful in their deployments whether they initially use a modern network architecture or leverage a traditional network architecture.

We empower companies of all sizes to reap the benefits of modern networking in their data centers by taking steps instead of taking a leap.  This is why Cumulus Networks customers are so excited about open networking.

The post Dude… cover me, I’m going in… appeared first on Cumulus Networks Blog.

14 October, 2014 05:01AM by JR Rivers

Accelerating Hadoop With Cumulus Linux

One of the questions I’ve encountered in talking to our customers has been “What environments are a good example of working on top of the Layer 3 Clos design?”  Most engineers are familiar with the classic Layer 2 core/distribution/access/edge model for building a data center.  While that model served us well in the older client-server, north-south traffic-flow era and in smaller deployments, modern distributed applications stress it to its breaking point.  Since Layer 2 designs normally need to be built around pairs of devices, relying on individual platforms to carry 50% of your data center traffic presents a risk at scale.  On top of this, the long list of protocols required can result in a brittle and operationally complex environment as you deploy tens of devices.

Hence the rise of the Layer 3 Clos approach, which combines many small boxes, each carrying only a subset of your traffic, running industry-standard protocols with a long history of operational stability and ease of troubleshooting.  And while the approach can be applied to many different problems, building a practical implementation is the best way to prove it out.  With that in mind, we recently set up a Hadoop cluster, leading to a solution validation guide we are publishing with our new release.

Big Data analytics is becoming increasingly common across businesses of all sizes.  With the growth of genomic, geographic, social-graph, search-indexing and other large data sources, the ability of a single computer to process these sets in a reasonable time has diminished.  Distributed processing models like Hadoop have increasingly become the way to approach such data, breaking the processing into steps that can be distributed, along with the data, across the compute nodes.

Many published Hadoop solutions have been built around the assumption that the network is expensive, so they focus on 1 Gigabit Ethernet attached servers, stressing data locality to keep traffic on the same top-of-rack switch and optimizing to keep traffic off the network.  While even 10 Gigabit Ethernet cannot keep up with locally attached storage, it is now practical to build a low-to-no-oversubscription network fabric at 10 Gigabit and higher.  Combined with the fact that most Big Data class servers now ship with integrated 10 Gigabit Ethernet on the motherboard (LOM), the price of such a fabric rivals solutions built around 1 Gigabit Ethernet, and it frees your Big Data results from being tied to the locality of the data in your environment.
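To make “low-to-no oversubscription” concrete, here is a worked example for a hypothetical leaf switch; the port counts are illustrative, not a specific Cumulus reference design.

```shell
# Hypothetical leaf switch: 48 x 10G server-facing ports, 6 x 40G uplinks.
DOWNLINK_GBPS=$((48 * 10))   # 480 Gbps of capacity toward the servers
UPLINK_GBPS=$((6 * 40))      # 240 Gbps of capacity toward the spine

echo "downlink capacity: ${DOWNLINK_GBPS} Gbps"
echo "uplink capacity:   ${UPLINK_GBPS} Gbps"
echo "oversubscription ratio: $((DOWNLINK_GBPS / UPLINK_GBPS)):1"
```

This hypothetical box is 2:1 oversubscribed; adding more uplinks, or spreading servers across more leaves, pushes the fabric toward the 1:1 (non-blocking) design the post describes.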

When it came to building a network for Hadoop, we chose the enterprise-grade Hortonworks Data Platform (HDP) as the platform to stand up and test for a new validated solution.  Hortonworks, a major contributor to open source initiatives (Apache Hadoop, HDFS, Pig, Hive, HBase, ZooKeeper), has extensive experience managing production-level Hadoop clusters.  Given the open nature of both Cumulus Linux and HDP, we were able to stand up our environment quickly and validate operations on the topology.  As a follow-on to this project, by combining in the automation powers of tools like Ansible, we will have a demo in the Cumulus Workbench showing how you can automate both the network and server sides of the environment and deploy using a single tool.  Keep your eyes out for it.

The post Accelerating Hadoop With Cumulus Linux appeared first on Cumulus Networks Blog.

14 October, 2014 05:00AM by David Sinn