October 23, 2016

BunsenLabs Linux

October 22, 2016

Qlustar


[QSA-1022162] Security Bundle

Qlustar Security Advisory 1022162

October 22, 2016


Security bundle. A Qlustar security bundle is a cumulative update of packages taken from upstream Debian/Ubuntu without modification. Only packages used in a typical HPC/storage cluster installation are mentioned in Qlustar Security Advisories. Other non-HPC related updates also enter the Qlustar repository, but their functionality is not separately verified by the Qlustar team. To track these updates, subscribe to the general security mailing lists of Debian/Ubuntu.

    Package(s)       : see upstream description of individual package
    Qlustar releases : 9.1
    Affected versions: All versions prior to this update
    Vulnerability    : see upstream description of individual package
    Problem type     : see upstream description of individual package
    Qlustar-specific : no
    CVE Id(s)        : see upstream description of individual package

This update includes several security related package updates from Debian/Ubuntu. The following list provides references to the upstream security report of the corresponding packages. You can view the original upstream advisory by clicking on the corresponding title.

Bind vulnerability

Toshifumi Sakaguchi discovered that Bind incorrectly handled certain packets with malformed options. A remote attacker could possibly use this issue to cause Bind to crash, resulting in a denial of service.

NTP vulnerabilities

Aanchal Malhotra discovered that NTP incorrectly handled authenticated broadcast mode. A remote attacker could use this issue to perform a replay attack.

Matt Street discovered that NTP incorrectly verified peer associations of symmetric keys. A remote attacker could use this issue to perform an impersonation attack.

Jonathan Gardner discovered that the NTP ntpq utility incorrectly handled dangerous characters in filenames. An attacker could possibly use this issue to overwrite arbitrary files.

Stephen Gray discovered that NTP incorrectly handled large restrict lists. An attacker could use this issue to cause NTP to crash, resulting in a denial of service.

Aanchal Malhotra discovered that NTP incorrectly handled authenticated broadcast mode. A remote attacker could use this issue to cause NTP to crash, resulting in a denial of service.

Jonathan Gardner discovered that NTP incorrectly handled origin timestamp checks. A remote attacker could use this issue to spoof peer servers.

Jonathan Gardner discovered that the NTP ntpq utility did not properly handle certain incorrect values. An attacker could possibly use this issue to cause ntpq to hang, resulting in a denial of service.

It was discovered that the NTP cronjob incorrectly cleaned up the statistics directory. A local attacker could possibly use this to escalate privileges.

Stephen Gray and Matthew Van Gundy discovered that NTP incorrectly validated crypto-NAKs. A remote attacker could possibly use this issue to prevent clients from synchronizing.

Miroslav Lichvar and Jonathan Gardner discovered that NTP incorrectly handled switching to interleaved symmetric mode. A remote attacker could possibly use this issue to prevent clients from synchronizing.

Matthew Van Gundy, Stephen Gray and Loganaden Velvindron discovered that NTP incorrectly handled message authentication. A remote attacker could possibly use this issue to recover the message digest key.

Yihan Lian discovered that NTP incorrectly handled duplicate IPs on unconfig directives. An authenticated remote attacker could possibly use this issue to cause NTP to crash, resulting in a denial of service.

Yihan Lian discovered that NTP incorrectly handled certain peer associations. A remote attacker could possibly use this issue to cause NTP to crash, resulting in a denial of service.

Jakub Prokes discovered that NTP incorrectly handled certain spoofed packets. A remote attacker could possibly use this issue to cause a denial of service.

Miroslav Lichvar discovered that NTP incorrectly handled certain packets when autokey is enabled. A remote attacker could possibly use this issue to cause a denial of service.

Miroslav Lichvar discovered that NTP incorrectly handled certain spoofed broadcast packets. A remote attacker could possibly use this issue to cause a denial of service.

PHP vulnerabilities

Taoguang Chen discovered that PHP incorrectly handled certain invalid objects when unserializing data. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

Taoguang Chen discovered that PHP incorrectly handled invalid session names. A remote attacker could use this issue to inject arbitrary session data.

It was discovered that PHP incorrectly handled certain gamma values in the imagegammacorrect function. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

It was discovered that PHP incorrectly handled certain crafted TIFF image thumbnails. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly expose sensitive information.

It was discovered that PHP incorrectly handled unserializing certain wddxPacket XML documents. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

Taoguang Chen discovered that PHP incorrectly handled certain failures when unserializing data. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

It was discovered that PHP incorrectly handled certain flags in the MySQL driver. Malicious remote MySQL servers could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

It was discovered that PHP incorrectly handled ZIP file signature verification when processing a PHAR archive. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

It was discovered that PHP incorrectly handled certain locale operations. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

It was discovered that PHP incorrectly handled SplArray unserializing. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

Ke Liu discovered that PHP incorrectly handled unserializing wddxPacket XML documents with incorrect boolean elements. A remote attacker could use this issue to cause PHP to crash, resulting in a denial of service, or possibly execute arbitrary code.

Samba vulnerability

Stefan Metzmacher discovered that Samba incorrectly handled certain flags in SMB2/3 client connections. A remote attacker could use this issue to disable client signing and impersonate servers by performing a man in the middle attack.

Samba has been updated to 4.3.11. In addition to the security fix, the updated packages contain bug fixes, new features, and possibly incompatible changes.
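The advisory's fix is the package update itself, but Samba also lets clients refuse unsigned connections outright. A minimal smb.conf sketch using the standard `client signing` parameter (a generic Samba mitigation, not something prescribed by the advisory):

```ini
[global]
   # Refuse SMB2/3 connections on which the server will not sign traffic,
   # blocking the signing-downgrade man-in-the-middle described above.
   client signing = mandatory
```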

Bind vulnerability

It was discovered that Bind incorrectly handled building responses to certain specially crafted requests. A remote attacker could possibly use this issue to cause Bind to crash, resulting in a denial of service.

OpenSSL vulnerabilities

Shi Lei discovered that OpenSSL incorrectly handled the OCSP Status Request extension. A remote attacker could possibly use this issue to cause memory consumption, resulting in a denial of service.

César Pereida, Billy Brumley, and Yuval Yarom discovered that OpenSSL did not properly use constant-time operations when performing DSA signing. A remote attacker could possibly use this issue to perform a cache-timing attack and recover private DSA keys.

Quan Luo discovered that OpenSSL did not properly restrict the lifetime of queue entries in the DTLS implementation. A remote attacker could possibly use this issue to consume memory, resulting in a denial of service.

Shi Lei discovered that OpenSSL incorrectly handled memory in the TS_OBJ_print_bio() function. A remote attacker could possibly use this issue to cause a denial of service.

It was discovered that OpenSSL incorrectly handled the DTLS anti-replay feature. A remote attacker could possibly use this issue to cause a denial of service.

Shi Lei discovered that OpenSSL incorrectly validated division results. A remote attacker could possibly use this issue to cause a denial of service.

Karthik Bhargavan and Gaetan Leurent discovered that the DES and Triple DES ciphers were vulnerable to birthday attacks. A remote attacker could possibly use this flaw to obtain clear text data from long encrypted sessions. This update moves DES from the HIGH cipher list to MEDIUM.
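The cipher-list change can be checked with the stock openssl command-line tool; after the update, the 3DES suites (names containing DES-CBC3) should appear under MEDIUM rather than HIGH. A quick check, assuming the openssl binary is on the PATH (newer OpenSSL builds may have dropped 3DES from both lists entirely):

```shell
# Suites in the HIGH group; on a patched build this grep finds
# no 3DES (DES-CBC3) suites and the fallback message is printed.
openssl ciphers -v 'HIGH' | grep 'DES-CBC3' || echo "no 3DES in HIGH"

# The suites should instead appear under MEDIUM (or be absent entirely
# on newer OpenSSL versions that removed 3DES from the default list).
openssl ciphers -v 'MEDIUM' | grep 'DES-CBC3' || echo "no 3DES in MEDIUM"
```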

Shi Lei discovered that OpenSSL incorrectly handled certain ticket lengths. A remote attacker could use this issue to cause a denial of service.

Shi Lei discovered that OpenSSL incorrectly handled memory in the MDC2_Update() function. A remote attacker could possibly use this issue to cause a denial of service.

Shi Lei discovered that OpenSSL incorrectly performed certain message length checks. A remote attacker could possibly use this issue to cause a denial of service.

Libgcrypt vulnerability

Felix Dörre and Vladimir Klebanov discovered that Libgcrypt incorrectly handled mixing functions in the random number generator. An attacker able to obtain 4640 bits from the RNG can trivially predict the next 160 bits of output.

GnuPG vulnerability

Felix Dörre and Vladimir Klebanov discovered that GnuPG incorrectly handled mixing functions in the random number generator. An attacker able to obtain 4640 bits from the RNG can trivially predict the next 160 bits of output.

OpenJDK 7 vulnerabilities

Multiple vulnerabilities were discovered in the OpenJDK JRE related to information disclosure, data integrity, and availability. An attacker could exploit these to cause a denial of service, expose sensitive data over the network, or possibly execute arbitrary code.

A vulnerability was discovered in the OpenJDK JRE related to data integrity. An attacker could exploit this to expose sensitive data over the network or possibly execute arbitrary code.

Multiple vulnerabilities were discovered in the OpenJDK JRE related to availability. An attacker could exploit these to cause a denial of service.

A vulnerability was discovered in the OpenJDK JRE related to information disclosure. An attacker could exploit this to expose sensitive data over the network.

OpenSSH vulnerabilities

Eddie Harari discovered that OpenSSH incorrectly handled password hashing when authenticating non-existing users. A remote attacker could perform a timing attack and enumerate valid users.

Tomas Kuthan, Andres Rojas, and Javier Nieto discovered that OpenSSH did not limit password lengths. A remote attacker could use this issue to cause OpenSSH to consume resources, leading to a denial of service.

GD library vulnerabilities

It was discovered that the GD library incorrectly handled certain malformed TGA images. If a user or automated system were tricked into processing a specially crafted TGA image, an attacker could cause a denial of service.

It was discovered that the GD library incorrectly handled memory when using gdImageScale(). A remote attacker could possibly use this issue to cause a denial of service or possibly execute arbitrary code.

curl vulnerabilities

Bru Rom discovered that curl incorrectly handled client certificates when resuming a TLS session.

It was discovered that curl incorrectly handled client certificates when reusing TLS connections.

Marcelo Echeverria and Fernando Muñoz discovered that curl incorrectly reused a connection struct, contrary to expectations.

Update instructions:

The problem can be corrected by updating your system to the following Qlustar package versions in addition to the package versions mentioned in the upstream reports (follow the Qlustar Update Instructions):
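Qlustar is Debian/Ubuntu-based, so on a standard installation the bundle is pulled in with the usual APT workflow. This is a generic sketch; the official Qlustar Update Instructions should be followed for cluster-specific steps such as rebuilding node images:

```shell
# As root: refresh package lists, then apply all pending updates:
#   apt-get update && apt-get dist-upgrade
# A dry run first shows what would change without touching the system:
apt-get -s dist-upgrade
```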


22 October, 2016 06:00PM by root

[QSA-1022161] Linux kernel vulnerabilities

Qlustar Security Advisory 1022161

October 22, 2016


The system could crash or be made to run programs as an administrator.
This update contains a fix for the critical "Dirty Cow" vulnerability (CVE-2016-5195). You're urged to update your systems as soon as possible.

    Package(s)       : linux-image-ql-generic
    Qlustar releases : 9.1
    Affected versions: All versions prior to this update
    Vulnerability    : privilege escalation/denial of service
    Problem type     : local
    Qlustar-specific : no
    CVE Id(s)        : CVE-2016-6828, CVE-2016-6480, CVE-2016-5829,
        CVE-2016-5696, CVE-2016-5244, CVE-2016-5195

Several vulnerabilities have been discovered in the Linux kernel that may lead to a denial of service or privilege escalation. The Common Vulnerabilities and Exposures project identifies the following problems:


Marco Grassi discovered a use-after-free condition could occur in the TCP retransmit queue handling code in the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code.


Pengfei Wang discovered a race condition in the Adaptec AAC RAID controller driver in the Linux kernel when handling ioctl()s. A local attacker could use this to cause a denial of service (system crash).


It was discovered that a heap-based buffer overflow existed in the USB HID driver in the Linux kernel. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code.


Yue Cao et al. discovered a flaw in the TCP implementation's handling of challenge acks in the Linux kernel. A remote attacker could use this to cause a denial of service (reset connection) or inject content into a TCP stream.


Kangjie Lu discovered an information leak in the Reliable Datagram Sockets (RDS) implementation in the Linux kernel. A local attacker could use this to obtain potentially sensitive information from kernel memory.


It was discovered that a race condition existed in the memory manager of the Linux kernel when handling copy-on-write breakage of private read-only memory mappings. A local attacker could use this to gain administrative privileges.

Update instructions:

The problem can be corrected by updating your system to the following or more recent package versions (follow the Qlustar Update Instructions):

    linux-image-ql-generic                     3.12.66-ql-generic-9.1-79
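Whether the fixed kernel package is present can be checked with dpkg; note that the running kernel only changes after a reboot (a generic Debian-style check, assuming the package name given in the advisory):

```shell
# Installed version of the advisory's kernel package (should be
# 3.12.66-ql-generic-9.1-79 or later after the update):
dpkg-query -W -f='${Version}\n' linux-image-ql-generic 2>/dev/null \
  || echo "package not installed"

# The running kernel changes only after a reboot:
uname -r
```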

22 October, 2016 05:54PM by root


Blankon developers

Kaka Prakasa: Thank you, Mr. Shaman

After about three weeks I finally took my hammerhead to the 'dukun' (shaman, i.e. repairman) to have its power button replaced, which had been giving me a headache.

At first it was not too bothersome, it just liked to suddenly jump into camera mode, but eventually it could no longer boot into the system because of a bootloop.

This morning I described the problem to the 'dukun' and handed over a sum of money so that the hammerhead could return to my hands, and by this afternoon I could not only hold it again but also play my thumbs across it.


Filed under: Uncategorized

22 October, 2016 10:45AM

October 21, 2016


Ubuntu developers

Ubuntu Insights: Managing your physical infrastructure from the top of rack switch


At the last OpenStack Design Summit in Austin, TX we showed you a preview of deploying your physical server and network infrastructure from the top-of-rack switch, which included OpenStack with your choice of SDN solution.

This was made possible by disaggregating the network stack functionality (the “N” in Network Operating System) to run on general-purpose, device-centric operating systems. In the world of the Open Compute Project and whitebox switches, a switch can be more than just a switch. Switches are no longer closed systems where you can only see the command line of the network operating system. Whitebox switches are produced by marrying common server components with high-powered switching ASICs, loading a Linux OS, and running the network operating system (NOS) functionality as an application.


Users can not only choose hardware from multiple providers, but also the Linux distribution and the NOS that best matches their environment. Commands can be issued from the Linux prompt or the NOS prompt, and, most importantly, other applications can be securely installed alongside the NOS. This new switch design opens up the ability to architect secure, distributed data center networks with higher scale and more efficient utilization of the existing resources in each rack.


Since the last ODS we have witnessed a continued trend for whitebox switches to provide more server-like and general-purpose functionality, from increases in CPU, memory, storage, and internal bandwidth between the CPU and ASIC, to power management (BMC) and secure boot options (UEFI+PXE). This month Mellanox announced the availability of their standard Linux kernel driver, included in Ubuntu Core 16 (and classic Ubuntu), for their Open Ethernet Spectrum switch platforms. More recently Facebook announced the acceptance of the Wedge 100 into OCP, which includes Facebook’s OpenBMC and continues their effort to disaggregate the stack.
“We are excited to work with Facebook on next generation switch hardware, adding Facebook’s Wedge OpenBMC power driver to our physical cloud (‘Metal-As-A-Service’) MAAS 2.1, and packaging the Facebook Open Switch System (FBOSS) as a snap.” said David Duffey, Director of Technical Partnerships, Canonical. “Facebook with OCP is leading the way to modern, secure, and flexible datacenter design and management. Canonical’s MAAS and snaps give the datacenter operator free choice of network bootloader, operating system, and network stack.”

At this OpenStack Design Summit we are also going to show you the latest integration with MAAS, how you can use snaps as a universal way to install across Linux distributions (including non-Ubuntu non-Debian based distributions), and deploying WiFi-based solutions, like OpenWrt, as a snap.

Please stop by our booth and let us help you plan your transition to a fully automated, secure modern datacenter.

21 October, 2016 03:25PM


Tanglu developers

colord-kde 0.5.0 released!

The last official stable release was made more than three years ago and was based on Qt/KDE 4 technology. After that, a few fixes landed in what would have become 0.4.0, but as I needed to change my priorities it was never released.

Thanks to Lukáš Tinkl it was ported to KF5; in his port he bumped the version number to 0.5.0, but without a proper release distros had to rely on a git checkout.

Since I started writing Cutelyst I have put a break on other open source projects, mostly just reviewing the patches that come in. Cutelyst allows me to create more of the things I get paid to do, so I am using my free time to pick up paid projects now. A few months ago Cristiano Bergoglio asked me about a KF5 port and about a calibration tool that does not depend on gnome-color-manager, and we made a deal to do this work.

This new release fixes a few bugs, merges pending patches, and modernizes some of the code to make use of Qt5/C++11 features. Oh, and it no longer crashes on Wayland; however, since color correction on Wayland is a task for the compositor, you won't be able to set an ICC profile for a monitor, only for other devices.

For the next release, you will be able to calibrate your monitor without the need for the GNOME tools.

Download: http://download.kde.org/stable/colord-kde/0.5.0/src/colord-kde-0.5.0.tar.xz.mirrorlist

21 October, 2016 12:23PM by dantti


Univention Corporate Server

Release of UCS 4.2 planned for April 2017

After having released a new version of UCS with new features each November for the last few years, we have decided this year to reschedule the release of UCS 4.2 for April 2017.

There are a number of reasons for this move, one of the primary ones being the migration of apps from the App Center to the use of the container technology Docker. This results in increased security during operation and the possibility of running apps with different system requirements on one and the same system. In addition, this will also allow us to render the updating of UCS itself and the individual apps more independent of each other, thus significantly reducing the efforts required on the part of app developers and users in the case of new releases.

Docker was introduced with UCS 4.1, and since then some of the newer apps are utilizing this technology. The task at hand now is to transfer existing apps to containers without excessive extra efforts being required of the app developers. We have already managed to approve the first three apps in this process and hope that others will follow suit soon.

Independently of these events, we are also noticing that the market demands on our product have increased enormously. UCS is being employed ever more frequently in very large environments. For example, more and more municipalities are tapping the potential offered by our solution for the centralized management of the identities and rights of more than 100,000 users in their schools. In the past, such large-scale projects required some elements to be set up manually, which inevitably led to mistakes being made. We have now invested a great deal of energy in making UCS more robust in such situations and ensuring the selective replication of Active Directory domains in these scenarios is also more stable. We have already passed a number of important milestones and intend to keep following this path in the interest of continuous improvement.

Additionally, we are also continuing to develop the App Center. In the future, app developers will not only be able to manage their own applications autonomously via self services, but will also be able to offer their apps to partners and eventually end customers directly for purchase via the platform. The financial transactions can be processed entirely via the App Center.

We believe that it is far more important at this point in time to offer our customers and partners improved scalability of UCS, increased security when operating apps and an all-round more robust product than to adhere stubbornly to a release tradition. The users and partners we consulted on this matter have also confirmed our perception.

As such, we now intend to launch our release candidate for UCS 4.2 at CeBIT 2017. In addition to an updated Debian basis, we also want to develop a completely new approach for the operation concept: By implementing a central portal page, we aim to allow rapid access to all the applications in the domain as well as administration of the different UCS instances. In doing so, we are making it simple for UCS users throughout an organization to access approved applications quickly and easily.

The post Release of UCS 4.2 planned for April 2017 first appeared on Univention.

21 October, 2016 11:50AM by Peter Ganten

UCC 3.0 Now Verified as Citrix Ready

A new major version of Univention Corporate Client (UCC), Version 3.0, was released in mid-August. Due to a problem with Citrix Receiver, however, Citrix was not fully supported in that version. Thanks to an update, it has now proved possible to resolve the issue, and complete Citrix support with UCS is now guaranteed once again.

The last release brought with it a changeover of UCC 3.0's operating system basis to Ubuntu 16.04 Long Term Support (LTS). The Ubuntu substructure allows users of UCC to benefit both from the large, continuously updated software selection and from the broad hardware support offered by Ubuntu's use of the latest Linux kernel versions.

In the version of UCC released in August, Citrix was not fully supported, as there was an error in the Citrix Receiver software provided by Citrix and preinstalled by us.

Citrix Receiver allows access to Citrix terminal server services. The use of UCC gives organizations the possibility of customizing their thin client hardware thanks to the wide scope of hardware support offered by Ubuntu in UCC. At the same time, the software administration via Univention Corporate Server (UCS) is performed independently of the thin client hardware employed.

The error in the Citrix Receiver software prevented USB devices operated on the client from being used in the terminal server session. This error affected the Ubuntu version of the Citrix software and thus also UCC. After we alerted Citrix to the issue, they worked intensively in the following weeks to rectify the situation and have now been able to resolve the issue in their own software, with the result that it no longer occurs in the new Citrix Receiver version 13.4. We have now verified UCC 3.0 as Citrix Ready with this new error-free version, and so it is already preinstalled in the new UCC images delivered this week. As a user of UCC 3.0, you will receive them along with an update to your system.

Citrix Ready is a partner program of Citrix and is aimed at hardware and software partners whose products are compatible with Citrix products. In order to ensure that the solutions are compatible, they need to successfully complete the Citrix Ready Verification Program. This process has already been successfully completed for UCC 3.0 following resolution of the error described above, and so UCC 3.0 is now “Citrix Ready verified”. The compatibility of UCC 3.0 with Citrix terminal server environments is thus guaranteed.

UCC 3.0 is available to download via the Univention App Center and can be installed on both standard PC clients and thin clients alike. The clients are administrated centrally via UCS.


The post UCC 3.0 Now Verified as Citrix Ready first appeared on Univention.

21 October, 2016 11:29AM by Nico Gulden


Blankon developers

i15n: Aku

Translation of BlankOn Authentication

Resources: django.po

21 October, 2016 09:01AM

BlankOn Malang: Open Source Migration Training at the Lamongan Health Office

On 28 and 29 October 2011 the Lamongan Open Source and Linux Community (KOSLA) and the Lamongan District Health Office held a training course on migrating to open source. The operating system chosen, after agreement, was BlankOn Linux. There are several reasons why the Lamongan Health Office and KOSLA chose BlankOn Linux. Ubuntu is certainly good, but because new releases come too quickly (every six months), an installed system soon feels outdated. ...

21 October, 2016 09:01AM

ArcheOS


Torre dei Sicconi - Chapter 6 - Excavation

The archaeological excavation is still one of the most important steps in a research project like "Torre dei Sicconi".
The main goal was to understand the construction phases and to gather information about the composition and ornamentation of the interiors and the everyday life of the castle's inhabitants.
Watch Arc-Team excavating between the walls of the medieval castle ruin in the next chapter of our "Torre dei Sicconi" series.


Torre dei Sicconi - Chapter 6 - Excavation

21 October, 2016 07:33AM by Rupert Gietl (noreply@blogger.com)

October 20, 2016


Ubuntu developers

Kees Cook: CVE-2016-5195

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average
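As a back-of-the-envelope check, the overall average implied by these four buckets can be computed from the counts and per-bucket averages (an illustration only, using the figures listed above):

```shell
# Weighted average flaw lifetime across the four severity buckets.
awk 'BEGIN {
  split("3 44 404 216", n)       # Critical, High, Medium, Low counts
  split("5.2 6.2 5.3 5.5", y)    # per-bucket average lifetimes (years)
  for (i = 1; i <= 4; i++) { total += n[i]; sum += n[i] * y[i] }
  printf "%d CVEs, overall average %.2f years\n", total, sum / total
}'
# → 667 CVEs, overall average 5.42 years
```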

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

20 October, 2016 11:02PM

Jono Bacon: All Things Open Next Week – MCing, Talks, and More

Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.

I was really impressed with All Things Open last year and have subsequently become friends with its principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation: great content, exhibition, and more.

All Things Open 2016 is happening next week and I will be participating in a number of areas:

  • I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
  • Jeremy King, CTO of Walmart Labs and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
  • I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
  • It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!

The event takes place in Raleigh, and if you haven’t registered yet, do so right here!

Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!

The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.

20 October, 2016 08:20PM

Canonical Design Team: Working to make Juju more accessible

In the middle of July the Juju team got together to work towards making Juju more accessible. For now the aim was to reach Level AA compliance, with the intention of reaching AAA in the future.

We started by reading through the W3C accessibility guidelines and distilling each principle into sentences that made sense to us as a team and documenting this into a spreadsheet.

We then created separate columns as to how this would affect the main areas across Juju as a product. Namely static pages on jujucharms.com, the GUI and the inspector element within the GUI.




[Image: GUI live on jujucharms.com]

[Image: Inspector within the GUI]

[Image: Example of static page content from the homepage]

[Image: The Juju team working through the accessibility guidelines]

Tackling this as a team meant that we were all on the same page as to which areas of the Juju GUI were affected by not being AA compliant and how we could work to improve it.

We also discussed the amount of design effort needed for each of the areas that isn’t AA compliant and how long we thought it would take to make improvements.

You can have a look at the spreadsheet we created to help us track the changes needed to make Juju more accessible:

[Image: Spreadsheet created to track the changes and improvements needed]

This workflow has helped us manage and scope the tasks ahead and clear up uncertainties about which tasks need to be done and which requirements must be met to achieve the level of accessibility we are aiming for.



20 October, 2016 04:22PM

Canonical Design Team: Download Ubuntu Yakkety Yak 16.10 wallpaper

Yakkety Yak 16.10 has been released, and you can now download the new wallpaper by clicking here. It’s the latest in the set for the 2016 Ubuntu releases, following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak


Ubuntu 16.10 Yakkety Yak (light version)


20 October, 2016 02:54PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Integrate Cloud Service Google Apps for Work in UCS

Browser-supported Office solutions such as Office 365 or Google Apps for Work (now G Suite) make mobile working much easier and reduce administrative effort, because they are no longer installed on the computer but run in the cloud. Administrators no longer need to maintain license lists or perform regular software updates, and incompatibility issues are a thing of the past.

With the “Google Apps for Work Connector” and the “Office 365 Connector” we have developed two apps that facilitate administrative tasks and make user access safer and easier. Administrators manage all user access centrally via UCS, while users access the cloud services from within their working environment with their usual passwords.

In this short video we show how easily you can download the “Google Apps for Work Connector” from the Univention App Center and integrate it into your UCS environment.

Thanks to the connector, you keep full control of all identities in your IT environment when you use this Google cloud service, because all encrypted user passwords are kept in your environment. Authentication of users against the service takes place in UCS, and no data is transferred to the cloud service itself.

After installing the connector via the Univention App Center, a wizard guides you step by step through the configuration process. Once configuration is complete, you can go to the Univention Management Console to give your users access to this service or to create new users who should be able to use Google Apps for Work (G Suite).

Users who want to use Google Apps for Work (G Suite) then access the service by logging in to the UMC with their normal user credentials; they don’t need to set up their own Google Apps for Work accounts. This also increases the overall security of the network, because the access rights of all employees, including former ones, are managed centrally. Central license management is another advantage, giving the administrator control of all licenses and license costs.

The post Integrate Cloud Service Google Apps for Work in UCS appeared first on Univention.

20 October, 2016 02:04PM by Maren Abatielos

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S09E34 – The Mutant Killer Zombie Manhattan Project Thingy - Ubuntu Podcast

It’s Episode Thirty-Four of Season-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

The three amigos are back with our new amiga!

In this week’s show:

Thing Explainer Competition!

  • Prize: Signed copies of “What If?” and “Thing Explainer” by Randall Munroe (creator of XKCD)
  • Question: Listen to the podcast for instructions
  • Send your entries to competition AT ubuntupodcast DOT org. We’ll pick our favourite and announce the winner on the show.
  • Here are some examples to help get you in the groove:


I write words that are read by a computer. Students who want to learn about something ask their computer for part of a book. Their computer talks to another computer over phone lines, and that computer uses the words I’ve written to send them the book part they want. Sometimes students want new types of book parts that they can use to share their learning with other students. I have to work out the right words for the computer to let them do this, and write them. When I can, I share my words with other people so that their computers can send better book parts to their students.


I talk to people about computer things to help make the stuff they make and the stuff we make better. Also I sometimes write things that the computer gets but I am not great at that. We give away a lot of the things we make which is not like the way some other people share their work. It makes me happy inside that we do this.


I help write a group of books that a computer reads and stores. These books make the computer work much better. When a computer has stored the books I help make you can do things with your computer, like write to people and send what you wrote to other people’s computers. Or you can ask your computer to talk to other computers to learn things, look at moving pictures, listen to music or buy shopping.

The group of books I help write are free for anyone to give to their computer. You are also free to change these books and share those changes with anyone. This way everyone can help make the books even better so your computer can do more for you.


I help people change their computer to something better. I fix things that are broken and make people happy again. I talk to a lot of people about computers all day. I put my heart into every conversation so people feel like they are talking with a human instead of speaking with a pretend human.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

20 October, 2016 02:00PM

Ubuntu Insights: Live kernel patching from Canonical now available for Ubuntu 16.04 LTS


We are delighted to announce the availability of a new service for Ubuntu which any user can enable on their current installations – the Canonical Livepatch Service.

This new live kernel patching service can be used on any Ubuntu 16.04 LTS system (using the generic Linux 4.4 kernel) to minimise unplanned downtime and maintain the highest levels of security.

First a bit of background…

Since the release of the Linux 4.0 kernel about 18 months ago, users have been able to patch and update their running kernels without rebooting. However, until now, no major Linux distribution has offered this feature for free to its users. That changes today with the release of the Canonical Livepatch Service:

  • The Canonical Livepatch Service is available for free to all users, for up to 3 machines.
  • If you want to enable the Canonical Livepatch Service on more than three machines, please purchase an Ubuntu Advantage support package from buy.ubuntu.com or get in touch.

Beyond securing your desktop, server, IoT device or virtual guest, the Canonical Livepatch Service is particularly useful in container environments since every container will share the same kernel.

“Kernel live patching enables runtime correction of critical security issues in your kernel without rebooting. It’s the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads,” says Dustin Kirkland, Ubuntu Product and Strategy for Canonical.

Here’s how to enable the Canonical Livepatch Service today

First, go to the Canonical Livepatch Service portal and retrieve your livepatch token.

Next, install the livepatch snap using the first command below, then enable the service with the second command, using the token you obtained:

sudo snap install canonical-livepatch
sudo canonical-livepatch enable [Token]

That’s it! You’ve just enabled kernel live patching for your Ubuntu system, and you can do that, for free, on two more installations! However, if you want to enable the Canonical Livepatch Service on more than three systems, you’ll need to purchase an Ubuntu Advantage support package, starting at $12 per month.
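If you are provisioning several machines, the two commands above can be wrapped in a small shell function. This is just a sketch, not official Canonical tooling: the function name and the DRY_RUN switch (which prints the commands instead of running them) are our own additions, and the token argument is a placeholder for the token from the portal.

```shell
# Sketch only (not Canonical's tooling): wrap the two setup commands
# so they can be scripted. Set DRY_RUN=1 to print the commands
# instead of executing them.
livepatch_enable() {
  token="${1:?usage: livepatch_enable TOKEN}"
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "$*"          # dry run: show what would be executed
    else
      "$@"               # real run: execute the command
    fi
  }
  run sudo snap install canonical-livepatch
  run sudo canonical-livepatch enable "$token"
}

# Example (dry run; replace mytoken with your real token):
# DRY_RUN=1 livepatch_enable mytoken
```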

Need a bit more help?

Here’s a quick video to guide you through the steps in less than a minute:

For further details on the Canonical Livepatch Service please read Dustin Kirkland’s useful list of FAQs.

20 October, 2016 01:00PM

Daniel Pocock: Choosing smartcards, readers and hardware for the Outreachy project

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there is a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained by comparing the three options:

  • Key stored on disk: software is free/open; key extraction is possible; passphrase compromise vectors include hardware or software keyloggers, phishing, and user error (unsophisticated attackers); no extra hardware required.
  • Smartcard reader without PIN-pad: software is mostly free/open; key extraction is not generally possible; the passphrase is still typed on the computer, so keyloggers remain a vector; small, USB key form-factor.
  • Smartcard reader with PIN-pad: proprietary firmware in the reader; key extraction is not generally possible; passphrase compromise requires exploiting firmware bugs over USB (only sophisticated attackers); largest form-factor.

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
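After writing the copies, it is worth confirming that they are actually identical. A minimal sketch of one way to do that (our own illustration; the helper name and the example paths are hypothetical):

```shell
# Hypothetical helper: verify that every backup copy of the master key
# matches the first (reference) copy, by comparing SHA-256 checksums.
verify_copies() {
  ref=$(sha256sum "$1" | cut -d' ' -f1)   # checksum of the reference copy
  shift
  for copy in "$@"; do
    if [ "$(sha256sum "$copy" | cut -d' ' -f1)" != "$ref" ]; then
      echo "MISMATCH: $copy"
      return 1
    fi
  done
  echo "all copies match"
}

# Example usage (paths are placeholders for mounted SD cards):
# verify_copies /media/sd1/master.key /media/sd2/master.key /media/sd3/master.key
```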

For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

20 October, 2016 07:25AM

October 19, 2016

Dustin Kirkland: Hotfix Your Ubuntu Kernels with the Canonical Livepatch Service!

Introducing the Canonical Livepatch Service

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.

I’ve tried to answer below some questions that you might have. If you have others, you’re welcome to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.

  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token.
  2. Install the canonical-livepatch snap:
     $ sudo snap install canonical-livepatch
  3. Enable the service with your token:
     $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Q: What are the system requirements?

A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).

Q: What about other architectures?

A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?

A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?

A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?

A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?

A: Every livepatch is rigorously tested in Canonical’s in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines. Once a livepatch passes CI/CD and regression tests, it’s rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?

A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I roll back a Canonical Livepatch?

A: Rolling back or removing an already-inserted livepatch module is currently disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?

A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We’ll livepatch other CVEs opportunistically.

Q: Why are Canonical Livepatches provided as a subscription service?

A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

Q: But I don’t want to buy UA support!

A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

Q: But I don’t have an Ubuntu SSO account!

A: An Ubuntu SSO account is free, and provides services similar to those Google, Microsoft, and Apple provide for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

Q: But I don’t want to log in to ubuntu.com!

A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

Q: But I don’t have Internet access to livepatch.canonical.com:443!

A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox. It’s an Internet streaming service for security hotfixes for your kernel. You have access to the stream of bits when you can connect to the service over the Internet. On the flip side, your machines are already thoroughly secured, since they’re so heavily firewalled off from the rest of the world!

Q: Where’s the source code?

A: The source code of livepatch modules can be found here. The source code of the canonical-livepatch client is part of Canonical’s Landscape system management product and is commercial software.

Q: What about Ubuntu Core?

A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

• Oracle Ksplice uses its own technology, which is not in upstream Linux.
• RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
• Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
• Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
• It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year. (I’m happy to be corrected and update this post.)
• SUSE Live Patching is available as an add-on to the SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
• Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

Q: What happens if I run into problems/bugs with Canonical Livepatches?

A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we’ll service it on a best effort basis.

Q: Why does canonical-livepatch client/server have a proprietary license?

A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

Q: How do I build my own livepatches?

A: It’s certainly possible to build your own Linux kernel livepatches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back.

Q: How do I get notifications of which CVEs are livepatched and which are not?

A: You can, at any time, query the status of the canonical-livepatch daemon using ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

Q: Isn’t livepatching just a big ole rootkit?

A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, which is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.
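To check the current setting, the sysctl can simply be read back. A small sketch (our own illustration, not from the post; the function name and the optional path argument, included here for testability, are hypothetical):

```shell
# Report whether kernel module loading is enabled or disabled.
# Reads /proc/sys/kernel/modules_disabled by default (0 = enabled,
# 1 = disabled); an alternative path can be passed in for testing.
module_loading_state() {
  f="${1:-/proc/sys/kernel/modules_disabled}"
  if [ "$(cat "$f")" -eq 1 ]; then
    echo "module loading disabled"
  else
    echo "module loading enabled"
  fi
}
```

Note that the sysctl is one-way: once set to 1, it cannot be cleared without a reboot.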

Keep the uptime!

19 October, 2016 03:50PM by Dustin Kirkland (noreply@blogger.com)

hackergotchi for Cumulus Linux

Cumulus Linux

Introducing the Solutions Marketplace

Traditional enterprise networking is under siege — threatened by choice, by open source, and by open standards. The same revolution that made Linux the standard for server operating systems is now happening to network switching. With over 1.5 million ports in production and 50+ certified hardware platforms across 8 hardware vendors, Cumulus Linux® is the de facto platform for Open Networking, and a perfect example of what next generation data centers should include.

But the age-old claim that “It’s Just Linux, you can do whatever you want!” can complicate solving the specific problems customers have in the enterprise. Based on feedback from community members, we’ve created the Solutions Marketplace on the Cumulus Networks Community Website. The Solutions Marketplace is a repository of community-submitted projects, user space applications, automation scripts, and extensions to Cumulus Linux. It enables collaboration and fosters innovation through a common platform to develop upon openly and freely using Cumulus VX.

The Solutions Marketplace with Cumulus Linux expedites the path to production thanks to the availability of existing community expertise. Best practices are shared, which means you don’t have to start from zero when building out your data center. A disaggregated hardware/software model enables flexible environments and leverages open standards. The result is a highly interoperable model — one that challenges legacy proprietary and single-vendor models.

As an industry first, Cumulus Linux offers the following:

• Cumulus VX — a virtual instance of Cumulus Linux that works seamlessly with open source DevOps tools such as Vagrant
• A rich partner ecosystem to help guide and validate solutions for all types of use cases and business verticals
• A thriving user community (Note: check out our updated Community Portal for events, docs, user forums and more!)

We’ve designed the Solutions Marketplace to cater to those tasked with evaluating, implementing, and maintaining Cumulus Linux. Part of the evaluation process is understanding the possibilities of Cumulus Linux, as well as what customers are actually using as critical pieces of their networking deployments. Having a good plan during the evaluation and proof-of-concept phases helps streamline the deployment and maintenance phases later.

Users can search the Solutions Marketplace by:

• Deployment Life Cycle: Solutions depend on where customers are in their deployment “path to production”. Some solutions help demonstrate or validate an initial proof-of-concept before a formal deployment; others are code “snippets” that accomplish a specific task, configuration, or maintenance activity.
• Networking Technology: As networking solutions become more specialized, many are highly dependent on the networking topologies and underlying networking standards and protocols in use. For example, search for layer 2 (L2) or layer 3 (L3) solutions, or for the OSPF or BGP routing protocols, based on specific needs.
• Automation Type: Ansible, Chef, Puppet, Salt and others are foundations for applying DevOps methodologies across servers as well as switches. The vast majority of solutions listed include automation in order to achieve proper state across the entire network (and not just a single switch).
• Software Integration: Cumulus Linux fits naturally with larger solutions such as Containers, OpenStack, and even Cumulus Routing on a Host, which includes the open source Quagga Linux package.

The quality of the Solutions Marketplace is only as good as the original content. The community is highly encouraged to submit solutions via the submission function, and the Cumulus Community team will work to have them approved and listed as soon as possible. With more community submissions, the Solutions Marketplace will highlight and showcase what can be done with Cumulus Linux as a platform, and connect people with solutions they can search, view, apply, and improve.

The Solutions Marketplace is just one exciting way we hope to transform the networking industry. Visit the Solutions Marketplace to find, implement and contribute projects with Cumulus switches today.

The post Introducing the Solutions Marketplace appeared first on Cumulus Networks Blog.

19 October, 2016 03:48PM by Andrius Benokraitis

hackergotchi for SparkyLinux

SparkyLinux

Budgie Desktop 10.2.8

The latest version of Budgie Desktop, 10.2.8, has just landed in our repository.

If you have an older version already installed, simply perform your system upgrade:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make a fresh installation, do:
sudo apt-get update
sudo apt-get install budgie-desktop

Then reboot/relogin for the new desktop to take effect.

Changes made in the config file:
– downgraded ‘ibus’ from 1.5.13 to 1.5.11 to match the version in Debian’s repos.

19 October, 2016 01:13PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: DataArt to deploy Juju for “Big Software” collaboration

DataArt to deploy Juju for “Big Software” collaboration and faster project delivery

• DataArt signs a Charm Author partnership and extends its Systems Integrator (SI) partnership
• The Juju Charm ecosystem will be used in DataArt NFV telco and enterprise deployments to speed up project delivery
• DataArt will provide expertise and contribute to the development of Juju Charms

NEW YORK and LONDON, U.K., Oct 19th: Canonical and DataArt announced today that DataArt will use Juju for the modelling and management of “Big Software” implementations such as Network Function Virtualization (NFV) solutions for telcos and enterprises.

DataArt, a global network of independent technology consulting and software services firms, also becomes a Charm Author Partner and Systems Integrator. By providing charms to the Juju Charm ecosystem and creating a new NFV telco bundle, DataArt enables clients deploying the new charms to scale telco infrastructure both horizontally and vertically, offering a true on-demand service to their customers.

“Juju is one of the most extensible ways to deploy cloud applications efficiently and seamlessly,” said Michael Lazar, Vice President of Telecom at DataArt. “Working with Canonical over the past few years, designing NFV solutions and developing charms within the ecosystem, proves the model is effective in allowing telecoms and enterprises to deliver new services quicker and with less friction.”

Customer demand has forced telecoms to fundamentally change their business models, becoming more like “over the top” and cloud service providers than traditional telecom operators. NFV has become the go-to solution to address these new realities, and DataArt’s services will enable telcos to rapidly shift legacy platforms and infrastructure and deploy revenue-generating services at scale.

“Cloud, software as a service, open source, big data, scale-out, containers, and microservices: while these terms and technologies represent a new world of opportunity, they also bring complexity that most IT departments are ill-equipped to pursue,” said Stefan Johansson, Director of Global Software Alliances, Canonical. “IT departments need to model and automate infrastructure and software operations – that is what Juju was created to do.”

Juju Charms have become an established ecosystem of best-in-class applications which use shared, open source operations code for common components, so CIOs can focus precious resources on creating software that is unique to their business. Whether companies want to spin up an OpenStack cloud, manage a big data cluster, or explore container orchestration or machine learning, the Juju charm store includes open source and enterprise software solutions that dramatically simplify operations for those classes of big software.

If you are attending OpenStack Barcelona next week, Canonical and DataArt will be demoing solutions at booth B24. Please stop by to learn more.

About DataArt

DataArt is a global network of independent technology consulting and software services firms that create end-to-end solutions, from concept and strategy, to design, implementation and support. We help global clients in the finance, healthcare & life sciences, travel & hospitality, media, telecom, and IoT sectors achieve important business outcomes. Rooted in deep domain knowledge and technology expertise, DataArt designs new products, modernizes enterprise systems and provides managed services delivered by outstanding development teams in the U.S., UK, Central and Eastern Europe, and Latin America. As a recognized leader in business and technology services, DataArt has earned the trust of some of the world’s leading brands and most discerning clients, including Nasdaq, S&P, Coller Capital, BankingUp, Ocado, artnet, Betfair, Skyscanner, Collette Vacations, Booker and Charles River Laboratories.

      19 October, 2016 01:00PM

      Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2016

      A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

      Individual reports

      In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available:

      • Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October).
      • Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 0.75 extra hours for October).
      • Brian May did 12.25 hours.
      • Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining).
      • Emilio Pozuelo Monfort did 1 hour (out of 12.3 hours allocated + 2.95 remaining) and gave back the unused hours.
      • Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October).
      • Hugo Lefeuvre did 12 hours.
      • Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October).
      • Markus Koschany did 12.25 hours.
      • Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours).
      • Raphaël Hertzog did 12.25 hours.
      • Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours).
      • Thorsten Alteholz did 12.25 hours.

      Evolution of the situation

      The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor.

      We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position.

      The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month, but almost all issues are assigned to someone.

      Thanks to our sponsors

      New sponsors are in bold.


      19 October, 2016 10:29AM

      Kees Cook: Security bug lifetime

      In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

      As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

       break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

      This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
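The scan described above can be sketched in a few lines. This is a minimal illustration (not Kees’s actual script, and the helper name is my own); the `Patches_linux:` / `break-fix:` file format is as described in the post:

```python
import re

# Match "break-fix: <introducing-sha> <fixing-sha>" lines from an
# Ubuntu CVE Tracker file; a "-" stands for the start of git history.
BREAK_FIX = re.compile(r"^\s*break-fix:\s+(\S+)\s+(\S+)", re.MULTILINE)

def extract_break_fix(cve_text):
    """Return (introducing_sha, fixing_sha) pairs found in the file text."""
    return BREAK_FIX.findall(cve_text)

sample = """Patches_linux:
 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
"""
print(extract_break_fix(sample))
```

Each SHA can then be mapped to its containing release (e.g. via `git describe --contains`) to get the introduction and fix dates.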

      Then for each SHA, I queried git to find its corresponding release, made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

      CVE lifetimes 2011-2016

      And here it is zoomed in to just Critical and High:

      Critical and High CVE lifetimes 2011-2016

      The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

      • Critical: 2 @ 3.3 years
      • High: 34 @ 6.4 years
      • Medium: 334 @ 5.2 years
      • Low: 186 @ 5.0 years
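As a quick sanity check, the count-weighted mean of the per-severity lifetimes listed above does come out near the five-year mark:

```python
# Counts and average lifetimes (years) from the summary above.
lifetimes = {"Critical": (2, 3.3), "High": (34, 6.4),
             "Medium": (334, 5.2), "Low": (186, 5.0)}
total = sum(count for count, _ in lifetimes.values())
average = sum(count * years for count, years in lifetimes.values()) / total
print(total, round(average, 1))  # → 556 5.2
```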

      This comes out to roughly a 5 year lifetime again, so not much has changed since Jon’s 2010 analysis.

      While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

      (Edit: see my updated graphs that include CVE-2016-5195.)

      © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
      Creative Commons License

      19 October, 2016 04:46AM

      October 18, 2016

      Sebastian Kügler: Plasma’s road ahead

      My Plasma Desktop in 2016
      On Monday, KDE’s Plasma team held its traditional kickoff meeting for the new development cycle. We took this opportunity to also look and plan ahead a bit further into the future. In what areas are we lacking, where do we want or need to improve? Where do we want to take Plasma in the next two years?

      Our general direction points towards professional use-cases. We want Plasma to be a solid tool, a reliable work-horse that gets out of the way, allowing users to get the job done quickly and elegantly. We want it to be faster and of better quality than the competition.

      With these big words out there, let’s have a look at some specifics we talked about.

      Release schedule until 2018

      Our plan is to move from 4 to 3 releases per year in 2017 and 2018, which we think strikes a nice balance between our pace of development and the stabilization periods around each release. Our discussion of the release schedule resulted in the following plan:

      • Plasma 5.9: 31 January 2017
      • Plasma 5.10: May 2017
      • Plasma 5.11: September 2017
      • Plasma 5.12: December 2017
      • Plasma 5.13: April 2018
      • Plasma 5.14 LTS: August 2018

      A cautionary note: we can’t know whether everything will play out exactly like this, as the schedule depends to a degree on external factors, such as Qt’s release schedule. It is what we intend to do, our “best guess”. Still, it aligns with Qt’s plans; they are also looking at an LTS release in summer 2018. So, what will these upcoming releases bring?

      Breeze Look and Feel

      UI and Theming

      The Breeze icon theme will see further completion work and refinements to the details of its existing icons. Icon usage across the whole UI will see more streamlining work as well. We also plan to tweak the Breeze-themed scrollbars a bit, so watch out for changes in that area. A Breeze-themed Firefox theme is planned, as well as more refinement in the widget themes for Qt, GTK, etc. We do not plan any radical changes to the overall look and feel of our Breeze theme, but will further improve and evolve it, in both its light and dark flavors.

      Feature back-log

      The menu button is a first sign of the global menu returning to Plasma
      One thing that many of our users are missing is support for a global menu, similar to how macOS displays application menus outside of the app’s window (for example at the top of the screen). We’re currently working on bringing this feature, which was well supported in Plasma 4, back in Plasma 5, modernized and updated to current standards. This may land as soon as the upcoming 5.9 release, at least for X11.

      Better support for customizing the locale (the system which shows things like times, currencies and numbers in the way the user expects them) is on our radar as well. In this area, we lost some features in the transition to Frameworks 5, or rather to QLocale, away from kdelibs’ custom, but sometimes incompatible, locale handling classes.


      The next releases will bring further improvements to our Wayland session overall. Currently, Plasma’s KWin provides an almost feature-complete Wayland display server, which already works for many use-cases. However, it hasn’t seen the real-world testing it needs, and it is lacking certain features that our users expect from their X11 session, as well as new features we want to offer to support modern hardware better.
      We plan to improve multi-screen rendering on Wayland and the input stack in areas such as relative pointers, pointer confinement, touchpad gestures, wacom tablet support, clipboard management (for example, Klipper). X11 dependencies in KWin will be further reduced with the goal to make it possible to start up KWin entirely without hard X11 dependencies.
      One new feature which we want to offer in our Wayland session is support for scaling the contents of each output individually, which allows users to use multiple displays with vastly varying pixel densities more seamlessly.
      There are also improvements planned around virtual desktops under Wayland, as well as their relation to Plasma’s Activities features. Output configuration as of now is also not complete, and needs more work in the coming months. Some features we plan will also need changes in QtWayland, so there’s some upstream bug-fixing needed, as well.

      One thing we’d like to see, to improve our users’ experience under Wayland, is application developers testing their apps under Wayland. It still happens a bit too often that an application runs into a code path that assumes X11 is the display server protocol. While we can run applications in backwards-compatible XWayland mode, applications only benefit from the better rendering quality under Wayland when actually using the Wayland protocol. (This is mostly handled transparently by Qt, but applications do their own thing, so unless it’s tested, it will contain bugs.)


      Plasma’s Mobile flavor will be further stabilized and its stack cleaned up; we are further reducing the stack’s footprint without losing important functionality. The recently-released Kirigami framework, which allows developers to create convergent applications that work on both mobile and desktop form-factors, will be adjusted to use the new, more light-weight QtQuick Controls 2. This makes Kirigami a more attractive technology to create powerful, yet lean applications that work across a number of mobile and desktop operating systems, such as Plasma Mobile, Android, iOS, and others.

      Discover, Plasma’s software center, integrates online content from the KDE Store; its convergent user interface is provided by the Kirigami framework

      Online Services

      Planned improvements in our integration of online services include dependency handling for assets installed from the store. This will allow us to support installation of meta-themes directly from the KDE Store. We also want to improve our support for online data storage, prioritizing Free services, but also offering support for proprietary services, such as the GDrive support we recently added to Plasma’s feature set.

      Developer Recruitment

      We want to further increase our contributor base. We plan to work towards an easier on-boarding experience, through better documentation, mentoring and communication in general. KDE is recruiting, so if you are looking for a challenging and worthwhile way to work as part of a team, or on your individual project, join our ranks of developers, artists, sysadmins, translators, documentation writers, evangelists, media experts and free culture activists and let us help each other.

      18 October, 2016 12:29PM


      Univention Corporate Server

      Ansible Modules for the Automation of UCS-Specific Tasks

      As a long-term Univention partner, we at Adfinis Sygroup operate UCS environments for many of our customers. We employ Ansible for automation across different Linux distributions, as it standardizes, among other things, the roll-out of UCS.

      Up until now there weren’t any Ansible modules available for UCS-specific tasks. To remedy this, we developed modules based on the standard script interface of Univention Directory Manager for recurring tasks in the maintenance of the directory service with the goal of simplifying the process. These currently include the following:


      These modules are included in the Ansible extra modules as of Ansible version 2.2 and can be used with Ansible just like any other module. If additional Ansible modules are developed in the future (and are not yet included in Ansible itself), it will be possible to add them to individual projects. The following offers a brief explanation of how these additional Ansible modules can be installed and then provides a brief introduction to the modules listed above.


      Additional Ansible modules can be installed either on a per-project basis or directly in the Ansible source code. To install additional modules for an individual project, they need to be copied into a “library” folder below the project’s top directory. This looks something like this:

      $ ls
      |- ansible.cfg
      |- group_vars/
      | |- all/
      |- inventory
      |- library/
      | |- README.md
      | |- ucr.py
      | |- udm_dns_record.py
      | |- udm_dns_zone.py
      | |- udm_group.py
      | |- udm_share.py
      | |- udm_user.py
      |- README.md
      |- site.yml

      If the modules are installed in the Ansible source code, the entire Ansible source code needs to be cloned:

      $ git clone https://github.com/ansible/ansible.git
      $ cd ansible/
      $ git submodule update --init --recursive

      Ansible can then be installed with the help of pip:

      $ virtualenv -p /usr/bin/python2 venv
      $ . venv/bin/activate
      $ pip install -e ansible/

      The additional Ansible modules then simply need to be copied into the ansible/lib/ansible/modules/extras/ folder or one of its subfolders. The Univention modules, for example, belong in the subfolder univention.


      To create a group with the name employee and the LDAP DN cn=employee,cn=groups,ou=company,dc=example,dc=org, you need to run the following Ansible task:

      - udm_group: name=employee

      If only the attribute name is specified, the group is created with the DN cn=<name>,cn=groups,<LDAP Base DN>.
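      If the group should live elsewhere, the LDAP position can be given explicitly. A minimal sketch of creating the group at the exact DN from the example above, assuming the module exposes a position parameter mirroring the UDM command line (treat the parameter name as illustrative):

      ```yaml
      # Sketch: create cn=employee,cn=groups,ou=company,dc=example,dc=org;
      # the position parameter is assumed from the UDM CLI.
      - udm_group:
          name: employee
          position: 'cn=groups,ou=company,dc=example,dc=org'
      ```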


      A user object has a great number of possible attributes, so only a few are shown below as an example. All available attributes are documented directly in the Ansible module.

      For example, to create a user Hans Muster with the user name hans.muster and the password secure_password, you need to run the following task:

      - udm_user: name=hans.muster firstname=Hans lastname=Muster password=secure_password

      It is also possible to specify the complete LDAP path as for udm_group. If no further data is entered, the user will be created with the LDAP DN uid=hans.muster,cn=users,dc=example,dc=com.
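      Removal works the same way in reverse. A sketch, assuming these modules follow Ansible’s usual state=present/absent convention:

      ```yaml
      # Sketch: delete the user again; assumes the common
      # state=present/absent convention applies to udm_user.
      - udm_user:
          name: hans.muster
          state: absent
      ```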


      DNS zones do not have many possible attributes. One special aspect is that the interfaces, NS records, and MX records are defined on the zone. The interfaces are comparable to BIND 9 views: they define where the responses to the corresponding DNS queries come from. The NS and MX records are treated specially in UCS and for this reason are configured via udm_dns_zone and not udm_dns_record.

      For example, the forward zone example.com with the responsible name server ucs.example.com, which responds to DNS queries on the IP address, would be set up as follows:

      - udm_dns_zone: zone=example.com
                      type=forward_zone
                      nameserver=ucs.example.com


      Individual DNS records can be created with udm_dns_record. Possible entries are:

      • host_record (A and AAAA records)
      • alias (CNAME Records)
      • ptr_record
      • srv_record
      • txt_record

      To add the entry www.example.com IN A to the zone example.com, you need to run the following task:

      - udm_dns_record: name=www
                        zone=example.com
                        type=host_record


      The module udm_share can be used to handle Samba and NFS shares. A share object contains a variety of attributes, all of which are documented in the Ansible module.

      To create the share homes on the Ansible target system, you need to run the following task:

      - udm_share: name=homes
      host='{{ ansible_fqdn }}'

      Further links

      Univention Common Code
      Module udm_group
      Module udm_user
      Module udm_dns_zone
      Module udm_dns_record
      Module udm_share

      Der Beitrag Ansible Modules for the Automation of UCS-Specific Tasks erschien zuerst auf Univention.

      18 October, 2016 11:48AM by Maren Abatielos


      Ubuntu developers

      Paul White: More on bug reports, September 1973 and a jumping mouse

      Incomplete bug reports

      Since writing an earlier post on the subject I've continued to monitor new bug reports. I have been very disappointed to see that so many have to be marked as "incomplete", as they give little information about the problem and don't really give anyone an incentive to work on it and help fix it. So many are very vague about the problem being reported, while some are just an indication that a problem exists. Reports which just say something along the lines of:
      • help
      • bug
      • i don't know
      • dont remember
      don't do a lot to identify the problem being reported. Maybe some information can be gleaned from the attached log files, but please, bug reporters: tell us what the problem is, as it will greatly increase the chances of your issue being fixed, investigated or (re)assigned to the correct package. Reporters need to reply when asked for further information about the bug or the version of Ubuntu being used, even if only to say that, for whatever reason, the problem no longer affects them. And I say to all novice reporters: "Please don't keep the Ubuntu version or flavour that you are using a secret!"

      Bug report or support request?

      Some reports are probably submitted as a desperate measure when help is needed and no-one is around to help. Over the last couple of months I've seen dozens of bug reports being closed as "expired" because there was no response to a request for information within 59 days of the request being made. Obviously Ubuntu users are having problems, but are their issues being resolved? Are those users moving back to Windows, or to another Linux distribution, because they aren't getting the help they need and don't know how to ask for it?

      Many of the issues that I'm referring to should have been posted initially as support requests at the Ubuntu Forums, Ask Ubuntu or Launchpad Answers, and then filed as bug reports once sufficient help and guidance had been obtained and the presence of a bug confirmed.

      A bug with the bug reporting tool ubuntu-bug?

      Sometimes trying to establish the correct package against which to file a bug is a difficult task, especially if you are not conversant with the inner workings of Ubuntu. Launchpad can often guide the reporter, but it seems many reports are being filed against the xorg package in error. Bug #1631748 (ubuntu-bug selects wrong category) seems to confirm this widespread problem. If a bug is reported against the wrong package and no description of the issue is given, there is no chance of the issue being investigated.

      Further reading

      The following links will give those who are new to bug reporting some help in filing a good bug report that can be worked on by a bug triager or developer.

      How to Report Bugs
      How to Report Bugs Effectively
      Improving Ubuntu: A Beginners Guide to Filing Bug Reports
      How do I report a bug?

      To the future and some events of September 1973

      In just a couple of weeks I'll no longer have to worry about getting up early for work, fighting my way through the local traffic and aiming for an 8 o'clock start, which is something that I seldom manage to achieve these days. No doubt I'll be able to devote much more time to working on Ubuntu, and who knows, I may well revisit some of the teams and projects that I've left over the past couple of years.

      Looking at Mark Shuttleworth's Wikipedia page it seems that he was born just a week or two after I started my working life in September 1973. A lot has changed since then. We didn't have personal computers or mobile phones and as far as I can remember we managed perfectly well without them. Back then I had very different interests, some of which I've recently returned to but obviously I had no idea what was in store for me around 40 years later.

      Thanks for everything so far Mark!

      zz = Zesty Zapus, a mouse that jumped

      So we now have a code-name for the next Ubuntu release, which Mark has confirmed will be Zesty Zapus. Apparently a zapus is a North American mouse that jumps. So, now that we've reached the end of the alphabet, what next?

      Prediction: There will be much discussion about the code-name for the 17.10 release, and its announcement will probably be the most anticipated yet.

      18 October, 2016 05:16AM by Paul White (noreply@blogger.com)

      Stephen Michael Kellat: From Yakkety To Zesty

      I've seen Ms. Belkin go ahead and wrap up the Y (Yakkety) season while giving a look ahead to the Z (Zesty) season. I'm afraid I cannot give as much of a report. My participation in the Ubuntu realm has been a bit held back by things beyond my control.

      During the Y season I was stuck at work. I have a hefty commute which, combined with my working hours, pretty much wrecks my day. My work is considered seasonal, which for a federal civil servant means it is subject to workload constraints. Apparently we did not have a proper handle on workload this year. The estimate was that our work would be done by a certain date and we would go on "seasonal release", or furlough, until recalled to duty. We missed that date by quite a long shot. After quite a bit of attrition, angry resignations, people checking into therapy, people developing cardiac issues, and worse, my unit received "seasonal release" only last Friday. Recall could be as soon as two weeks away.

      The main action I really wanted to take during Y was to get backports in of dianara and pumpa if they released new versions. I was a little late in doing so, but I just filed the backport bug for dianara tonight. I kept saying I would wait for furlough to do the testing, but furlough took long enough that a couple of versions of dianara went by in the interim. Folks looking at pump.io have to remember that even a server's own website is itself a client, and new features have to be implemented in clients for the main server engine to pass around. The website isn't the point of pump.io; the use of a client of your choice is, and a list is being maintained.

      I don't really know what the plan is for Z for me. Right now many eyes around the world are focused on the election for President of the United States. People regard that office as the so-called leader of the free world. That person also happens to be head of the civil service in the United States. Neither of the major party candidates have nice plans for my employing agency. Both scare me. A good chunk of the attrition and angry resignations at work has been people fleeing for the safety of the private sector in light of what is expected from either major party candidate.

      Backporting will continue subject to resource restrictions. I remain a student in the Emergency Management and Planning Administration program at Lakeland Community College with graduation expected in May 2017 subject to course availability. Right now I'm working on learning about the Incident Command System and how it is applied in addition to Continuity of Operations.

      Graphic From FEMA's Emergency Management Institute IS-800 Class of the Incident Command System

      Time will tell where things go. Clues are not readily available to me. I wish they were, perhaps...

      18 October, 2016 02:43AM

      The Fridge: Ubuntu Weekly Newsletter Issue 484

      Welcome to the Ubuntu Weekly Newsletter. This is issue #484 for the weeks October 3 – 16, 2016, and the full version is available here.

      In this issue we cover:

      This issue of the Ubuntu Weekly Newsletter is brought to you by:

      • Elizabeth K. Joseph
      • Chris Guiver
      • Simon Quigley
      • Mary Frances Hull
      • Chris Sirrs
      • And many others

      If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

      Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

      18 October, 2016 12:47AM

      Sebastian Kügler: to not breathe

      I’ve always loved diving down while snorkeling or swimming, and it’s been intriguing to me how long I can hold my breath, how far and deep I could go just like that. (The answer so far, 14m.)

      Last week, I met with Jeanine Grasmeijer. Jeanine is one of the world’s top freedivers, two times world record holder, 11 times Dutch national record holder. She can hold her breath for longer than 7 minutes. Just last month she dove down to -92m without fins. (For the mathematically challenged, that’s 6.6 times 14m.)

      Diving with Jeanine Grasmeijer
      Jeanine showed me how to not breathe properly.
      We started with relaxation and breathing exercises on dry land: deep relaxation, breathing using the proper and most effective technique, then holding my breath and recovering.
      In the water, this actually got a bit easier. Water has better pressure characteristics for the lungs, and the mammalian diving reflex helps shut off the airways, leading to an even more efficient breath hold. A cycle starts with breathing in the water through the snorkel for a few minutes, focusing on a calm, regular, relaxed breathing rhythm. After a few cycles of static apnea (breath holding under water, no movement), I passed the three-minute mark at 3:10.
      We then moved on to dynamic apnea (swimming a horizontal distance under water on one breath). Jeanine did a careful weight check with me, making sure my position would need as few corrective movements as possible while swimming. With a reasonable trim achieved, I swam some 50m, though we focused mainly not on distance but on the technique of finning, arm usage and horizontal trim.
      The final exercise in the pool was about diving safety. We went over the procedure to surface an unconscious diver, and get her back to her senses.

      Freediving, as it turns out, is a way to put the world around on pause for a moment. You exist in the here and now, as if the past and future do not exist. The mind is in a completely calm state, while your body floats in a world of weightless balance. As much as diving is a physical activity, it can be a way to enter a state of Zen in the under water world.

      Jeanine has not only been a kind, patient and reassuring mentor to me, but opened the door to a world which has always fascinated and intrigued me. A huge, warm thanks for so much inspiration of this deep passion!

      Harbor porpoise Michael: the cutest whale in the world!

      In other news on the “mammals that can hold their breath really well” topic: I’ve adopted a cute tiny orphaned whale!

      18 October, 2016 12:10AM

      October 17, 2016

      Elizabeth K. Joseph: Seeking a new role

      Today I was notified that I am being laid off from the upstream OpenStack Infrastructure job I have through HPE. It’s a workforce reduction and our whole team at HPE was hit. I love this job. I work with a great team on the OpenStack Infrastructure team. HPE has treated me very well, supporting travel to conferences I’m speaking at, helping to promote my books (Common OpenStack Deployments and The Official Ubuntu Book, 9th and 8th editions) and other work. I spent almost four years there and I’m grateful for what they did for my career.

      But now I have to move on.

      I’ve worked as a Linux Systems Administrator for the past decade and I’d love to continue doing that. I live in San Francisco, so there are a lot of ops positions around here that I can look at, but I really want to find a place where my expertise with open source, writing and public speaking will be used and appreciated. I’d also be open to a more Community or Developer Evangelist role that leverages my systems and cloud background.

      Whatever I end up doing next, the tl;dr (too long; didn’t read) version of what I need in my next role is as follows:

      • Most of my job to be focused on open source
      • Support for travel to the conferences I speak at (6-12 per year)
      • Work from home
      • Competitive pay

      My resume is over here: http://elizabethkjoseph.com

      Now the long version, and a quick note about what I do today.

      OpenStack project Infrastructure Team

      I’ve spent nearly four years working full time on the OpenStack project Infrastructure Team. We run all the services that developers on the OpenStack project interact with on a daily basis, from our massive Continuous Integration system to translations and the Etherpads. I love it there. I also just wrote a book about OpenStack.

      HPE has paid me to do this upstream OpenStack project Infrastructure work full time, but we have team members from various companies. I’d love to find a company in the OpenStack ecosystem willing to pay for me to continue this and support me like HPE did. All the companies who use and contribute to OpenStack rely upon the infrastructure our team provides, and as a root/core member of this team I have an important role to play. It would be a shame for me to have to leave.

      However, I am willing to move on from this team and this work for something new. During my career thus far I’ve spent time working on both the Ubuntu and Debian projects, so I have experience with other large open source projects, and with reducing my involvement in them as my life dictates.

      Most of my job to be focused on open source

      This is extremely important to me. I’ve spent the past 15 years working intensively in open source communities, from Linux Users Groups to small and large open source projects. Today I work on a team where everything we do is open source. All system configs, Puppet modules, everything but the obvious private data that needs to be private for the integrity of the infrastructure (SSH keys, SSL certificates, passwords, etc). While I’d love a role where this is also the case, I realize how unrealistic it is for a company to have such an open infrastructure.

      An alternative would be a position where I’m one of the ops people who understands the tooling (probably from gaining an understanding of it internally) and then going on to help manage the projects that have been open sourced by the team. I’d make sure best practices are followed for the open sourcing of things, that projects are paid attention to and contributors outside the organization are well-supported. I’d also go to conferences to present on this work, write about it on a blog somewhere (company blog? opensource.com?) and encourage and help other team members to do the same.

      Support for travel to conferences where I speak (6-12 per year)

      I speak a lot and I’m good at it. I’ve given keynotes at conferences in Europe, South America and right here in the US. Any company I go to work for will need to support me in this by giving me the time to prepare and give talks, and by compensating me for travel for conferences where I’m speaking.

      Work from home

      I’ve been doing this for the past ten years and I’d really struggle to go back into an office. Since operations, open source and travel don’t require me to be in an office, I’d prefer to stick with the flexibility and time working from home gives me.

      For the right job I may be willing to consider going into an office or visiting client/customer sites (SF Bay Area is GREAT for this!) once a week, or some kind of arrangement where I travel to a home office for a week here and there. I can’t relocate for a position at this time.

      Competitive pay

      It should go without saying, but I do live in one of the most expensive places in the world and need to be compensated accordingly. I love my work, I love open source, but I have bills to pay and I’m not willing to compromise on this at this point in my life.

      Contact me

      If you think your organization would be interested in someone like me and can help me meet my requirements, please reach out via email at lyz@princessleia.com

      I’m pretty sad today about the passing of what’s been such a great journey for me at HPE and in the OpenStack community, but I’m eager to learn more about the doors this change is opening up for me.

      17 October, 2016 11:23PM

      hackergotchi for ARMBIAN


      hackergotchi for Ubuntu developers

      Ubuntu developers

      Ubuntu Insights: Canonical and ARM collaborate on OpenStack


      Canonical and ARM collaborate to offer commercial availability of Ubuntu OpenStack and Ceph for 64-bit ARM-based servers

      • Availability of Ubuntu OpenStack and Ceph support included with Canonical’s Ubuntu Advantage enterprise-grade offering
      • Partnership extends Canonical’s support for ARM server which dates back to Ubuntu 12.04 LTS

      CAMBRIDGE and LONDON, U.K., Oct 17: Canonical, the company behind Ubuntu, the leading platform and operating system for container, cloud and scale-out computing, and ARM, the industry’s leading semiconductor IP company, announced today that Ubuntu OpenStack and Ceph are now commercially available and supported on processors and servers based on the 64-bit ARMv8-A architecture.

      Corporations deploying OpenStack and Ceph are actively searching for more choice and innovation in the data center. This expanded partnership will make Ubuntu OpenStack and Ceph Storage solutions, including Ubuntu Advantage support, available to address growing demand in enterprise and telco markets for ARMv8-A based enterprise solutions.

      The focus will be on direct customer use cases, driving scale out computing solutions in the server and cloud ecosystem. ARM and Canonical will actively work with Ubuntu certified System on Chip (SoC) partners, original design manufacturers (ODMs) and original equipment manufacturers (OEMs) to ensure production grade server systems, storage platforms, and networking solutions are available in the market with Ubuntu Advantage support.

      “With the growth in scale-out computing and storage, we wanted to ensure we had the best OpenStack and Ceph storage solutions and enterprise grade support available,” said Lakshmi Mandyam, senior marketing director of server programs, ARM. “The commercial availability of Ubuntu OpenStack and Ceph is another milestone that demonstrates open source software on ARM is ready for deployment now. The ARM and Canonical ecosystems can now simply write once and deploy anywhere on ARM-based servers.”

      The ARM ecosystem has invested heavily in maturing the 64-bit ARMv8-A architecture, and server-grade chips are now available from multiple sources. Canonical has built a solid ecosystem program which ensures that enterprises can confidently deploy ARM-based systems from a variety of vendors, all covered by Canonical’s professional services and support.

      “We have seen our Telecom and Enterprise customers start to radically depart from traditional server design to innovative platform architectures for scale-out compute and storage. In partnering with ARM we bring more innovation and platform choice to the marketplace,” said Mark Baker, Product Manager, OpenStack, Canonical. “The next generation of scale-out applications are causing our customers to completely revisit compute and storage architectures with a focus on scale and automation. The ARM and Canonical ecosystems offer more choice in data center solutions with a range of products that can be optimized to run standard server software and the next generation of applications.”

      Independent analysis of the June OpenStack user survey again showed that more than 55 percent of the world’s largest production OpenStack deployments run Ubuntu OpenStack, more than all other vendor solutions combined. From AWS to OpenStack, Ubuntu has become the most popular operating system for the cloud, with over two million Ubuntu Linux instances launched in the cloud in 2015.

      Ubuntu OpenStack underpins some of the most exciting cloud projects happening today in areas such as telco (NFV), Retail, Finance, Media with large cloud customers such as Deutsche Telekom, Tele2, Sky, AT&T, Cisco, Bloomberg and Time Warner Cable choosing Ubuntu.

      If you are attending OpenStack Barcelona later this month, please stop by the ARM booth (B29) or the Canonical booth (B24) to learn more and see a demo.

      Supporting quotes

      Applied Micro

      “As part of our long standing relationship, AppliedMicro has worked jointly with Canonical and ARM to implement and productize OpenStack on our X-Gene family of 64-bit ARMv8-A SoCs,” said Kumar Sankaran, associate vice president, software and platform engineering at AppliedMicro. “OpenStack and Ceph provide the right framework for rapid deployment and customization of workloads in a variety of applications. The availability of a commercially supported OpenStack solution with Ubuntu goes a long way in providing a production and stable solution to end users and we are excited to be a part of this key development.”


      Cavium

      “Today’s announcement is a continuation of the collaboration between Canonical and Cavium on bringing innovative technology and solutions to the ARMv8-A server market in key areas such as dual socket cache coherency, application optimized accelerator support and fully integrated I/O,” said Larry Wikelius, Vice President Software Ecosystem and Solutions Group at Cavium. “With Cavium’s ThunderX® leading the way as the only ARMv8-A certified SoC for Ubuntu 16.04 LTS, Canonical is aggressively enabling our customers and partners to deploy production systems at scale with the assurance of the Ubuntu Advantage support model.”


      Qualcomm

      “ARM, Canonical and Qualcomm have been collaborating closely in upstream enablement of various open source projects for ARM servers,” said Ram Peddibhotla, senior director, product management, Qualcomm Datacenter Technologies. “OpenStack and Ceph are critical ingredients in enterprise cloud deployments, and commercial availability and support from Canonical underscore the continued momentum of enterprise-class, ARM-based solutions for the cloud.”

      Penguin Computing

      “Penguin’s Valkre family of systems, built on the latest ARMv8-A based silicon in conventional and Open Compute Project (OCP) form factors, is now available with Canonical’s Ubuntu and OpenStack software, delivered and supported worldwide by Penguin and Canonical,” said Jussi Kukkonen, Vice President of Advanced Solutions at Penguin. “ARM is our valued partner as we pursue our mission of enabling and delivering the efficient, virtualized, ‘Software Defined’ data center of the future.”

      About ARM

      ARM technology is at the heart of a computing and connectivity revolution that is transforming the way people live and businesses operate. From the unmissable to the invisible, our advanced, energy-efficient processor designs are enabling the intelligence in 86 billion silicon chips and securely powering products from the sensor to the smartphone to the supercomputer. With more than 1,000 technology partners including the world’s most famous business and consumer brands, we are driving ARM innovation into all areas where compute is happening: inside the chip, the network and the cloud.

      All information is provided “as is” and without warranty or representation. This document may be shared freely, attributed and unmodified. ARM is a trademark or registered trademark of ARM Limited (or its subsidiaries). All other brands or product names are the property of their respective holders. © 1995-2016 ARM Group.

      17 October, 2016 01:00PM

      Mark Shuttleworth: The mouse that jumped

      The naming of Ubuntu releases is, of course, purely metaphorical. We are a diverse community of communities – we are an assembly of people interested in widely different things (desktops, devices, clouds and servers) from widely different backgrounds (hello, world) and with widely different skills (from docs to design to development, and those are just the d’s).

      As we come to the end of the alphabet, I want to thank everyone who makes this fun. Your passion and focus and intellect, and occasionally your sharp differences, all make it a privilege to be part of this body incorporate.

      Right now, Ubuntu is moving even faster to the centre of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. From the launch of our kubernetes charms which make it very easy to operate k8s everywhere, to the fun people seem to be having with snaps at snapcraft.io for shipping bits from cloud to top of rack to distant devices, we love the pace of change and we change the face of love.

      We are a tiny band in a market of giants, but our focus on delivering free software freely, together with enterprise support, services and solutions, appears to be opening doors, and minds, everywhere. So, in honour of the valiantly tiny mouse, leaping long-tailed over the obstacles of life, our next release, Ubuntu 17.04, is hereby code named the ‘Zesty Zapus’.


      17 October, 2016 12:23PM

      Stéphane Graber: LXD is now available in the Ubuntu Snap Store

      LXD logo

      What are snaps?

      Snaps were introduced a little while back as a cross-distro package format allowing upstreams to easily generate and distribute packages of their application in a very consistent way, with support for transactional upgrade and rollback as well as confinement through AppArmor and Seccomp profiles.

      It’s a packaging format that’s designed to be upstream friendly. Snaps effectively shift the packaging and maintenance burden from the Linux distribution to the upstream, making the upstream responsible for updating their packages and taking action when a security issue affects any of the code in their package.

      The upside being that upstream is now in complete control of what’s in the package and can distribute a build of the software that matches their test environment and do so within minutes of the upstream release.

      Why distribute LXD as a snap?

      We’ve always cared about making LXD available to everyone. It’s available for a number of Linux distributions already, with a few more actively working on packaging it.

      For Ubuntu, we have it in the archive itself, push frequent stable updates, maintain official backports in the archive and also maintain a number of PPAs to make our releases available to all Ubuntu users.

      Doing all that is a lot of work and it makes tracking down bugs that much harder as we have to care about a whole lot of different setups and combination of package versions.

      Over the next few months, we hope to move away from PPAs and some of our backports in favor of using our snap package. This will allow a much shorter turnaround time for new releases and give us more control on the runtime environment of LXD, making our lives easier when dealing with bugs.

      How to get the LXD snap?

      These instructions have only been tested on a fully up-to-date Ubuntu 16.04 LTS or Ubuntu 16.10 with snapd installed. Please use a system that doesn’t already have LXD containers, as the LXD snap will not be able to take over existing containers.

      LXD snap example

      1. Make sure you don’t have a packaged version of LXD installed on your system.
        sudo apt remove --purge lxd lxd-client
      2. Create the “lxd” group and add yourself to it.
        sudo groupadd --system lxd
        sudo usermod -G lxd -a <username>
      3. Install LXD itself
        sudo snap install lxd

      This will get the current version of LXD from the “stable” channel.
      If your user wasn’t already part of the “lxd” group, you may now need to run:

      newgrp lxd

      Once installed, you can set it up and spawn your first container with:

      1. Configure the LXD daemon
        sudo lxd init
      2. Launch your first container
        lxd.lxc launch ubuntu:16.04 xenial

      Channels and updates

      The Ubuntu Snap store offers 4 different release “channels” for snaps:

      • stable
      • candidate
      • beta
      • edge

      For LXD, we currently use “stable”, “candidate” and “edge”.

      • “stable” contains the latest stable release of LXD.
      • “candidate” is a testing area for “stable”.
        We’ll push new releases there a couple of days before releasing to “stable”.
      • “edge” is the current state of our development tree.
        This channel is entirely automated with uploads triggered after the upstream CI confirms that the development tree looks good.

      You can switch between channels by using the “snap refresh” command:

      snap refresh lxd --edge

      This will cause your system to install the current version of LXD from the “edge” channel.

      Be careful when hopping channels though as LXD may break when moving back to an earlier version (going from edge to stable), especially when database schema changes occurred in between.
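      The failure mode can be made concrete with a toy model (this is NOT LXD’s actual code; the schema numbers and the `open_database` function are invented for illustration): a daemon can migrate an older database up to its own schema, but it has no way to interpret a schema newer than the one it was built for.

```python
# Toy illustration (not LXD's actual code) of why moving from "edge"
# back to "stable" can break once the database schema has been upgraded.

def open_database(binary_schema: int, db_schema: int) -> int:
    """Return the schema the daemon runs with, or refuse to start."""
    if db_schema > binary_schema:
        # The database was upgraded by a newer binary; we can't read it.
        raise RuntimeError(
            "database schema %d is newer than supported schema %d"
            % (db_schema, binary_schema))
    # An older database would be migrated up to binary_schema here.
    return binary_schema

print(open_database(3, 2))   # newer binary opening an older database: fine
try:
    open_database(2, 3)      # edge upgraded the database; stable refuses it
except RuntimeError as err:
    print("refused:", err)
```

      The point is only that such schema migrations are typically one-way, which is why hopping back to an earlier channel is risky.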

      Snaps automatically update, either on schedule (typically once a day) or through push notifications from the store. On top of that, you can force an update by running “snap refresh lxd”.

      Known limitations

      Those are all pretty major usability issues and will likely be showstoppers for a lot of people.
      We’re actively working with the Snappy team to get those issues addressed as soon as possible and will keep maintaining all our existing packages until such time as those are resolved.

      Extra information

      More information on snap packages can be found at: http://snapcraft.io
      Bug reports for the LXD snap: https://github.com/lxc/lxd-pkg-ubuntu/issues

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      PS: I have not forgotten about the remaining two posts in the LXD 2.0 series, the next post has been on hold for a while due to some issues with OpenStack/devstack.

      17 October, 2016 05:55AM

      hackergotchi for ev3dev


      Announcing ev3dev-jessie-2016-10-17 Release

      Hey look, we have a new release! It has been quite a while since our last release. We made some breaking changes to the motors back in April and it took quite a while to get some of the libraries back in a working state. In the meantime we changed our image building infrastructure to be based on Docker, which makes the build process much better for the future, but had the side effect of introducing some new (and old) problems that had to be tracked down and fixed.

      But now we feel like we have repaired most of the regressions and other problems caused by the changes and have something working well enough to call it an official release. Please let us know if you run into any problems.


      If you are upgrading from the 2015-12-30 image and you never upgraded the kernel, you will most likely find that your motors no longer work because of changes in the motor drivers. Follow the link to read about the changes.

      Etcher and Bmaps

      In our Getting Started guide, we now recommend using Etcher for flashing the disk image to your SD card. Even though it is still in beta, we have found that it works quite well.

      Starting with this release, the image downloads now include a bmap. Etcher uses this to write only the parts of the image file that contain data and to skip the parts that are unused space. This makes writing the image go two to three times faster than with our previous release!
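      The idea behind a block map can be sketched in a few lines of Python (a toy model, not Etcher’s actual implementation; the 4-byte block size and the mapped ranges are invented for illustration):

```python
import io

BLOCK_SIZE = 4  # tiny blocks, for illustration only

def flash_with_bmap(image: bytes, mapped_ranges, dest) -> int:
    """Copy only the block ranges listed in the bmap; skip unused space."""
    written = 0
    for first, last in mapped_ranges:          # inclusive block ranges
        offset = first * BLOCK_SIZE
        length = (last - first + 1) * BLOCK_SIZE
        dest.seek(offset)
        dest.write(image[offset:offset + length])
        written += length
    return written

# Blocks 0 and 3 hold data; blocks 1-2 are unused filesystem space.
image = b"BOOT" + b"\x00" * 8 + b"ROOT"
dest = io.BytesIO(b"\x00" * len(image))
written = flash_with_bmap(image, [(0, 0), (3, 3)], dest)
print(written, "of", len(image), "bytes written")  # 8 of 16 bytes written
```

      Skipping the unmapped blocks on a real multi-gigabyte image, most of which is empty filesystem space, is where the speedup comes from.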

      Ev3dev Tools

      There are a couple of new programs of interest. Try running ev3dev-sysinfo in a terminal. Be sure to include this information when you report an issue on GitHub.

      robot@ev3dev:~$ ev3dev-sysinfo
      Image file:         ev3dev-jessie-ev3-generic-2016-10-17
      Kernel version:     4.4.24-16-ev3dev-ev3
      Board:              LEGO MINDSTORMS EV3 Programmable Brick
      Revision:           0006
      Brickman:           0.8.0
      ev3devKit:          0.4.2

      Also, sudo ev3dev-config will help you with some basic administrative tasks.

      Arrow keys to navigate / <ENTER> to select / <ESC> to exit menu
      ┌─────────────┤ ev3dev Software Configuration Tool (ev3dev-config) ├─────────────┐
      │                                                                                │
      │      1 Change User Password   Change password for the default user (robot)     │
      │      2 Hardware Configuration Configure EV3-related drivers                    │
      │      3 Update                 Update all packages                              │
      │      4 Advanced Options       Configure advanced settings                      │
      │      5 System Info            Get information on your ev3dev system            │
      │                                                                                │
      │                                                                                │
      │                                                                                │
      │                                                                                │
      │                                                                                │
      │                                                                                │
      │                                                                                │
      │                     <Select>                     <Finish>                      │
      │                                                                                │


      We have a fancy new download page. Check it out!

      17 October, 2016 12:00AM by @dlech

      October 16, 2016

      hackergotchi for SparkyLinux


      Linux kernel 4.8.2


      Linux kernel 4.8.2 landed in Sparky “unstable” repository.

      The Linux kernel is available in the Sparky “unstable” repository, so enable it to upgrade or make a fresh installation:

      Follow the Wiki page: http://sparkylinux.org/wiki/doku.php/linux_kernel to install the latest Sparky Linux kernel.

      Then reboot your machine for the changes to take effect.

      To quickly remove an older version of the Linux kernel, simply run the APTus -> Remove -> Uninstall Old Kernel script.


      16 October, 2016 10:16PM by pavroo

      hackergotchi for Ubuntu developers

      Ubuntu developers

      David Mohammed: budgie-remix 16.10 released

      I’m very pleased to announce the release of budgie-remix based on the solid 16.10 Ubuntu foundations. For the uninitiated, budgie-remix utilises the wonderful budgie-desktop graphical interface from the Solus team. This is our first release following the standard Ubuntu release … Continue reading

      16 October, 2016 07:31PM

      Stuart Langridge: My nominations for the Silicon Canal Tech Awards 2016

      The Silicon Canal tech awards are coming up here in Birmingham, so I thought I’d write down who I’ve nominated and why! Along with a few categories where I had difficulty deciding, in which an honourable mention or two may be awarded, although such things do not get submitted to the actual award ceremony :-)

      Best Tech Start-Up

      ImpactHub Birmingham

      As ImpactHub say, “We want to empower a collective movement to bring about change in our city, embracing a diverse range of people and organisations with a whole host of experiences and skills.” ImpactHub is a place enabling the tech scene in Birmingham, which is the most important part of it all; that’s what makes Birmingham great and more than just some half-baked clone of London or San Francisco. Bringing tech companies together with the rest of the city also hugely increases the number of connections made and opportunities created right here in Birmingham itself, and helps tech entrepreneurs meet other communities and unify everyone’s goals.

      Most Influential Female in Technology

      Jessica Rose

      Jess tirelessly advocates technology and Birmingham, both inside and outside the city. She’s great at connecting dots, showing people who they can work with to get things done, and advising on how best to grow a community or a company into areas you might not have otherwise pursued. And she’s helpful and engaging and good to work with, and knows basically everyone. That’s influence, and she’s using it to better the Brum tech scene as a whole, and that deserves reward.

      Runner up: Immy Kaur for setting up ImpactHub :-)

      Small Tech Company of the Year

      Technical Team Solutions

      TTS are heavily invested in the tech life of Birmingham itself. They sponsor events, they’ve partnered with Silicon Canal as exclusive recruitment agents, and most importantly they’re behind Fusion, a regular and vibrant quarterly tech conference drawn from the city and supporting both local tech and local street food vendors. This isn’t like some other conferences which basically are in Birmingham by coincidence; Fusion is intimately involved with the Brum tech scene, as are TTS themselves, and that should be massively encouraged.

      Runner up: Jump 24, web design and development studio getting good stuff done and run by a very smart and very short Welshman1 :-)

      Large Tech Company of the Year (revenue over £10 million)


      Talis are strong supporters of the Birmingham tech scene, a successful large scaleup here in the city, and willing to work openly with others in pursuit of those goals. They regularly sponsor tech events with money or by providing space to host meetups, hold hack days and write about them afterwards, donate time and money to helping others in the city including events for entrepreneurs as well as developers, and run their own events (such as Codelicious) to add more to the growing vibrancy of Brum. It’s great to see a company of this size be cognisant of the city and their life within it, and this certainly deserves to be recognised.

      Most Influential Male in Technology

      Roy Meredith

      A jolly good way to make connections in the city is through Roy, who is connected to all sorts of people via being responsible for the tech sectors in Marketing Birmingham. I’m not sure the government marketing agency are always perfect, but I am sure that Roy is a person to know. He’s an engaging public speaker, he’s got a background in industry (with a list of AAA games he’s worked on that’d blow your mind), and he’s approachable and smart and everyone listens to him. If that’s not influence, I don’t know what is.

      Outstanding Technology Individual of the Year

      Mary Matthews from Memrica

      Mary describes herself as “passionate about using technology to make a difference to people’s lives” and, unlike quite a few people who might say that, I think she actually means it. It was marvellous to see Memrica get recognised as part of the UberPITCH consultancy earlier this year, and her trip out to meet Travis Kalanick not only will have helped her continue her long history of doing good tech things but also helped elevate Birmingham’s profile as a place for internationally recognised startups. That’s pretty outstanding, in my opinion.

      Runner up: Jess Rose


      Best Angel or Seed Investor of the Year: no nomination here because I have no idea! I know a couple, but haven’t worked with them.

      Graduate Developer of the Year: no nomination here because I don’t know enough graduates. I’d have nominated @jackweirdy if he hadn’t left us :)

      Developer of the Year: no nomination here because, well, too contentious. I don’t know who I’d pick as the best, and I do know that everyone I don’t pick will never buy me a pint again, so I’m not sure who to say here. Maybe I should have just picked myself :-)

      Now, your turn

      Maybe you agree, maybe you don’t. That’s what I think. You will notice that I primarily care about the tech life of the city; if you do a bunch of good stuff here in Birmingham and you’re proud of that, I like what you do. If you do interesting things but never talk about them here in the city, I’m less interested in your things. Perhaps you have different criteria: you should now go and say what you think. Go and add your nominations, Birmingham people; let’s get everyone’s voices heard.

      1. sorry, Dan; Fusion just tips it for TTS, but maybe you should run a conference as well to lobby for the vote :)

      16 October, 2016 06:02PM

      Svetlana Belkin: Goals for Z Cycle And Reflection on Goals from Y Cycle

      It’s hard to believe that Ubuntu 16.10 is already released (I think I may have lost track of time this cycle) and it seems that it’s time for the next cycle’s goals.  But first, I wish to reflect on the goals from the last cycle.  I found out that I’m (somehow) in no mood for coding and/or hacking, but I was able to do something for Linux Padawan, which was BuddyPress, but it needs tweaking to get it to work right.

      As for this cycle’s goals, they will be centered around the Grailville Pond project, Ubuntu (Touch), and Linux Padawan:

      Grailville Pond Project

      Work on the Raspberry Pi as I stated here, and also work on a temperature inversion catching script in R.

      Ubuntu (Touch)

      Work on a demo, because I’m planning to go to Ohio Linux Fest next year and I want to bring something cool.

      Linux Padawan

      Work on community building in order to increase growth, and also try to get BuddyPress to work.

      Hopefully this time I can complete them.

      16 October, 2016 04:57PM

      October 15, 2016

      Kubuntu: Kubuntu 16.10 Released!



      We, the Kubuntu Team, are very happy to announce that Kubuntu 16.10 is finally here!


      After 6 months of hard but fun work we have a bright new release for you all!


      We packaged some great updates from the KDE Community such as:

      – Plasma 5.7.5
      – Applications 16.04.3
      – Frameworks 5.26.0

      We have also updated to version 4.8 of the Linux kernel, with improvements across the board such as Microsoft Surface 3 support.

      For a list of other application updates, upgrading notes and known bugs be sure to read our release notes!

      Download 16.10 or read about how to upgrade from 16.04.

      15 October, 2016 10:49PM

      Aaron Honeycutt: A very bright future ahead.


      It’s only halfway through October, but it has already been a very busy month for us at Kubuntu. We have welcomed Rik Mills (acheronuk on IRC) as a new Kubuntu/Ubuntu member and Clive Johnston (clivejo on IRC) as a Kubuntu Developer, and pushed a new Kubuntu release out the door!

      Be sure to let us know how much you love it in the #kubuntu-devel IRC Channel, Telegram group or the Mailing List. Your reply might be featured in the next Kubuntu Podcast!

      15 October, 2016 01:34PM

      hackergotchi for Blankon developers

      Blankon developers

      Ahmad Haris: Mi Notebook Air 12.5″ with openSUSE Tumbleweed

      Well … I bought a new machine for fun: the Xiaomi Mi Notebook Air 12.5″ (http://xiaomi-mi.com/notebooks/xiaomi-mi-notebook-air-125-silver/). It is a good machine, at a good price compared to the Apple MacBook 12. 🙂

      I wiped the Chinese version of Microsoft Windows that was already on it and replaced it with BlankOn Linux for a while. My plan is eventually to put openSUSE Tumbleweed on it.

      The first installation attempt did not go well: my pointer was not working, so I used the keyboard instead. Another issue: if the USB installer is plugged in before the computer is powered on, the installation does not go well (in my experience, the system installs onto the USB drive itself).

      After the installation finished, everything worked well.

      openSUSE Tumbleweed on Mi Notebook Air 12.5″

      15 October, 2016 01:12PM

      Ahmad Haris: BlankOn Tambora on the Mi Notebook Air 12.5″

      On a whim I tried (read: bought) a Xiaomi Mi Notebook Air 12.5″ (http://xiaomi-mi.com/notebooks/xiaomi-mi-notebook-air-125-silver/). When it arrived I powered it on and immediately wiped it to install BlankOn Tambora (http://cdimage.blankonlinux.or.id/blankon/livedvd-harian/current/).

      First impression: this laptop is quite cool and nicely made, with neat build quality and a price that is not too high.

      The BlankOn installation went quite smoothly; all hardware was detected properly. No significant problems. It just works.

      BlankOn Tambora on the Mi Notebook Air 12.5″

      The battery also lasts a long time. So far I have only used it for browsing, playing YouTube and watching films. 🙂

      15 October, 2016 12:56PM

      hackergotchi for ev3dev


      Kernel Release Cycle 16

      There is nothing too exciting in this release. Most of the changes are for adding support for a new BeagleBone cape that is being developed.

      This release includes one bug fix. Support for polling the state attribute of the tacho motor class has been restored for EV3. (It is still broken for BrickPi and PiStorms.)

      Here is an example of how to make use of this feature:

      #!/usr/bin/env python3
      # 1. Connect motors first!
      # 2. Run this program in a terminal
      # 3. Control the motors in a second terminal
      # 4. Watch the output of this program
      # 5. Press CTRL+C to stop this program
      import glob
      import select
      motors = glob.glob('/sys/class/tacho-motor/motor*')
      p = select.poll()
      lookup = {}
      for m in motors:
          # get the output port name for each motor
          with open(m + '/address', 'r') as a:
              name = a.read().strip()
          # get a handle to the state attribute
          s = open(m + '/state', 'r')
          # register the state attribute with the poll object. POLLPRI is important!
          p.register(s, select.POLLPRI)
          # save these for later
          lookup[s.fileno()] = (name, s)
      while True:
          # wait for an event
          for fd, event in p.poll():
              # get the info for the motor that caused the event
              name, s = lookup[fd]
              if event & select.POLLPRI:
                  # sysfs attributes must be re-read from the start
                  s.seek(0)
                  # print info about the event
                  print(name, s.read().strip())
              else:
                  print(name, 'Poll error!')

      Version Info

      In this round of releases, we have:

      • v4.4.24-16-ev3dev-ev3 for EV3.
      • v4.4.24-ti-rt-r55-16-ev3dev-bb.org for BeagleBone.
      • v4.4.23-16-ev3dev-rpi for Raspberry Pi 0/1.
      • v4.4.23-16-ev3dev-rpi2 for Raspberry Pi 2/3.

      You can also find this kernel in snapshot build 2016-10-15.


      For a more complete changelog, follow the link for your platform: EV3, BB, RPi or RPi2.

      15 October, 2016 12:00AM by @dlech

      October 14, 2016

      hackergotchi for Ubuntu developers

      Ubuntu developers

      Ubuntu Studio: Ubuntu Studio 16.10 Released

      We are happy to announce the release of our latest version, Ubuntu Studio 16.10 Yakkety Yak! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

      14 October, 2016 11:26PM

      hackergotchi for ArcheOS


      Soil triangle integration in a PostgreSQL based system for archaeological recording sheets

      This is the second presentation we gave at ArcheoFOSS 2016. This time the topic is more related to geoarchaeology and regards geTTexture (the open source application we developed in order to speed up the sedimentation test).

      Here below is the link to the original presentation, for the reader who wants to see it directly online:


      For those who prefer to see it on youtube, I just uploaded it on our channel:

      As in the last post, I report below a short abstract briefly describing each slide of the presentation:

      SLIDE 1

      Title and overview

      SLIDE 2

      Compiling the archaeological recording sheet is one of the most time-expensive operations during an archaeological project, whether it is done manually...

      SLIDE 3

      ... or using a database.

      SLIDE 4

      Considering the Italian standards (ICCD, "Istituto Centrale per il Catalogo e la Documentazione"), new archaeologists often have difficulties in describing the composition of an archaeological layer.

      SLIDE 5 and 6

      SLIDE 7 and 8

      No particular difficulties are detected in describing the artificial elements.

      SLIDE 9 and 10

      Describing the organic and organogenic elements is considered a little more complicated.

      SLIDE 11 and 12

      The geological field is considered the most difficult one.

      SLIDE 13

      Geological materials are split into two categories: skeleton and fine earth.

      SLIDE 14 and 15

      The skeleton is normally simpler to identify (both in the field and in the lab).

      SLIDE 16 and 17

      Fine earth is perhaps the most complicated archaeological element to identify in the field, while specialists (geoarchaeologists) need to use specific equipment in the lab.

      SLIDE 18

      Fine earth definition in the field is often carried out with non-metric and subjective methodologies.

      SLIDE 19

      Examples are the feel, ball and ribbon tests.

      SLIDE 20

      The sedimentation test gives more objective results with a minimum metric value.

      SLIDE 21

      Arc-Team validated the use of the sedimentation test also in emergency excavations (which have stricter timetables than other archaeological projects).

      SLIDE 22

      Thanks to +Mattia Segata (Arc-Team's geoarchaeologist at ATLAB), the basic methodology has been improved by considering Stokes' Law.
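      Not in the original slides: for readers unfamiliar with it, Stokes' Law relates a small particle's settling velocity to its radius and density, which is what lets a timed sedimentation test yield metric grain-size estimates. A minimal sketch, with purely illustrative particle and fluid values:

      ```python
      # Settling velocity of a small sphere per Stokes' Law:
      #   v = (2/9) * (rho_particle - rho_fluid) * g * r**2 / mu
      def stokes_velocity(radius_m, rho_particle, rho_fluid, g=9.81, mu=1.0e-3):
          """Terminal settling velocity (m/s) of a sphere in a viscous fluid."""
          return (2.0 / 9.0) * (rho_particle - rho_fluid) * g * radius_m ** 2 / mu

      # Illustrative values: a 10 µm silt grain (quartz, ~2650 kg/m³) in water
      v = stokes_velocity(1e-5, 2650.0, 1000.0)
      print(round(v * 1000, 2), 'mm/s')  # roughly 0.36 mm/s
      ```

      Because velocity scales with the square of the radius, finer fractions settle much more slowly, which is why each grain-size class needs its own waiting time in the test.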

      SLIDE 23

      +Giuseppe Naponiello (Arc-Team's database and WebGIS expert) improved a PostgreSQL database, developed around the Italian archaeological recording sheet. The database is able to integrate the data coming from the sedimentation test.

      SLIDE 24

      Future integrations are planned for basic analytical chemistry analyses in the field.

      SLIDE 25

      And for more specific laboratory analyses (e.g. Energy Dispersive X-ray Spectrometry).

      SLIDE 26

      The database can easily be integrated into a WebGIS.

      SLIDE 27

      This slide is just a demonstration of the software (the code is taken from a prototype).

      SLIDE 28

      This slide is just an example of one of the video tutorials Arc-Team is producing to explain the sedimentation test and the use of geTTexture.

      SLIDE 29

      geTTexture will be one of the open source applications for archaeology which Arc-Team is developing and which will compose the Arc-Tool suite.

      SLIDE 30

      Another extension of geTTexture that Arc-Team is working on is related to colorimetry. The idea is to integrate a tool to record non-metric analyses

      SLIDE 31

      or metric data coming from Open Hardware devices (e.g. the Public Lab spectrometer).

      SLIDE 32

      Thank you for your attention.

      Have a nice day!

      14 October, 2016 01:10PM by Luca Bezzi (noreply@blogger.com)

      hackergotchi for Ubuntu developers

      Ubuntu developers

      Will Cooke: What to do with Unity 8 now

      As you’re probably aware Ubuntu 16.10 was released yesterday and brings with it the Unity 8 desktop session as a preview of what’s being worked on right now and a reflection of the current state of play.

      You might have already logged in and kicked the proverbial tyres. If not, I would urge you to do so. Please take the time to install a couple of apps as laid out here:


      The main driver for getting Unity 8 into 16.10 was the chance to get it into the hands of users so we can gather feedback and bug reports. If you find something doesn’t work, please log a bug. We don’t monitor every forum or comments section on the web, so the absolute best way to provide your feedback to people who can act on it is a bug report with clear steps on how to reproduce the issue (in the case of crashes) or an explanation of why you think a particular behaviour is wrong. This is how you get things changed or fixed.

      You can contribute to Ubuntu by simply playing with it.

      Read about logging bugs in Ubuntu here: https://help.ubuntu.com/community/ReportingBugs

      And when you are ready to log a bug, log it against Unity 8 here: https://bugs.launchpad.net/ubuntu/+source/unity8




      14 October, 2016 01:09PM

      Jonathan Riddell: KDE 1 neon LTS Released: 20 Years of Supporting Freedom

      To celebrate KDE’s 20th birthday today, the great KDE developer Helio Castro has launched KDE 1, the ultimate in long term support software with a 20 year support period.

      KDE neon has now, using the latest containerised continuous integration technologies, released KDE 1 neon Docker images for your friendly local devop to deploy.

      Give it a shot with:

      apt install docker xserver-xephyr
      adduser <username> docker
      <log out and in again>
      Xephyr :1 -screen 1024x768 &
      docker pull jriddell/kde1neon
      docker run -v /tmp/.X11-unix:/tmp/.X11-unix jriddell/kde1neon

      (The Docker image isn’t optimised at all and probably needs to download 10GB, have fun!)

      Facebooktwittergoogle_pluslinkedinby feather

      14 October, 2016 11:12AM

      hackergotchi for Maemo developers

      Maemo developers

      How to draw a line interpolating 2 colors with opencv

      The built-in OpenCV line drawing function allows drawing a variety of lines. Unfortunately it does not allow drawing a gradient line that interpolates the colors at its start and end.

      However, implementing this ourselves is quite easy:

      #include <opencv2/opencv.hpp>

      using namespace cv;
      void line2(Mat& img, const Point& start, const Point& end,
                           const Scalar& c1,   const Scalar& c2) {
          LineIterator iter(img, start, end, LINE_8);
          for (int i = 0; i < iter.count; i++, iter++) {
             double alpha = double(i) / iter.count;
             // note: using img.at<T>(iter.pos()) is faster, but
             // then you have to deal with mat type and channel number yourself
             img(Rect(iter.pos(), Size(1, 1))) = c1 * (1.0 - alpha) + c2 * alpha;
          }
      }
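      Not part of the original post: for readers working from the Python bindings, the same start-to-end color interpolation can be sketched in pure NumPy (the gradient_line helper and the coordinates below are made up for illustration):

      ```python
      import numpy as np

      def gradient_line(img, start, end, c1, c2):
          """Draw a line from start to end, linearly blending color c1 into c2.
          start/end are (x, y) tuples; img is an H x W x channels float array."""
          # one sample per pixel along the longer axis
          n = int(max(abs(end[0] - start[0]), abs(end[1] - start[1]))) + 1
          xs = np.linspace(start[0], end[0], n).round().astype(int)
          ys = np.linspace(start[1], end[1], n).round().astype(int)
          # alpha runs 0 -> 1 from start to end, broadcast over the channels
          alphas = np.linspace(0.0, 1.0, n)[:, None]
          img[ys, xs] = (1.0 - alphas) * np.asarray(c1, dtype=float) \
                        + alphas * np.asarray(c2, dtype=float)
          return img

      canvas = np.zeros((100, 100, 3), dtype=float)   # height x width x channels
      gradient_line(canvas, (10, 10), (90, 80), (255, 0, 0), (0, 0, 255))
      ```

      This mirrors the C++ loop above: each sampled pixel gets `c1 * (1 - alpha) + c2 * alpha`, with alpha proportional to the distance along the line.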

      14 October, 2016 08:31AM by Pavel Rojtberg (pavel@rojtberg.net)

      hackergotchi for Ubuntu developers

      Ubuntu developers

      The Fridge: Juju 2.0 is here!

      Juju 2.0 is here! This release has been a year in the making. We’d like to thank everyone for their feedback, testing, and adoption of juju 2.0 throughout its development process! Juju brings refinements in ease of use, while adding support for new clouds and features.

      New to juju 2?

      You can check our documentation at https://jujucharms.com/docs/2.0/getting-started

      Need to install it?

      If you are running Ubuntu, you can get it from the juju stable ppa:

      sudo add-apt-repository ppa:juju/stable
      sudo apt update
      sudo apt install juju-2.0

      Or install it from the snap store

      snap install juju --beta --devmode

      Windows, Centos, and MacOS users can get a corresponding installer at:


      Want to upgrade to GA?

      Those of you running an RC version of juju 2 can upgrade to this release by running:

      juju upgrade-juju

      Feedback Appreciated!

      We encourage everyone to subscribe to the mailing list at juju at lists.ubuntu.com and join us in #juju on freenode. We would love to hear your feedback and learn how you are using juju.

      Originally posted to the juju mailing list on Fri Oct 14 04:34:41 UTC 2016 by Nicholas Skaggs

      14 October, 2016 04:49AM

      hackergotchi for Blankon developers

      Blankon developers

      Rahman Yusri Aftian: BlankOn at OpenSUSE Asia SUMMIT 2016

      On Friday, September 30, 2016, I traveled from Surabaya Gubeng station to Yogyakarta Tugu station, departing at 17:00 and arriving at 21:30.
      This trip was full of worries, as I was leaving sick children at home.
      But I was determined to attend the openSUSE Asia Summit 2016 at UIN Sunan Kalijaga, Yogyakarta.

      I attended the openSUSE Asia Summit 2016 as a speaker on “BlankOn Package Development on openSUSE”.


      14 October, 2016 02:20AM

      October 13, 2016

      hackergotchi for Ubuntu developers

      Ubuntu developers

      Valorie Zimmerman: Kubuntu 16.10 is released today

      Kubuntu is a friendly, elegant operating system. The system uses the Linux kernel and Ubuntu core. Kubuntu presents KDE software and a selection of other essential applications.

      We focus on elegance and reliability. Please join us and contribute to an exciting international Free and Open Source Software project.

      Install Kubuntu and enjoy friendly computing. Download the latest version:

      Download kubuntu 64-bit (AMD64) desktop DVD    Torrent

      Download kubuntu (Intel x86) desktop DVD            Torrent

      For PCs with the Windows 8 logo or UEFI firmware, choose the 64-bit download. Visit the help pages for more information.

      Ubuntu Release notes
      For a full list of issues and features common to Ubuntu, please refer to the Ubuntu release notes.
      Known problems
      For known problems, please see our official Release Announcement.

      13 October, 2016 11:38PM by Valorie Zimmerman (noreply@blogger.com)

      Lubuntu Blog: Lubuntu 16.10 (Yakkety Yak) Released!

      Thanks to all the hard work from our contributors, Lubuntu 16.10 has been released! With the codename Yakkety Yak, Lubuntu 16.10 is the 11th release of Lubuntu, with support until July 2017. We even have Lenny the Lubuntu mascot dressed up for the occasion! What is Lubuntu? Lubuntu is an official Ubuntu flavor based on […]

      13 October, 2016 11:19PM

      Sean Davis: Xubuntu 16.10 “Yakkety Yak” Released

      Another six months have come and gone, and it’s been a relatively slow cycle for Xubuntu development.  With increased activity in Xfce as it heads towards 4.14 and full GTK+3 support, few changes have...

      13 October, 2016 10:30PM

      Xubuntu: Xubuntu 16.10 Released

      The Xubuntu team is pleased to announce the immediate release of Xubuntu 16.10. Xubuntu 16.10 is a normal release and will be supported for 9 months.

      This release has seen little visible change since April’s 16.04; however, much has been done towards supplying Xubuntu with Xfce packages built with GTK3, including the porting of many plugins and Xfce Terminal to GTK3. If you wish to test them, the GTK3 ports can be installed from one of the team’s development PPAs.

      The final release images are available as Torrents and direct downloads from

      As the main server will be very busy in the first few days after release, we recommend using the Torrents wherever possible.


      For support with the release, navigate to Help & Support for a complete list of methods to get help.

      Known Issues

      • Thunar is still the subject of a few bugs, though they all appear to revolve around similar issues. A further patch has been applied since 16.04.
      • On some hardware the password is sometimes required twice when returning from suspend.

      For more information on affecting bugs please refer to the Release Notes.

      Thanks to all who have contributed to Xubuntu, not least those who test for us when called upon – anyone can do that for us all. We won’t name you all, but you know who you are. Thank you on behalf of everyone installing Xubuntu – you all rock!

      13 October, 2016 04:43PM

      hackergotchi for Ubuntu


      Ubuntu 16.10 (Yakkety Yak) released

      Codenamed “Yakkety Yak”, Ubuntu 16.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

      Under the hood, there have been updates to many core packages, including a new 4.8-based kernel, a switch to gcc-6, and much more. Ubuntu Desktop has seen incremental improvements, with newer versions of GTK and Qt, updates to major packages like Firefox and LibreOffice, and stability improvements to Unity.

      Ubuntu Server 16.10 includes the Newton release of OpenStack, alongside deployment and management tools that save devops teams time when deploying distributed applications – whether on private clouds, public clouds, x86, ARM, or POWER servers, z System mainframes, or on developer laptops. Several key server technologies, from MAAS to juju, have been updated to new upstream versions with a variety of new features.

      The newest Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu are also being released today. More details can be found for these at their individual release notes:


      Maintenance updates will be provided for 9 months for all flavours releasing with 16.10.

      To get Ubuntu 16.10

      In order to download Ubuntu 16.10, visit:


      Users of Ubuntu 16.04 will be offered an automatic upgrade to 16.10 if they have selected to be notified of all releases, rather than just LTS upgrades. For further information about upgrading, see:


      As always, upgrades to the latest version of Ubuntu are entirely free of charge.

      We recommend that all users read the release notes, which document caveats, workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:


      Find out what’s new in this release with a graphical overview:

      If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

      Help Shape Ubuntu

      If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:


      About Ubuntu

      Ubuntu is a full-featured Linux distribution for desktops, laptops, netbooks and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

      Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:


      More Information

      You can learn more about Ubuntu and about this release on our website listed below:


      To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


      Originally posted to the ubuntu-announce mailing list on Thu Oct 13 15:14:49 UTC 2016 by Adam Conrad, on behalf of the Ubuntu Release Team

      13 October, 2016 04:10PM by lyz

      hackergotchi for Ubuntu developers

      Ubuntu developers

      Ubuntu Insights: Unity 8 preview session in Ubuntu 16.10 Yakkety Yak

      Ubuntu 16.10 Yakkety Yak has landed and brings with it the Unity 8 technical preview desktop session. Unity 8 has been the face of the Ubuntu phone and tablet for a few years and has quickly evolved its convergence feature set, allowing it to work seamlessly between form factors such as phones, tablets and desktops. You can now choose to log in to a Unity 8 session directly from the greeter.

      Choose your session

      The current experience comes with a minimal set of applications: a browser, a terminal and system settings. Here are a few things you might like to try to flesh out your Unity 8 session and get the most from it.

      Window Management

      Unity 8 is designed from the ground up to work well with touch based displays. Try pressing three fingers on a window to open the touch friendly controls. You can move and resize windows with controls which work well for a touchscreen.

      Touch controls

      Enable More Scopes

      Unity 8 ships with only the Apps scope enabled by default. There are many others installed and you can enable them by:

      1. Click on the up arrow at the bottom of the Apps Scope (or if you are using a touch screen drag it up)

      Manage scopes

      2. Browse the list of available scopes and enable them by clicking on the star icon
        When you click the star you will notice the scope move to the top of the list in the Home section
      3. When you are done you can return to the main scope by clicking on the left arrow next to “Manage”
      4. You can now swipe left and right between the scopes you have enabled

      Enable Rich Multi-Media Scopes

      You can browse and play your media directly from Scopes or launch the Media Player app. Make sure you have some supported media in your Music and Videos directories first.

      1. Open the Terminal App and install the required software:
         sudo apt install mediaplayer-app mediascanner2.0 unity-scope-mediascanner2 ubuntu-restricted-extras
      2. Open the Manage Scopes menu as in the previous step and enable the My Music, Music and My Videos scopes.
      3. The Media Scanner may take a few minutes to index your media. Try logging out and logging back in again if nothing appears in the Scopes.
      4. Scroll through to the Music scope and you should see your media, complete with artwork where available.

      Music Scope

      You can click play directly from the Scope to listen.

      My Music Scope

      If you have videos available as well, scroll through to the My Videos scope, find a video and click Play. The Media Player app will open and play your video.

      Video playback

      Add More Apps

      It wasn’t possible to include all the Unity 8 core apps in the 16.10 image but you can easily install some additional apps as Snaps and from a PPA.

      Snappy Apps!

      These Snap apps are cutting edge and will be updated frequently. Because they are packaged as Snaps, you will not be stuck with the version of the app which shipped with 16.10; you will benefit from the freshest version as soon as it is available. (There is a known issue with Snaps opening two windows. This will be fixed soon.) Once you have installed the apps, pull down the Apps Scope to refresh.

      • Gallery App
        • sudo snap install --edge --devmode gallery-app
      • Camera App
        • sudo snap install --edge --devmode camera-app
      • Address Book App
        • sudo snap install --edge --devmode address-book-app
      • Calendar App
        • sudo snap install --edge --devmode ubuntu-calendar-app

      Install a snap

      PPA Apps

      You can also add this PPA to give you access to other Unity 8 apps which are built as .deb packages:

      • To add the PPA open the Terminal app and run:
        •  sudo add-apt-repository ppa:convergent-apps/testing
      • Then run:
        • sudo apt update
      • Music App
        • sudo apt install music-app
      • Calculator
        • sudo apt install ubuntu-calculator-app
      • Doc Viewer
        • sudo apt install ubuntu-docviewer-app

      deb apps

      Enable Xorg dependent Applications

      Unity 8 uses the Mir display server but many older applications need the Xorg display server to work. To support these applications you can install Libertine which will create a container to host the applications and Xmir to provide the Xorg display server. Install the Libertine packages from the archive by typing:

       sudo apt install libertine libertine-scope libertine-tools

      Once installed you will see the Libertine manager icon in the Apps Scope.

      • Open Libertine and click on Install
      • Leave the input boxes blank and click OK
      • Wait while Libertine creates the container and installs the necessary packages. This will take a few minutes.
      • Once this is complete click on the “Ubuntu ‘Yakkety Yak’” text and then click on the plus symbol
      • Click “Enter package name”, enter “gimp” and click OK
      • Wait while GIMP is installed.
      • Once complete you should see a GIMP icon in the Apps Scope; click on this and you’re running GIMP under Unity 8 and Mir.

      Points to note

      Mir runs on vty8, whereas Xorg runs on vty7. If you switch to a text console (e.g. vty1) and want to get back to your Unity 8 graphical session press ctrl-alt-f8.

      13 October, 2016 03:01PM

      hackergotchi for Tails


      Why we need donations

      Today we are starting a donation campaign to fund our work in 2017.

      Unlike most other tools on the Internet, Tails comes for free as in freedom. We are not selling your data or sending you targeted advertising, nor will we ever sell our project to a big company. We give out Tails for free simply because everybody deserves to be protected from surveillance and censorship, but also because being free software is a necessary requirement for our tools to be safe and protect you as intended. If our source code were closed, there would be no way of actually verifying that our software is trustworthy.

      Since 2014, we raised 210'000€ on average each year, coming from:

      • People like you
      • Private companies like Mozilla or DuckDuckGo
      • Foundations and NGOs like Hivos and Access Now
      • Entities related to the US government like the Open Technology Fund (OTF) or the National Democratic Institute (NDI)

      Related to US government: 34%, Foundations & NGOs: 34%, Individuals: 17%, Companies: 15%

      We often hear complaints about the fact that many software projects that are meant to fight surveillance, like Tor and Tails, get a lot of funding from the US government whose own surveillance projects are severely criticized. We completely share this concern and we will worry about our accountability and sustainability as long as the survival of our project depends on a few small grants, some of them coming from organizations linked to governments.

      Now, we would like you to think about it: where should our funding come from?

      The answer is clear to us: the survival of Tails should be guaranteed by our users themselves, so that in return, we can continue to use our money in their best interest, with complete independence.

      From anonymized statistics on our website we know that Tails is used by around 18 000 people every day. If each of them gave 12€, the price of a USB stick, our budget for the whole year would be raised within one day. As you can see, funding Tails through donations is realistic, and our budget is ridiculously small compared to the multibillion-dollar companies and agencies running the surveillance business.
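      A quick sanity check of that arithmetic, using only the numbers quoted above:

      ```python
      daily_users = 18000         # daily Tails users, from the post
      donation_eur = 12           # price of a USB stick, from the post
      avg_annual_budget = 210000  # average raised per year since 2014, from the post

      # What one day of universal 12-euro donations would bring in
      one_day_total = daily_users * donation_eur
      print(one_day_total)  # 216000 euros, more than a whole year's budget
      ```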

      But many of our users could actually get in trouble if they donated to an anti-surveillance tool like Tails. So when donating to Tails you are also helping all of these people by keeping Tails alive. Please consider setting up a yearly or monthly donation.

      If you want Tails to remain independent, please take one minute to make a donation.

      13 October, 2016 03:00PM