December 16, 2017

Cumulus Linux

Web-scale data: Are you letting stability outweigh innovation?

At Cumulus Networks, we’re dedicated to listening to feedback about what people want from their data centers and developing products and functionality that the industry really needs. As a result, in early 2017, we launched a survey all about trends in data center and web-scale networking to get a better understanding of what the landscape looks like. With over 130 respondents from various organizations and locations across the world, we acquired some pretty interesting data. This blog post will take you through a little teaser of what we discovered (although if you just can’t wait to read the whole thing, you can check out the full report here) and a brief analysis of what this data means. So, what exactly are people looking for in their data centers this coming year? Let’s look over some of our most fascinating findings.

What initiatives are organizations most invested in?

There are a lot of exciting ways to optimize a data center, but what major issues are companies most concerned with? Well, according to the data we acquired, cost-effective scalability is the most pressing matter on organizations’ minds. Improved security follows behind at a close second, as we can tell from the graph below.

What is the primary network challenge that you anticipate your organization might face in the next 3 years?


In addition to these necessary initiatives, companies are also looking to incorporate new innovations. As our survey demonstrates, the networking features that people are most interested in are automation, network virtualization, and monitoring and troubleshooting.

As it relates to new networking gear/software, which features do you really care about? (select all that apply)


What is standing in the way of innovation?

Web-scale practices can provide organizations with the solutions and innovations they need to optimize their data centers. So, what’s holding them back from achieving those goals? From what we gathered, it appears that there are two factors at play. As demonstrated in the graph below, the main reasons companies are reluctant to make the switch are that they are still reliant on traditional networking and do not support open networking principles.

What has prevented your organization from adopting web-scale and open networking principles?

It seems that, despite the desire to improve the data center, people let the status quo hold them back. In this respect, stability is also a major concern for organizations since they are reluctant to move away from what they are comfortable with, even if it means sacrificing the innovations they hope to achieve.

Conclusion: it’s all about security, scalability, and stability

Our research makes it very clear that the initiatives organizations are most concerned with are security and scalability. However, in spite of the desire to innovate, they’re letting themselves be held back by clinging to traditional solutions. So, if you want to ensure that your data center isn’t being brought down by the cost of stability, check out the rest of our web-scale data report. You’ll find more of the intriguing data points we pulled from our survey, plus tips and tricks about bringing security and scalability to your data center.

The post Web-scale data: Are you letting stability outweigh innovation? appeared first on Cumulus Networks Blog.

16 December, 2017 03:00PM by Madison Emery

VyOS


A VyOS 1.2.0-alpha image with FRR instead of Quagga is available for testing (and we've found a GPL violation in VyOS)


Now that the 1.1.8 release candidate is out and is (hopefully) being tested by community members, we can get back to building the future of VyOS.

It's been obvious that Quagga needs a replacement, even more so since we've been using a Quagga fork inherited from Vyatta Core that includes features that never made it into mainline Quagga. Mainline Quagga still doesn't have usable commands for configuring multiple routing tables, nor do its maintainers seem to actively accept patches that would be OS-specific.

The options were discussed many times and so far it seems FreeRangeRouting is the best option. It's a fork of Quagga that is being actively developed, actively accepts contributions, and already includes a number of features that Quagga lacks, such as support for network namespaces, PIM-SM, working IS-IS and more. There's also work being done on non-disruptive config reload.

While FRR is more or less a drop-in replacement for Quagga, it's not identical, and many CLI adjustments will be needed to make VyOS work with it. It needs a lot of testing. For this I've built a custom image that has vyatta-quagga replaced with FRR.

You can download the image here: Please test it and report any issues you find in the routing protocols configuration. It's obviously experimental and you shouldn't use it in real routers; the best way to test is to load your production configs into test virtual machines.

GPL violation

Now to the GPL violation we've found. That violation in fact has been there for over five years and no one noticed it! Then again, it's relatively indirect and subtle.

Quagga is licensed under the GPL (and so is FRR). In Vyatta/VyOS, Quagga has been built with SNMP support, so it links with net-snmp. In turn, net-snmp is built with SSL support and links with OpenSSL. This is where the problem is: OpenSSL is licensed under a four-clause BSD-style license that is not compatible with the GPL.
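The offending linkage chain is easy to inspect on a live system with ldd. A quick sketch, where the binary and library paths are illustrative and vary between installs:

```shell
# Quagga's daemons link against net-snmp...
ldd /usr/sbin/bgpd | grep -i snmp
# ...and net-snmp in turn links against OpenSSL.
ldd /usr/lib/libnetsnmp.so | grep -i ssl
```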

Sadly, there is no easy way out, so it will take some time to fix this violation. The options are:

  • Build Quagga/FRR without SNMP support, which means routing protocol data will not be available through SNMP
  • Build net-snmp without SSL support, which means SNMPv3 will stop working
  • Patch net-snmp to support another cryptographic library that is GPL-compatible

The third option is the hardest to implement, but it's the most appealing of all since all functionality will be preserved. We'd like to hear your suggestions regarding libraries that would be license-compatible and that can co-exist with OpenSSL.

16 December, 2017 07:35AM by Daniil Baturin


Ubuntu developers

Serge Hallyn: Pockyt and edbrowse

I use r2e and pocket to follow tech related rss feeds. To read these I sometimes use the nook, sometimes use the pocket website, but often I use edbrowse and pockyt on a terminal. I tend to prefer this because I can see more entries more quickly, delete them en masse, use the terminal theme already set for the right time of day (dark and light for night/day), and just do less clicking.

My .ebrc has the following:

# pocket get
function+pg {
!pockyt get -n 40 -f '{id}: {link} - {excerpt}' -r newest -o ~/readitlater.txt > /dev/null 2>&1
e ~/readitlater.txt
}

# pocket delete
function+pd {
!awk -F: '{ print $1 }' ~/readitlater.txt > ~/pocket.txt
!pockyt mod -d -i ~/pocket.txt
}

It’s not terribly clever, but it works, both on Linux and macOS. To use these, I start up edbrowse, and type <pg. This will show me the latest 40 entries. Any which I want to keep around, I delete (5n). Any which I want to read, I open (4g) and move to a new workspace (M2).

When I'm done, any references which I want deleted are still in ~/readitlater.txt. Those which I want to keep are deleted from that file. (Yeah, a bit backwards from normal 🙂 ) At that point I make sure to save (w), then run <pd to delete them from pocket.


The opinions expressed in this blog are my own views and not those of Cisco.

16 December, 2017 05:18AM

VyOS


1.1.8 release is available for download

1.1.8, the major minor release, is available for download from (mirrors are syncing up).

It breaks the semantic versioning convention: while the version number implies a bugfix-only release, it actually includes a number of new features. This is because the 1.2.0 number is already assigned to the Jessie-based release that is still in beta, and not including features that have been in the codebase for a while (a few of them already in production for some users) would feel quite wrong, especially considering the long delay between releases. Overall it's pretty close in scope to the original 1.2.0 release plan before Debian Squeeze was EOLd and we had to switch our effort to getting rid of the legacy that was keeping us from moving to a newer base distro.

You can find the full changelog here.

The release is available for both 64-bit and 32-bit machines. The i586-virt flavour, however, was discontinued since a) according to web server logs and user comments, there is no demand for it, unlike a release for 32-bit physical machines b) hypervisors capable of running on 32-bit hardware went extinct years ago. The current 32-bit image is built with paravirtual drivers for KVM/Xen, VMware, and Hyper-V, but without PAE, so you shouldn't have any problem running it on small x86 boards and testing it on virtual machines.

We've also made a 64-bit OVA that works with VMware and VirtualBox.


Multiple vulnerabilities in OpenSSL, dnsmasq, and hostapd were patched, including the recently found remote code execution in dnsmasq.


Some notable bugs that were fixed include:

  • Protocol negation in NAT not working correctly (it had exactly opposite effect and made the rule match the negated protocol instead)
  • Inability to reconfigure L2TPv3 interface tunnel and session ID after interface creation
  • GRUB not getting installed on RAID1 members
  • Lack of USB autosuspend causing excessive CPU load in KVM guests
  • VTI interfaces not coming back after tunnel reset
  • Cluster failing to start on boot if network links take too long to get up

New features

User/password authentication for OpenVPN client mode

A number of VPN providers (and some corporate VPNs) require that you use user/password authentication and do not support x.509-only authentication. Now this is supported by VyOS:

set interfaces openvpn vtun0 authentication username jrandomhacker
set interfaces openvpn vtun0 authentication password qwerty
set interfaces openvpn vtun0 tls ca-cert-file /config/auth/ca.crt
set interfaces openvpn vtun0 mode client
set interfaces openvpn vtun0 remote-host

Bridged OpenVPN servers no longer require subnet settings

Before this release, OpenVPN would always require subnet settings, so if one wanted to set up an L2 OpenVPN bridged to another interface, they'd have to specify a mock subnet. Not anymore: now if the device-type is set to "tap" and bridge-group is configured, subnet settings are not required.
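A minimal sketch of such a bridged setup, with the interface and bridge names chosen purely for illustration (the exact option names may differ slightly between releases):

```
set interfaces bridge br0
set interfaces ethernet eth1 bridge-group bridge br0
set interfaces openvpn vtun0 mode server
set interfaces openvpn vtun0 device-type tap
set interfaces openvpn vtun0 bridge-group bridge br0
```

No subnet settings are needed; clients get their addresses from whatever serves the bridged segment.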

New OpenVPN options exposed in the CLI

A few OpenVPN options that formerly would have to be configured through openvpn-option are now available in the CLI:

set interfaces openvpn vtun0 use-lzo-compression
set interfaces openvpn vtun0 keepalive interval 10
set interfaces openvpn vtun0 keepalive failure-count 5

Point-to-point VXLAN tunnels are now supported

In earlier releases, it was only possible to create multicast, point-to-multipoint VXLAN interfaces. Now the option to create point-to-point interfaces is also available:
set interfaces vxlan vxlan0 address
set interfaces vxlan vxlan0 remote
set interfaces vxlan vxlan0 vni 10

AS-override option for BGP

The as-override option that is often used as an alternative to allow-as-in is now available in the CLI:

set protocols bgp 64512 neighbor as-override

as-path-exclude option for route-maps

The option for removing selected ASNs from AS paths is available now:
set policy route-map Foo rule 10 action permit
set policy route-map Foo rule 10 set as-path-exclude 64600

Buffer size option for NetFlow/sFlow

The default buffer size was often insufficient for high-traffic installations, which caused pmacct to crash. Now it is possible to specify the buffer size option:
set system flow-accounting buffer-size 512 # megabytes
There are a few more options for NetFlow: source address (can be either IPv4 or IPv6) and maximum number of concurrent flows (on high-traffic machines setting it too low can cause NetFlow data loss):
set system flow-accounting netflow source-ip
set system flow-accounting netflow max-flows 2097152

VLAN QoS mapping options

It is now possible to specify VLAN QoS values:
set interfaces ethernet eth0 vif 42 egress-qos 1:6
set interfaces ethernet eth0 vif 42 ingress-qos 1:6

Ability to set custom sysctl options

There are lots of sysctl options in the Linux kernel and it would be impractical to expose them all in the CLI, since most of them only need to be modified under special circumstances. Now you can set a custom option if you need to:
set system sysctl custom $key value $value
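For example, to illustrate the syntax with a concrete key (the key and value here are only an illustration, not a recommendation):

```
set system sysctl custom net.ipv4.conf.all.arp_ignore value 1
```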

Custom client ID for DHCPv6

It is now possible to specify a custom client ID for the DHCPv6 client:
set interfaces ethernet eth0 dhcpv6-options duid foobar

Ethernet offload options

Under "set interfaces ethernet ethX offload-options" you can find a number of options that control NIC offload.
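For example (the option names here are illustrative; the set that is actually available depends on the release and on what your NIC driver supports, so check tab completion):

```
set interfaces ethernet eth0 offload-options gro on
set interfaces ethernet eth0 offload-options tso off
```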

Syslog level "all"

Now you can specify options for the *.* syslog pattern, for example:

set system syslog global facility all level notice

Unresolved or partially resolved issues

Latest ixgbe driver updates are not included in this release.

The issue with VyOS losing parts of its BGP config, when update-source is set to an address belonging to a dynamic interface such as VTI and the interface takes too long to come up and acquire its address, was resolved in its literal wording, but it's not guaranteed that the BGP session will come up on its own in this case. It's recommended to set the update-source to an address of an interface available right away on boot, for example a loopback or dummy interface.

The issue with changing the routing table number in PBR rules is not yet resolved. The recommended workaround is to delete the rule and re-create it with the new table number, e.g. by copying its commands from 'run show configuration commands | match "policy route "'.
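Sketched with a hypothetical rule (the policy name, rule number and table numbers are made up for illustration):

```
delete policy route FILTER rule 10
set policy route FILTER rule 10 source address
set policy route FILTER rule 10 set table 20
```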


I would like to say thanks to everyone who contributed and made this release possible, namely: Kim Hagen, Alex Harpin, Yuya Kusakabe, Yuriy Andamasov, Ray Soucy, Nikolay Krasnoyarski, Jason Hendry, Kevin Blackham, kouak, upa, Logan Attwood, Panagiotis Moustafellos, Thomas Courbon, and Ildar Ibragimov (hope I didn't forget anyone).

A note on the downloads server

The original server is still having IO performance problems and won't handle the traffic spike associated with a release well. We've set up a server on our new host specially for release images and will later migrate the rest of the old server, including package repositories and the rsync setup.

16 December, 2017 02:13AM by Daniil Baturin

December 15, 2017


Grml developers

Michael Prokop: Usage of Ansible for Continuous Configuration Management

It all started with a tweet of mine:


I received quite some feedback since then and I’d like to iterate on this.

I’ve been a puppet user since ~2008, and since ~2015 ansible has also been part of my sysadmin toolbox. Recently certain ansible setups I’m involved in grew faster than I’d like to see, both in terms of managed hosts/services as well as the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview of what distribution releases systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice and simple-to-use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for usage with ansible on-the-fly. Ansible certainly has its use cases, e.g. bootstrapping systems, orchestration and handling deployments.
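The ansible-cmdb workflow mentioned above is essentially two commands; a sketch, assuming ansible and ansible-cmdb are installed and an ansible_hosts inventory exists:

```shell
# Collect facts from all hosts into a directory tree...
ansible -i ansible_hosts all -m setup --tree out/
# ...then render a static HTML host overview from them.
ansible-cmdb out/ > overview.html
```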

Ansible has an easier learning curve than e.g. puppet and this might seem to be the underlying reason for its usage for tasks it’s not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL results in e.g. each single package manager having its own module (apt, dnf, yum, …), and in too many ways to do something, resulting more often than not in something I’d tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do something like with puppet’s exported resources, which is useful e.g. for central ssh hostkey handling, monitoring checks,…
  • the lack of a resources DAG in ansible might look like a welcome simplification in the beginning, but its absence is becoming a problem when complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank
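To make the "delete all unmanaged files" example from the list above concrete: in puppet this is a single resource declaration, sketched here as a one-off apply (the path is made up for illustration, and puppet needs to be installed):

```shell
# Remove everything in /etc/myapp.d that puppet does not manage.
puppet apply -e 'file { "/etc/myapp.d": ensure => directory, recurse => true, purge => true }'
```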

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppets’ AIO packages). I had and still have all my ups and downs with it, though in 2017 and especially since puppet v5 it works fine enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without having any host specific restrictions like unsupported architectures, memory limitations,… that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even if MCollective/Choria is available. Ansible has its use cases, just not for continuous configuration management for me.

The hardest part is to leave some tool behind once you reached the end of its scale. Once you feel like a tool takes more effort than it is worth you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proof reading and feedback

15 December, 2017 10:29PM


Ubuntu developers

Clive Johnston: Love KDE software? Show your love by donating today

It is the season of giving and if you use KDE software, donate to KDE.  Software such as Krita, Kdenlive, KDE Connect, Kontact, digiKam, the Plasma desktop and many many more are all projects under the KDE umbrella.

KDE have launched a fund drive running until the end of 2017.  If you want to help make KDE software better, please consider donating.  For more information on what KDE will do with any money you donate, please go to

15 December, 2017 10:16PM

Ubuntu Insights: Ubuntu Desktop Weekly Update – December 15th 2017

Here’s the last newsletter for 2017; we’ll be back in the new year. On behalf of the whole Desktop Team I wish you all a happy holiday and a peaceful 2018.


Some important fixes landed in fonts-noto-emoji.  🍔  🍺 🧀

Work continues on removing as much gtk2 as we can. A fresh 18.04 install now only installs three gtk2 rdepends: Firefox, Thunderbird and libgtk2-perl (for debconf). Firefox 58 (in beta) has dropped the gtk2 dependency.

Found and fixed a shadow offset bug in gtk+ related to the fractional scaling work.


The MIR for spice-vdagent is progressing. This adds some nice features to guest systems running with GNOME Boxes such as copy-and-paste between host & guest and arbitrary screen resolution support.

We’ve ported our Selenium autopackage tests for Chromium over to Firefox as well. This makes a wider range of automatic integration tests available to Firefox.

Avahi: We received the patch for advertising services on localhost. It is actually a one-liner! Courtesy of Rithvik Patibandla, one of the GSoC 2017 Open Printing students. 18.04 will ship the complete driverless printing fun, including USB.

We’re looking for more help to verify the Unity 7 fixes. If you can spare an hour or two please see this thread for more info.


A couple more upstream projects have merged our snapcraft.yaml files into their trees. You can see the current state of the GNOME snaps we are maintaining here.


  • Chromium
    • Updated chromium beta to 63.0.3239.84 and updated snap in beta channel
    • Updated chromium dev to 64.0.3282.14 and updated snap in edge channel
    • Updated chromium stable to 63.0.3239.84 (all supported series + bionic-proposed), snap in candidate channel
  • Libreoffice
    • Promoted 5.4.3 snap to stable channel

15 December, 2017 04:50PM

Matthew Helmke: Learn Java the Easy Way

This is an enjoyable introduction to programming in Java by an author I have enjoyed in the past.

Learn Java the Easy Way: A Hands-On Introduction to Programming was written by Dr. Bryson Payne. I previously reviewed his book Teach Your Kids to Code, which is Python-based.

Learn Java the Easy Way covers all the topics one would expect, from development IDEs (it focuses heavily on Eclipse and Android Studio, which are both reasonable, solid choices) to debugging. In between, the reader receives clear explanations of how to perform calculations, manipulate text strings, use conditions and loops, create functions, along with solid and easy-to-understand definitions of important concepts like classes, objects, and methods.

Java is taught systematically, starting with simple and moving to complex. We first create a simple command-line game, then we create a GUI for it, then we make it into an Android app, then we add menus and preference options, and so on. Along the way, new games and enhancement options are explored, some in detail and some in end-of-chapter exercises designed to give more confident or advancing students ideas for pushing themselves further than the book’s content. I like that.

Side note: I was pleasantly amused to discover that the first program in the book is the same as one that I originally wrote in 1986 on a first-generation Casio graphing calculator, so I would have something to kill time when class lectures got boring.

The pace of the book is good. Just as I began to feel done with a topic, the author moved to something new. I never felt like details were skipped and I also never felt like we were bogged down with too much detail, beyond what is needed for the current lesson. The author has taught computer science and programming for nearly 20 years, and it shows.

Bottom line: if you want to learn Java, this is a good introduction that is clearly written and will give you a nice foundation upon which you can build.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

15 December, 2017 03:53PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for December).
  • Thorsten Alteholz did 13 hours.

About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours did not change, staying at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file 35 (we’re a bit behind in CVE triaging apparently).

Thanks to our sponsors

New sponsors are in bold.


15 December, 2017 02:15PM


Linux Mint 18.3 “Sylvia” KDE released!

The team is proud to announce the release of Linux Mint 18.3 “Sylvia” KDE Edition.

Linux Mint 18.3 Sylvia KDE Edition

Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.3 KDE”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.3 KDE

System requirements:

  • 2GB RAM.
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • If you are running the BETA, simply use the Update Manager to apply the available updates.
  • To upgrade from Linux Mint 18, 18.1 or 18.2, read “How to Upgrade to Linux Mint 18.3“.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun with this new release!

15 December, 2017 01:34PM by Linux Mint

Linux Mint 18.3 “Sylvia” Xfce released!

The team is proud to announce the release of Linux Mint 18.3 “Sylvia” Xfce Edition.

Linux Mint 18.3 Sylvia Xfce Edition

Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.3 Xfce”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.3 Xfce

System requirements:

  • 1GB RAM (2GB recommended for a comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • If you are running the BETA, simply use the Update Manager to apply the available updates.
  • To upgrade from Linux Mint 18, 18.1 or 18.2, read “How to Upgrade to Linux Mint 18.3“.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun with this new release!

15 December, 2017 01:32PM by Linux Mint

SparkyLinux



There is a new application available for Sparkers: MotionBox.

From the project page:

MotionBox is a Video Browser for Motion Freedom.
Built to access and traverse decentralized video sources.
Built to load and play multiple video resources.

MotionBox accesses videos directly via DuckDuckGo.
It streams Torrents, Youtube, Dailymotion, Vimeo and SoundCloud.
All of this inside multiple tabs and without ever showing an ad.

MotionBox can be installed from Sparky repository via Synaptic or apt commands:
sudo apt update
sudo apt install motionbox

If you have any problem with running the app, try to run it in a terminal emulator via the command:
motionbox
and provide the output to our forums, please.



15 December, 2017 01:28PM by pavroo


Ubuntu developers

Dimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?

Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.
If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

15 December, 2017 09:09AM by Dimitri John Ledkov

December 14, 2017

Sebastian Dröge: A GStreamer Plugin like the Rec Button on your Tape Recorder – A Multi-Threaded Plugin written in Rust

As Rust is known for “Fearless Concurrency”, that is being able to write concurrent, multi-threaded code without fear, it seemed like a good fit for a GStreamer element that we had to write at Centricular.

Previous experience with Rust for writing (mostly) single-threaded GStreamer elements and applications (also multi-threaded) was already quite successful and promising. And in the end, this new element was also a pleasure to write and probably faster to develop than the equivalent in C. For the impatient: the code, tests and a GTK+ example application (written with the great Rust GTK bindings, but the GStreamer element is also usable from C or any other language) can be found here.

What does it do?

The main idea of the element is that it works like the rec button on your tape recorder. There is a single boolean property called “record”; whenever it is set to true the element passes data through, and whenever it is set to false it drops all data. But unlike the existing valve element, it

  • Outputs a contiguous timeline without gaps, i.e. there are no gaps in the output when not recording. Similar to the recording you get on a tape recorder, you don’t have 10s of silence if you didn’t record for 10s.
  • Handles and synchronizes multiple streams at once. When recording e.g. a video stream and an audio stream, every recorded segment starts and stops with both streams at the same time.
  • Is key-frame aware. If you record a compressed video stream, each recorded segment starts at a keyframe and ends right before the next keyframe to make it most likely that all frames can be successfully decoded

The multi-threading aspect here comes from the fact that in GStreamer each stream usually has its own thread, so in this case the video stream and the audio stream(s) would come from different threads but would have to be synchronized between each other.

The GTK+ example application for the plugin plays a video showing the current playback time with a beep every second, and allows recording this to an MP4 file in the current directory.

How did it go?

This new element was again based on the Rust GStreamer bindings and the infrastructure that I was writing over the last year or two for writing GStreamer plugins in Rust.

As written above, it generally went all fine and was quite a pleasure but there were a few things that seem noteworthy. But first of all, writing this in Rust was much more convenient and fun than writing it in C would’ve been, and I’ve written enough similar code in C before. It would’ve taken quite a bit longer, I would’ve had to debug more problems in the new code during development (there were actually surprisingly few things going wrong during development, I expected more!), and probably would’ve written less exhaustive tests because writing tests in C is just so inconvenient.

Rust does not prevent deadlocks

While this should be clear, and was also clear to myself before, this seems like it might need some reiteration. Safe Rust prevents data races, but not all possible bugs that multi-threaded programs can have. Rust is not magic, only a tool that helps you prevent some classes of potential bugs.

For example, you still can’t stop thinking about lock order when multiple mutexes are involved, and you can’t carelessly use condition variables without making sure that your conditions actually make sense and are accessed atomically. As a wise man once said, “the safest program is the one that does not run at all”, and a deadlocking program is very close to that.
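The lock-order point can be illustrated with a toy sketch (plain standard-library Rust, not code from the element): as long as every code path acquires the two mutexes in the same order, this pair can never deadlock.

```rust
use std::sync::Mutex;

// Toy illustration of lock ordering: every caller locks `first` before
// `second`, so two threads can never hold the locks in opposite order
// and wait on each other forever.
fn transfer(first: &Mutex<i32>, second: &Mutex<i32>, amount: i32) -> (i32, i32) {
    let mut a = first.lock().unwrap(); // always taken first
    let mut b = second.lock().unwrap(); // always taken second
    *a -= amount;
    *b += amount;
    (*a, *b)
}
```

If some other code path locked `second` first, two threads could each hold one lock and block forever on the other, and the compiler would not complain.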

The part about condition variables might be something that can be improved in Rust. Without this, you can easily end up in situations where you wait forever or your conditions are actually inconsistent. Currently Rust’s condition variables only require a mutex to be passed to the functions for waiting for the condition to be notified, but it would probably also make sense to require passing the same mutex to the constructor and notify functions to make it absolutely clear that you need to ensure that your conditions are always accessed/modified while this specific mutex is locked. Otherwise you might end up in debugging hell.
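For reference, the safe pattern with today’s API looks like this minimal sketch (standard library only, unrelated to the element’s actual code): the condition flag lives inside the same Mutex that the Condvar waits on, and the wait happens in a loop to cope with spurious wakeups.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// The condition ("ready") is stored inside the same Mutex the Condvar
// waits on; checking it in a loop guards against spurious wakeups.
fn wait_for_ready() -> bool {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        // Modify the condition only while holding the mutex, then notify.
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
    *ready
}
```

Nothing in the current API forces the notifying thread to use the same mutex, which is exactly the footgun described above.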

Fortunately, during development of the plugin I only ran into a simple deadlock, caused by accidentally keeping a mutex locked for too long so that it conflicted with another one. This is probably an easy trap to fall into, as the most common way of unlocking a mutex is to let the mutex lock guard fall out of scope. That makes it impossible to forget to unlock the mutex, but also makes it less explicit when it is unlocked, and sometimes explicitly unlocking by manually dropping the mutex lock guard is still necessary.
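Explicit unlocking by dropping the guard looks like this in a small sketch (a hypothetical helper, not code from the plugin):

```rust
use std::sync::Mutex;

// The guard normally unlocks when it falls out of scope at the end of the
// function; drop() makes the unlock explicit so that holding the first
// lock cannot overlap with taking the second one.
fn update_both(a: &Mutex<i32>, b: &Mutex<i32>) -> i32 {
    let mut ga = a.lock().unwrap();
    *ga += 1;
    let result_a = *ga;
    drop(ga); // explicitly release `a` before touching `b`

    let mut gb = b.lock().unwrap();
    *gb += result_a;
    *gb
}
```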

So in summary, while a big group of potential problems with multi-threaded programs are prevented by Rust, you still have to be careful to not run into any of the many others. Especially if you use lower-level constructs like condition variables, and not just e.g. channels. Everything is however far more convenient than doing the same in C, and with more support by the compiler, so I definitely prefer writing such code in Rust over doing the same in C.

Missing API

As usual, for the first dozen projects using a new library or new bindings to an existing library, you’ll notice some missing bits and pieces. It was surprising nonetheless that a relatively core part of GStreamer, the GstRegistry API, was missing. True, you usually don’t use it directly, and I only needed it here for loading the new plugin from a non-standard location, but it was still surprising. Let’s hope this was the biggest oversight. If you look at the issues page on GitHub, you’ll find a few other things that are still missing, but nobody has needed them yet, so it’s probably fine for the time being.

Another missing piece I noticed during development was that many manual (i.e. not auto-generated) bindings didn’t have the Debug trait implemented, or not in a very useful way. This is solved now; otherwise I wouldn’t have been able to properly log what is happening inside the element to allow easier debugging later if something goes wrong.

Apart from that there were also various other smaller things that were missing, or bugs (see below) that I found in the bindings while going through all these. But those seem not very noteworthy – check the commit logs if you’re interested.

Bugs, bugs, bugs

I also found a couple of bugs in the bindings. They fall broadly into two categories:

  • Annotation bugs in GStreamer. The auto-generated parts of the bindings are generated from an XML description of the API, that is generated from the C headers and code and annotations in there. There were a couple of annotations that were wrong (or missing) in GStreamer, which then caused memory leaks in my case. Such mistakes could also easily cause memory-safety issues though. The annotations are fixed now, which will also benefit all the other language bindings for GStreamer (and I’m not sure why nobody noticed the memory leaks there before me).
  • Bugs in the manually written parts of the bindings. Similarly to the above, there was one memory leak and another case where a function could’ve returned NULL but did not have this case covered on the Rust-side by returning an Option<_>.
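The second kind of fix can be sketched like this: a hypothetical wrapper around a C function that may return NULL, surfaced on the Rust side as an Option<_> instead of a raw pointer (the function name here is invented for illustration).

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Hypothetical binding helper: a NULL return from C becomes None instead
// of an invalid reference on the Rust side.
unsafe fn name_from_c(ptr: *const c_char) -> Option<String> {
    if ptr.is_null() {
        None
    } else {
        Some(CStr::from_ptr(ptr).to_string_lossy().into_owned())
    }
}
```

Callers are then forced by the compiler to handle the "no value" case, instead of discovering it as a crash at runtime.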

Generally I was quite happy with the lack of bugs though; the bindings are really ready for production at this point. And especially, all the bugs that I found are things that are unfortunately “normal” and common when writing code in C, while Rust prevents exactly these classes of bugs. As such, they have to be solved only once at the bindings layer and then you’re free of them: you don’t have to spend any brain capacity on their existence anymore and can use your brain to solve the actual task at hand.

Inconvenient API

Similar to the missing API, whenever using some rather new API you will find things that are inconvenient and could ideally be done better. The biggest case here was the GstSegment API. A segment represents a (potentially open-ended) playback range and contains all the information to convert timestamps to the different time bases used in GStreamer. I’m not going to get into details here, best check the documentation for them.

A segment can be in different formats, e.g. in time or bytes. In the C API this is handled by storing the format inside the segment, and requiring you to pass the format together with the value to every function call, and internally there are some checks then that let the function fail if there is a format mismatch. In the previous version of the Rust segment API, this was done the same, and caused lots of unwrap() calls in this element.

But in Rust we can do better, and the new API for the segment now encodes the format in the type system (i.e. there is a Segment<Time>) and only values with the correct type (e.g. ClockTime) can be passed to the corresponding functions of the segment. In addition there is a type for a generic segment (which still has all the runtime checks) and functions to “cast” between the two.

Overall this gives more type-safety (the compiler already checks that you don’t mix calculations between seconds and bytes) and makes the API usage more convenient as various error conditions just can’t happen and thus don’t have to be handled. Or like in C, are simply ignored and not handled, potentially leaving a trap that can cause hard to debug bugs at a later time.
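The idea of encoding the format in the type system can be sketched with a phantom type parameter. This is only an illustration of the technique, not the actual gstreamer-rs Segment API:

```rust
use std::marker::PhantomData;

// Marker types standing in for the segment formats.
struct Time;
struct Bytes;

// The format is part of the type, so a Segment<Time> can never be
// confused with a Segment<Bytes> at compile time.
struct Segment<F> {
    start: u64,
    stop: u64,
    _format: PhantomData<F>,
}

impl<F> Segment<F> {
    fn new(start: u64, stop: u64) -> Self {
        Segment { start, stop, _format: PhantomData }
    }

    // Clamp a value (in this segment's format) into the segment's range.
    fn clip(&self, value: u64) -> u64 {
        value.max(self.start).min(self.stop)
    }
}
```

In the real API the values themselves are typed too (e.g. ClockTime for a time segment), so mixing seconds and bytes simply fails to compile instead of failing at runtime.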

That Rust requires all errors to be handled makes it very obvious how many potential error cases the average C code out there is not handling at all, and also shows that a more expressive language than C can easily prevent many of these error cases at compile-time already.

14 December, 2017 10:41PM

Simos Xenitellis: multipass, management of virtual machines running Ubuntu

If you want to run a machine container, you would use LXD. But if you want to run a virtual machine, you would use multipass. multipass is so new that it is still in beta. The name is not yet known to Google, and you get many weird results when you search for it.

You can set up both containers and virtual machines manually without many additional tools. However, if you want to perform real work, it helps to have a system that supports you. Let’s see what multipass can do for us.

Installing the multipass snap

multipass is available as a snap package. You need a Linux distribution, and the Linux distribution has to have snap support.

Check the availability of multipass as a snap,

$ snap info multipass
name: multipass
summary: Ubuntu at your fingertips
publisher: saviq
description: |
 Multipass gives you Ubuntu VMs in seconds. Just run `multipass.ubuntu create`
 it'll do all the setup for you.
snap-id: mA11087v6dR3IEcQLgICQVjuvhUUBUKM
 stable: – 
 candidate: – 
 beta: 2017.2.2 (37) 44MB classic
 edge: 2017.2.2-4-g691449f (38) 44MB classic

There is a snap available, and it is currently in the beta channel. It is a classic snap, which means that it has fewer restrictions than a typical snap.

Therefore, install it as follows,

$ sudo snap install multipass --beta --classic
multipass (beta) 2017.2.2 from 'saviq' installed

Trying out multipass

Now what? Let’s run it.

$ multipass
Usage: /snap/multipass/37/bin/multipass [options] <command>
Create, control and connect to Ubuntu instances.

This is a command line utility for multipass, a
service that manages Ubuntu instances.

 -h, --help Display this help
 -v, --verbose Increase logging verbosity, repeat up to three times for more

Available commands:
 connect Connect to a running instance
 delete Delete instances
 exec Run a command on an instance
 find Display available images to create instances from
 help Display help about a command
 info Display information about instances
 launch Create and start an Ubuntu instance
 list List all available instances
 mount Mount a local directory in the instance
 purge Purge all deleted instances permanently
 recover Recover deleted instances
 start Start instances
 stop Stop running instances
 umount Unmount a directory from an instance
 version Show version details
Exit 1

Just like with LXD, launch should do something. Let’s try it and see what parameters it takes.

$ multipass launch
Launched: talented-pointer

Oh, no. Just like with LXD, if you do not supply a name for the container/virtual machine, one is picked for you AND the container/virtual machine is created right away. So, here we are with a virtual machine creatively named talented-pointer.

How do we get some more info about this virtual machine? What defaults were selected?

$ multipass info talented-pointer
Name: talented-pointer
Release: Ubuntu 16.04.3 LTS
Image hash: a381cee0aae4 (Ubuntu 16.04 LTS)
Load: 0.08 0.12 0.07
Disk usage: 1014M out of 2.1G
Memory usage: 37M out of 992M

The default image is Ubuntu 16.04.3, on a 2GB disk and with 1GB RAM.

How should we have created the virtual machine instead?

$ multipass launch --help
Usage: /snap/multipass/37/bin/multipass launch [options] [<remote:>]<image>
Create and start a new instance.

 -h, --help Display this help
 -v, --verbose Increase logging verbosity, repeat up to three times for
 more detail
 -c, --cpus <cpus> Number of CPUs to allocate
 -d, --disk <disk> Disk space to allocate in bytes, or with K, M, G suffix
 -m, --mem <mem> Amount of memory to allocate in bytes, or with K, M, G
 -n, --name <name> Name for the instance
 --cloud-init <file> Path to a user-data cloud-init configuration

 image Ubuntu image to start

Therefore, the default command to launch a new instance would have looked like

$ multipass launch --disk 2G --mem 1G -n talented-pointer

We still do not know how to specify the image name, whether it will be Ubuntu 16.04 or something else. saviq replied, and now we know how to get the list of available images for multipass.

$ multipass find
multipass launch … Starts an instance of Image version
14.04 Ubuntu 14.04 LTS 20171208
 (or: t, trusty)
16.04 Ubuntu 16.04 LTS 20171208
 (or: default, lts, x, xenial)
17.04 Ubuntu 17.04 20171208
 (or: z, zesty)
17.10 Ubuntu 17.10 20171213
 (or: a, artful)
daily:18.04 Ubuntu 18.04 LTS 20171213
 (or: b, bionic, devel)

multipass merges the CLI semantics of both the lxc and the snap clients :-).

That is, there are five images currently available and each has several handy aliases. And currently, the default and the lts point to Ubuntu 16.04. In spring 2018, they will point to Ubuntu 18.04 when it gets released.

Here is the list of aliases in an inverted table.

Ubuntu 14.04: 14.04, t, trusty

Ubuntu 16.04: 16.04, default, lts, x, xenial (at the end of April 2018, it will lose the default and lts aliases)

Ubuntu 17.04: 17.04, z, zesty

Ubuntu 17.10: 17.10, a, artful

Ubuntu 18.04: daily:18.04, daily:b, daily:bionic, daily:devel (at the end of April 2018, it will gain the default and lts aliases)

Therefore, if we want to launch an 8G disk/2GB RAM virtual machine named myserver with, let’s say, the current LTS Ubuntu, we would explicitly run

$ multipass launch --disk 8G --mem 2G -n myserver lts

Looking into the lifecycle of a virtual machine

When you first launch a virtual machine for a specific version of Ubuntu, multipass downloads the image from the Internet and caches it locally for any future virtual machines. This happened earlier when we launched talented-pointer. Let’s view it.

$ multipass list
Name State IPv4 Release
talented-pointer RUNNING Ubuntu 16.04 LTS

Now delete it, then purge it.

$ multipass delete talented-pointer
$ multipass list
Name State IPv4 Release
talented-pointer DELETED --
$ multipass purge
$ multipass list
No instances found.

That is, we have a second chance when we delete a virtual machine. A deleted virtual machine can be recovered with multipass recover.

Let’s create a new virtual machine and time it.

$ time multipass launch -n myVM default
Launched: myVM

Elapsed time : 0m16.942s
User mode : 0m0.008s
System mode : 0m0.016s
CPU percentage : 0.14

It took about 17 seconds to launch a virtual machine. In contrast, an LXD container takes significantly less,

$ time lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer

Elapsed time : 0m1.943s
User mode : 0m0.008s
System mode : 0m0.024s
CPU percentage : 1.64

We can stop and start a virtual machine with multipass.

$ multipass list 
Name State IPv4 Release
myVM RUNNING Ubuntu 16.04 LTS

$ multipass stop myVM
$ multipass list
Name State IPv4 Release
myVM STOPPED -- Ubuntu 16.04 LTS

$ multipass start
Name argument or --all is required
Exit 1
$ time multipass start --all
Elapsed time : 0m11.109s
User mode : 0m0.008s
System mode : 0m0.012s
CPU percentage : 0.18

We can start and stop virtual machines, and if we do not want to specify a name, we can use --all (to apply the action to all instances). Here it took 11 seconds to restart the virtual machine. The time it takes to start a virtual machine is somewhat variable; on my system it is in the tens of seconds. For LXD containers, it is about two seconds or less.

Running commands in a VM with Multipass

As we saw earlier from multipass --help, there are two relevant actions, connect and exec.

Here is connect, used to get a shell on a VM.

$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-103-generic x86_64)

* Documentation:
 * Management:
 * Support:

Get cloud support with Ubuntu Advantage Cloud Guest:

5 packages can be updated.
3 updates are security updates.

Last login: Thu Dec 14 20:19:45 2017 from
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.


Therefore, with connect, we get a shell directly into the virtual machine! Because this is a virtual machine, it booted its own Linux kernel, Linux 4.4.0, in parallel with the one I use on my Ubuntu system. There are 5 packages that can be updated, 3 of them security updates. Nowadays in Ubuntu, pending security updates are installed automatically by default thanks to the unattended-upgrades package: with the default configuration, the security updates (and only those) will be applied automatically sometime within the day.

We view the available updates: five in total, of which three are security updates.

ubuntu@myVM:~$ sudo apt update
Get:1 xenial-security InRelease [102 kB]
Hit:2 xenial InRelease 
Get:3 xenial-updates InRelease [102 kB] 
Get:4 xenial-security/main Sources [104 kB]
Get:5 xenial-backports InRelease [102 kB] 
Get:6 xenial-security/universe Sources [48.9 kB]
Get:7 xenial-security/main amd64 Packages [408 kB]
Get:8 xenial-security/main Translation-en [179 kB]
Get:9 xenial-security/universe Translation-en [98.9 kB]
Fetched 1,145 kB in 0s (1,181 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
5 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myVM:~$ apt list --upgradeable
Listing... Done
cloud-init/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
grub-legacy-ec2/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
libssl1.0.0/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]
libxml2/xenial-updates,xenial-security 2.9.3+dfsg1-1ubuntu0.5 amd64 [upgradable from: 2.9.3+dfsg1-1ubuntu0.4]
openssl/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]

Let’s update them all and get done with it.

ubuntu@myVM:~$ sudo apt upgrade
Reading package lists... Done

Can we reboot the virtual machine with the shutdown command?

ubuntu@myVM:~$ sudo shutdown -r now

$ multipass connect myVM
terminate called after throwing an instance of 'std::runtime_error'
 what(): ssh: Connection refused
Aborted (core dumped)
Exit 134

$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-104-generic x86_64)

* Documentation:
 * Management:
 * Support:

Get cloud support with Ubuntu Advantage Cloud Guest:

0 packages can be updated.
0 updates are security updates.

Last login: Thu Dec 14 20:40:10 2017 from
ubuntu@myVM:~$ exit

Yes, we can. It takes a few seconds for the virtual machine to boot again. When we try to connect too early, we get an error. We try again and get connected.

There is the exec action as well. Let’s see how it works.

$ multipass exec myVM pwd

$ multipass exec myVM id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),109(netdev),110(lxd)

We specify the VM name, then the command to run. The default user is the ubuntu user (non-root, can sudo without passwords). In contrast, with LXD the default user is root.

Let’s try something else, uname -a.

$ multipass exec myVM uname -a
Unknown option 'a'.
Exit 1

It is a common Unix CLI issue: the -a parameter is consumed by multipass itself instead of being passed along unprocessed to the command running in the virtual machine. The solution is to add -- at the point where we want option processing to stop, like in

$ multipass exec myVM -- uname -a
Linux myVM 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

If you exec commands a few times, you may encounter a case where the command hangs: it does not return, and you cannot Ctrl-C it either. It’s a bug, and the workaround is to open another shell and run multipass stop myVM followed by multipass start myVM.


It is cool to have multipass complementing LXD; both tools make it easy to create virtual machines and machine containers, respectively. There are some bugs and usability issues that can be reported at the Issues page. Overall, multipass makes running virtual machines pleasantly usable and easy.


14 December, 2017 08:44PM

SparkyLinux


Sparky Desktop 20171214

There is an update of Sparky Desktop 20171214 available in our repository.
Sparky Desktop can be used via Sparky APTus-> Install tab-> Desktop

– Removed the “--yes” option from the apt command when installing packages – apt now shows all packages to be downloaded, the total amount to download, and asks for your confirmation
– Added a separate window for the arm arch – it displays only the desktops which can be installed on the Raspberry Pi, to avoid misunderstandings
– Separated all desktops into 3 different sub-windows depending on the repository source: Debian, Sparky and Others

All desktops available via the tool are installed using sparky’s desktop meta packages.
Desktop packages from the “Debian” tab are installed from Debian repository.
Desktop packages from the “Sparky” tab are installed from Sparky repos and they are built by me (pavroo).
Desktop packages from the “Others” tab are installed using a sparky meta package, but the installation script configures a 3rd party repository, installs its public key, and installs the packages from that 3rd party repo, which are maintained and built by the project’s developers.

Hope it’s clear now, but if you have any question, place it at our forums.

Desktop Installer


14 December, 2017 07:43PM by pavroo

ZEVENET


ZVNcloud as a HA layer for critical apps in the cloud

The rise of ubiquitous applications, the Internet of Things and broadcasting over the Internet creates a need for reliability in critical applications and services in the cloud. For that reason, the Zevenet Team has designed a WAN solution to provide high availability for critical applications, taking advantage of cloud computing and ubiquitous networking. What is ZVNcloud? ZVNcloud...


14 December, 2017 05:40PM by Zevenet


Ubuntu developers

Ubuntu Insights: UbuCon Europe: 27th, 28th, 29th of April 2018 in Gijón/Xixón, Spain

The next Ubucon Europe will take place at the Antiguo Instituto Jovellanos in Gijón/Xixón (Asturias, Spain), the 27th, 28th & 29th of April 2018. The Antiguo Instituto was built in 1797 as a centre for scientific and technical education. Currently, it is a public space maintained and managed by the City Council and it has become a reference place for popularisation of Culture, Science and Technology.


The conference will be held in 3 parallel sessions, taking place in two conference rooms and one workshop room. At the same time, several stands will be placed in the hall of the building.

During the Ubucon Europe 2018 in Gijón/Xixón, there will also be time for social events, bringing users and developers from all around Europe closer together. We will discover and enjoy the ancient Asturian culture, food specialities and drinks. There will be several Ubuntu hours, touristic visits, music… and we will even have the opportunity to experience a typical “espicha” (an Asturian cider tasting). We would also like to start a new concept: the Ubuntu Shared World Food. Each participant will bring some food or drinks from their own country, sharing a piece of their own culture with the Ubuntu Community.

More information in the official website:

You are most welcome to join us!
– On behalf of Javier Teruelo, Francisco Molinero, Sergi Quiles, Antonio Fernandes, Paul Hodgetts, Santiago Moreira, Joan CiberSheep, Fernando Laner, Manu Cogolludo & Marcos Costales.

14 December, 2017 05:21PM

Ubuntu Insights: Canonical shows EdgeX on ARM

Since March, I have been assigned to the Linaro organisation to carry on the work with the LITE (Linaro IoT and Embedded) group, researching the design and development of software for ARM-based gateway devices. One of my main focus points has been to investigate how complex it would be to run EdgeX on ARM.

EdgeX is an IoT framework developed and recently open sourced by Dell. The goal of this Linux Foundation-backed project is to provide a common open framework for IoT edge computing. It is designed to be agnostic to the hardware and the underlying OS, yet provide flexibility and standardization, resulting in lower development costs and faster time to market. Canonical has been involved in EdgeX development from the very beginning. Currently, Tony Espy of Canonical holds a committee chair position in the EdgeX Device and Device SDK working group.

I was interviewed on video during the Linaro Connect San Francisco event in September 2017. In it, I talk about the results of my R&D work on running EdgeX on ARM, showcasing a cross-host setup where the EdgeX core runs on an Ubuntu Core based Dell Edge gateway next to an EdgeX Device service running on ARM (an Ubuntu based Raspberry Pi). At the time of the conference, to the best of my knowledge, it was the first time such a solution had been presented. It proves that EdgeX, together with Ubuntu, can successfully couple one high-end master gateway with multiple satellite gateways that, for cost reduction, could use lower-end hardware.

The work presented here will continue, with EdgeX now under active development and ongoing efforts to provide a Go-based SDK. At the same time, Canonical is putting its time into integrating EdgeX with snaps so that it can be easily managed on edge devices.


  1. EdgeX Foundry
  2. Linaro
  3. Ubuntu Core

14 December, 2017 03:59PM

Ubuntu Podcast from the UK LoCo: S10E41 – Round Glorious Canvas - Ubuntu Podcast

This week we’ve taken a stroll around a parallel universe and watched some YouTube. Patreon updates its fee structure and then realises it was a terrible idea, Mozilla releases a speech-to-text engine, Oumuamua gets probed and Microsoft releases the Q# quantum programming language.

It’s Season Ten Episode Forty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

14 December, 2017 03:00PM

Ubuntu Insights: Security Team Weekly Summary: December 14, 2017

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at:

During the last week, the Ubuntu Security team:

  • Triaged 213 public security vulnerability reports, retaining the 65 that applied to Ubuntu.
  • Published 15 Ubuntu Security Notices which fixed 16 security issues (CVEs) across 15 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests


  • review tools testsuite updates for resquashfs
  • write-up processes for reviewing base snaps
  • send up PR 4375 and PR 4375 (2.30) to add an app/hook-specific udev rule for hotplugging (fixes mir hotplug issue)
  • debug chromium mknod issue with nvidia GPUs
  • send up PR 4359 and PR 4360 (2.30) policy updates PRs
  • add missing rule to upstream AppArmor fonts abstraction

  • pickup PR 4100 and send up PR 4383 (2.30) for new ssh/gpg keys interfaces
  • send up PR 4366 and PR 4367 (2.30) for small removable-media fix
  • update review-tools for 2.30 interfaces
  • discuss options for possible biometrics interface
  • snapd reviews
    • PR 4365 – allow wayland socket and non-root sockets/wayland slot policy
    • PR 4140 – add an interface for gnome-online-accounts D-Bus service
    • PR 4369 – add write permission to optical-drive interface

What the Security Team is Reading This Week

Weekly Meeting

More Info

14 December, 2017 02:14PM


Univention Corporate Server

Third Party App Charts in December

After the last charts for 3rd party apps in the App Center, which we published at the beginning of October, it’s now time for a short update.

Current members of the Top Ten Third Party Apps of the Univention App Center are:

  • ownCloud
  • Kopano
  • Nextcloud
  • OX App Suite
  • Let’s Encrypt
  • Bareos Backup Server
  • Horde Groupware Webmail Edition
  • opsi – Client Management
  • ONLYOFFICE Document Server

Since the last charts there has been some movement in the Top 10. Kopano and ownCloud are still at the top, running a head-to-head race with ownCloud at the very top. In the next two places, Nextcloud has passed Open-Xchange. The first four apps underline the fact that groupware and file share & sync continue to be the most important areas for UCS users.

The two newcomers Let’s Encrypt and ONLYOFFICE Document Server, which joined the App Center in September and made it into the Top 10, are interesting. Let’s Encrypt issues free, automated certificates and allows everyone to get valid certificates for the Internet. At rank 5, it clearly leads the IT infrastructure solutions.

ONLYOFFICE is a web-based document, spreadsheet and presentation processing solution. An interesting article about ONLYOFFICE with details about their story, key technologies in use and benefits it offers to users can be found in the article An Introduction to ONLYOFFICE Document Editors.

Further solutions that are most important to UCS users, according to the charts, belong to the areas of backup, VPN, and central administration of Windows and Linux clients.

No longer among the Top Ten are OpenProject and Collabora which have been pushed down to lower places.

The post Third Party App Charts in December appeared first on Univention.

14 December, 2017 12:47PM by Nico Gulden

An Introduction to ONLYOFFICE Document Editors

After our first article, ONLYOFFICE joins the Univention family, this time we are going to give you a closer look at ONLYOFFICE: how we created our editors, what was chosen as the technological basis, and how it benefits businesses and individual users in creating their everyday documents. But first, a little bit of history.

When did it all begin?

The launch of the project dates back to 2009, when a group of ambitious developers led by Lev Bannov created a platform for office collaboration called TeamLab. The product was then merged with document editing software, and in 2012 the suite was presented at CeBIT. Two years later, the project opened its source code and was rebranded as ONLYOFFICE. Since then, the team has concentrated on document editing as the very center of the solution, looking for ways to complement the market with a highly compatible and rich editing instrument.

The key technology behind ONLYOFFICE editors

ONLYOFFICE Document Server incorporates a 3-in-1 online office suite with text document, spreadsheet and presentation editors. The HTML5 Canvas element, on which the suite is built, lets ONLYOFFICE render every visible element precisely and ensures that a document looks identical no matter where it is opened or printed. Office Open XML formats (.DOCX, .XLSX, .PPTX) serve as the core of the editors. This allows clean processing of most existing documents, while other formats go through internal conversion.

What it brings to our users

This technological concept ensures the wide applicability of the ONLYOFFICE editors, making them usable in diverse working environments and occupations. People deal with an intense flow of documents in their professional lives: the software they use must process those documents faultlessly, preserving the original formatting and objects.
Further editing of document elements is another vital part of ‘distant’ collaboration, when one needs to work with information coming from outside sources, for example to quickly extract data from a graph or to correct autoshape-based schemes.
However, co-editing ‘in-house’ is paramount too, and the more fluent it is, the more resources a business can save. ONLYOFFICE developed an architecture that ensures a lightweight, real-time connection between participants and minimizes server load.

Screenshot of the ONLYOFFICE co-editing feature

For whom did we create ONLYOFFICE?

Focusing on format compatibility, richness of editing capabilities and collaborative features makes ONLYOFFICE a universal product that suits both individuals and teams and provides tools for any office work: from routine reports, calculations and presentations to professional writing and bookkeeping. The free and open-source version of ONLYOFFICE is aimed at small and medium teams and allows collaboration with up to 20 simultaneous connections. Bigger capabilities are provided in the Integration Edition, a more business-oriented package that is customizable in the number of users and offers a number of additional editing features along with an enhanced editor interface.

Screenshot of the ONLYOFFICE editors

Our role in the world of office documents

We created a product that has become a reliable platform for handling documents and boosting collaboration in every company. Our principle is to always give our users the freedom to choose how and where to use ONLYOFFICE. This is why our solutions are scalable and implementable in various environments.

Within Univention, ONLYOFFICE connects with Nextcloud or ownCloud for data storage and sharing, a combination that has already proved to be a perfect symbiosis outside Univention. With UCS this combination becomes easier to install and configure, and user management is already provided by UCS’ powerful identity management. If you are new to us, try ONLYOFFICE in a UCS environment now to see for yourself how it works!


P.S. ONLYOFFICE is headed to the Univention Summit in Bremen in February 2018. This is a great opportunity to meet us in person and learn a bit more about our products.

The post An Introduction to ONLYOFFICE Document Editors appeared first on Univention.

14 December, 2017 12:44PM by Maren Abatielos

Tails


Tails report for November, 2017



  • We fixed issues regarding reproducible builds (#14924, #14946, #14933) and later realized that one of these fixes did not work in some corner cases… which include the ISO images we build for official Tails releases. Sadly, due to an internal communication mishap, we announced that Tails 3.3 was reproducible before we had learned about this remaining problem.

  • We uploaded a new version of tails-installer (5.0.2) to Debian and Ubuntu. This version has a simpler interface than previous versions.

Documentation and website

  • We have documented internally how active Tails contributors can be sponsored to attend events on behalf of Tails and are now working towards publishing this documentation so that all contributors are aware of this option (#14727).

User experience

  • We almost finished the work on the new download page and verification extension for Firefox and Chrome. We are currently blocked on security reviews and improvements to the JavaScript code.

  • Our survey on file storage encryption was answered 1012 times between October 17 and December 1. It was a huge success and we'll now move on to analyzing the results.

Hot topics on our help desk

  1. #14968: uBlock is not working in Tails 3.3

  2. #12328: Tails Firefox Add-on is not working in last Firefox



  • We submitted a funding request for the Secure Operating Systems Summit that we are organizing with Qubes OS, Subgraph OS and Whonix.

  • We applied to the "Good of the Internet" call for proposals by RIPE NCC. Our proposal is titled "Interoperability and communication continuity between mobile, laptop and desktop computers, in privacy and security-sensitive environments".

  • We continued to run our donation campaign.


Past events

  • Some of us attended the Reproducible Builds World summit in Berlin, Germany (report).

  • intrigeri attended the OTF summit in Valencia, then followed-up with people he has met there.

  • ignifugo gave a talk about Tails at a Greek hackerspace.

Upcoming events

On-going discussions

Press and testimonials


All the website

  • de: 54% (2902) strings translated, 7% strings fuzzy, 47% words translated
  • fa: 39% (2103) strings translated, 9% strings fuzzy, 41% words translated
  • fr: 89% (4742) strings translated, 0% strings fuzzy, 84% words translated
  • it: 38% (2022) strings translated, 5% strings fuzzy, 33% words translated
  • pt: 24% (1292) strings translated, 9% strings fuzzy, 20% words translated

Total original words: 57402

Core pages of the website

  • de: 80% (1506) strings translated, 12% strings fuzzy, 79% words translated
  • fa: 34% (647) strings translated, 11% strings fuzzy, 34% words translated
  • fr: 99% (1864) strings translated, 0% strings fuzzy, 99% words translated
  • it: 76% (1444) strings translated, 13% strings fuzzy, 77% words translated
  • pt: 45% (851) strings translated, 16% strings fuzzy, 46% words translated

Total original words: 17062


  • Tails has been started more than 655,776 times this month. This makes 21,859 boots a day on average.
  • 12,371 downloads of the OpenPGP signature of Tails ISO from our website.
  • 99 bug reports were received through WhisperBack.

14 December, 2017 10:56AM

December 13, 2017

SparkyLinux


APTus 0.3.14

There is an update of Sparky APTus 0.3.14 available in our repository.

1. removed the “--yes” option of the apt command from the installation of the Debian, Liquorix and Sparky kernels:
– it now displays the number of packages to be downloaded and asks for your confirmation
2. rewrote the ‘old-kernel-remover’ script (honestly, wrote a brand new one):
– uses Yad now instead of Zenity
– shows all jobs in a terminal emulator window
– lets you run autoremove afterwards
– removes the old initrd.img of a just-removed kernel
– displays an info window when finished
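
The core idea of such a remover can be sketched in a few lines of shell. This is not the APTus script itself, just a hedged illustration that lists Debian-style kernel packages other than the running one (nothing is removed here):

```shell
# List installed kernel image packages that do not match the running kernel.
# Debian-style "linux-image-<version>" package names are assumed.
current=$(uname -r)
dpkg -l 2>/dev/null | awk '/^ii +linux-image-[0-9]/ {print $2}' \
  | grep -v "$current" || echo "no old kernels found"
```

A real remover would feed this list to apt's autoremove step and clean up the matching initrd.img files afterwards.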

It has already been heavily tested, but please let me know if you find something wrong.



13 December, 2017 10:40PM by pavroo

Ubuntu developers

Ubuntu Insights: FIPS 140-2 Certified Modules for Ubuntu 16.04 LTS

We are pleased to announce that officially certified FIPS 140-2 level 1 cryptographic packages are now available for Ubuntu 16.04 LTS for Ubuntu Advantage Advanced customers and as a separate, stand-alone product.

In 2016 Canonical began the process of completing the Cryptographic Module Validation Program to obtain FIPS 140-2 validation for Ubuntu 16.04 LTS. This has been successfully completed and Canonical now offers key components of Ubuntu 16.04 LTS compliant with the FIPS 140-2 level 1 standard. The FIPS compliant modules are available to Ubuntu Advantage Advanced subscribers in the Ubuntu Advantage private archive.

We currently use Ubuntu Linux because of its superior development environment and frequent LTS releases. As a business that develops software, one of our customer’s requirements is to utilize FIPS 140-2 validated software. We have been able to start rolling out the Ubuntu FIPS modules without needing to reinstall the operating system. This keeps our developers happy and productive as Ubuntu is their preferred environment and minimizes transition cost. The FIPS modules also include a VPN solution which we look forward to implementing to allow our developers to work remotely but still meet our customer’s requirements.

-Alex Stuart, North Point Defense


Users interested in FIPS 140-2 compliant modules on Ubuntu 16.04 can purchase Ubuntu Advantage by contacting the Canonical Sales Team.

For further information please visit



What is FIPS?

FIPS stands for Federal Information Processing Standards, a set of publications developed and maintained by the National Institute of Standards and Technology (NIST), a United States federal agency. These publications define the security criteria required for government computers and telecommunication systems.

What is the FIPS 140-2 standard?

According to NIST, FIPS 140-2 “specifies the security requirements that will be satisfied by a cryptographic module used within a security system protecting sensitive but unclassified information.”

Why should I use the FIPS 140-2 modules?

Government, defence, healthcare, and finance organizations worldwide operate in highly regulated industries and are required to meet the security requirements defined in the FIPS 140-2 standard. This includes the United States, Canadian, and United Kingdom governments as well as government contractors.

Where can I find out more about FIPS?

General information about the Federal Information Processing Standards can be found on the NIST website. More detailed information about FIPS 140-2 itself can be found in the Federal Information Processing Standards Publication 140-2 document.

Which modules are included?

What versions of Ubuntu have FIPS certified modules?

Currently only Ubuntu 16.04 LTS has FIPS certified modules.

How can I find out more?

Click here to make an inquiry, and somebody from our team will get back to you!

13 December, 2017 02:00PM

Chris Glass: Magic URLs in the Ubuntu ecosystem

Because of the distributed nature of Ubuntu development, it is sometimes a little difficult for me to keep track of the "special" URLs for various actions or reports that I'm regularly interested in.

Therefore I started gathering them in my personal wiki (I use the excellent "zim" desktop wiki), and realized some of my colleagues and friends would be interested in that list as well. I'll do my best to keep this blog post up-to-date as I discover new ones.

A magic book

If you know of other candidates for this list, please don't hesitate to get in touch!

Behold, tribaal's "secret URL" list!

Pending SRUs

Once a package has been uploaded to a -proposed pocket, it needs to be verified as per the SRU process. Packages pending verification end up in this list.

Sponsorship queue

People who don't have upload rights for the package they fixed need to request sponsorship. This queue is the place to check if you're waiting for someone to pick it up and upload it.

Upload queue

A log of what got uploaded (and to which pocket) for a particular release, and also a queue of packages that have been uploaded and are now waiting for review before entering the archive.

For the active development release this queue holds brand-new packages; for frozen releases these are SRU packages. Once approved at this step, the packages enter -proposed.

The launchpad build farm

A list of all the builders Launchpad currently has, broken down by architecture. You can watch jobs being built in real time, as well as the occupancy level of the whole build farm.

Proposed migration excuses

For the currently in-development Ubuntu release, packages are first uploaded to -proposed, then a set of conditions needs to be met before they can be promoted to the release pocket. The list of packages that have failed this automatic migration, and the reasons why, can be found on this page.


Not really a "magic" URL, but this system gathers information and lists for the automatic merging system, which merges Debian packages into the development release of Ubuntu.

Transitions tracker

This page tracks transitions, which are toolchain changes or other package updates with "lots" of dependencies. This tracks the dependencies build status.

13 December, 2017 09:05AM

Rhonda D'Vine: #metoo

I long thought about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But the door to the screening room stood open. And curious as we were, we looked through it. The projectionist saw us and waved us in. It was exciting to see a movie from that perspective, one that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in and showed us how to see the signals on the screen which are the sign to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.

/personal | permanent link | Comments: 2 | Flattr this

13 December, 2017 08:48AM

December 12, 2017

Ubuntu Insights: Ubuntu Server Development Summary – 12 Dec 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Ubuntu Bionic: Netplan

Josh on the Canonical Server team took a look at Netplan on Ubuntu Bionic. He shows some initial use cases and provides examples of some configurations.
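
For readers new to Netplan, configuration lives in small YAML files under /etc/netplan/. A minimal sketch (the file name and the eth0 interface are assumptions; match them to your system):

```yaml
# Illustrative file: /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
```

A configuration like this is applied with sudo netplan apply.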


  • Added ‘status’ subcommand to report whether cloud-init is ‘running’, ‘done’ or ‘error’. Also added a tool for scripts to block on cloud-init completion with cloud-init status --wait
  • Added ‘clean’ subcommand as a developer tool to easily remove cloud-init artifacts and re-run cloud-init on reboot.
  • Cloud-init datasources now store standardized instance metadata in /run/cloud-init/instance-data.json which can be referenced by scripts to get instance-related variables such as region, availability-zone, instance-id and more.
  • Update pylint to 1.7.4 and run on tests and tools dirs
  • EC2 uses instance-identity doc from metadata to obtain instance-id and region [Andrew Jorgensen]
  • SUSE: remove delta in systemd local template for SUSE [Robert Schweikert]
  • VMware: Support for user provided pre and post-customization scripts [Maitreyee Saikia]
  • Fix ds-identify warning on VMWare platform by correctly identifying the OVF datasource. ds-identify identifies OVF when an iso9660 filesystem exists on cdrom containing ovf.env content (LP: #1731868)
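
The status and instance-data features above are aimed at scripts. Since cloud-init only runs on a booted instance, the sketch below fabricates a stand-in for /run/cloud-init/instance-data.json (the key names follow the post; the values are invented) to show the lookup:

```shell
# On a real instance a script would first block on completion:
#   cloud-init status --wait
# and then read /run/cloud-init/instance-data.json. Here we fabricate a
# minimal stand-in so the lookup can be tried anywhere.
cat > instance-data.json <<'EOF'
{"v1": {"region": "us-east-1", "availability-zone": "us-east-1a",
        "instance-id": "i-0123456789abcdef0"}}
EOF
python3 -c 'import json; print(json.load(open("instance-data.json"))["v1"]["region"])'
```

The last command prints us-east-1, the region value from the stand-in file.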

Bug Work and Triage

Contact the Ubuntu Server team

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release

apache2, 2.4.29-1ubuntu2, paelzer
asterisk, 1:13.18.3~dfsg-1ubuntu2, doko
asterisk, 1:13.18.3~dfsg-1ubuntu1, costamagnagianfranco
cloud-init, 17.1-58-g703241a3-0ubuntu1, smoser
cloud-init, 17.1-53-ga5dc0f42-0ubuntu1, smoser
cloud-init, 17.1-51-g05b2308a-0ubuntu1, smoser
curtin, 17.0~bzr552-0ubuntu1, smoser
iproute2, 4.14.1-0ubuntu2, paelzer
iproute2, 4.14.1-0ubuntu1, paelzer
python-ldappool, 2.1.0-0ubuntu1, corey.bryant
samba, 2:4.7.3+dfsg-1ubuntu1, mdeslaur
sosreport, 3.5-1ubuntu1, sil2100
sysstat, 11.6.0-1ubuntu2, paelzer
uvtool, 0~git136-0ubuntu1, racb
Total: 14

Uploads to the Supported Releases

iproute2, artful, 4.9.0-1ubuntu2.1, paelzer
iproute2, zesty, 4.9.0-1ubuntu1.1, paelzer
iproute2, xenial, 4.3.0-1ubuntu3.16.04.3, paelzer
iproute2, trusty, 3.12.0-2ubuntu1.2, paelzer
iproute2, trusty, 3.12.0-2ubuntu1.1, nacc
iscsitarget, trusty,, cascardo
lxd, xenial, 2.0.11-0ubuntu1~16.04.4, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.4, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.3, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.2, stgraber
qemu, artful, 1:2.10+dfsg-0ubuntu3.1, paelzer
rsync, artful, 3.1.2-2ubuntu0.1, leosilvab
rsync, zesty, 3.1.2-1ubuntu0.1, leosilvab
rsync, xenial, 3.1.1-3ubuntu1.1, leosilvab
rsync, trusty, 3.1.0-2ubuntu0.3, leosilvab
sysstat, xenial, 11.2.0-1ubuntu0.2, slashd
sysstat, zesty, 11.4.3-1ubuntu1, slashd
sysstat, artful, 11.5.7-1ubuntu2, slashd
Total: 18

12 December, 2017 08:02PM

Cumulus Linux

Top benefits of using Linux in the modern enterprise

Let’s be honest. There are many enterprise data centers (and data center admins) who aren’t crazy about Linux. But most of that opposition comes from simply not understanding the benefits of Linux and not experiencing Linux hands-on. Fortunately, we’ve got a comprehensive guide to everything Linux that you can use to get familiar with the basics. Once you start testing out Linux for yourself and getting comfortable with it, I think you’ll find that Linux is the best operating system available today.

So what are the benefits, in general, of using Linux? Some of these benefits include:

  • Consistent operating model. No matter what version or distribution of Linux you use, whether you’re on a supercomputer or a tiny embedded device, the general operation of Linux is the same no matter where you go. What this means is that, with some exceptions, the command line syntax is similar, process management is similar, basic network administration is similar, and applications can be (relatively) easily ported between distributions. The end result of this consistent operating model is a cost savings generated by greater staff efficiency and flexibility.
  • Scalability. Linux is eminently scalable and is able to run on everything from wristwatches to supercomputers to globally distributed computing clusters. Of course, the benefit of this scalability isn’t just the device mix, but also that its basic functionality — command line tools, configuration, automation, and code-compatibility — remains the same no matter where you’re using it.
  • Open-source and community optimized. With Linux’s open-source, freely available nature, you might be concerned about future enhancements, bug fixes, and support. Fortunately, you can put those worries aside. If you look at the Linux kernel alone, with its 22 million lines of code, you’ll find a strong community developing it behind the scenes. In 2016, one report said that over 5,000 individual developers representing 500 different corporations around the world contributed to enhancements in the Linux kernel, not to mention all the other surrounding applications and services. A staggering 13,500 developers from more than 1,300 companies have contributed to the Linux kernel since 2005. You might wonder why commercial entities contribute code to Linux. While many open-source advocates see the open-source nature of Linux as purely idealistic, commercial contribution of code is actually a strategic activity. In this sense, the for-profit companies who are dependent on Linux contribute their changes to the core to ensure that those changes carry forward into future distributions without having to maintain them indefinitely.
  • Full function networking. Over the years, Linux has built up a strong set of networking capabilities, including networking tools for providing and managing routing, bridging, DNS, DHCP, network troubleshooting, virtual networking and network monitoring.
  • Package management. The Linux package management system allows you to easily install new services and applications with just a few simple commands.
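
As a small illustration of that last point, installing a service on a Debian-family system is a one-liner. The sketch below uses --simulate so nothing is actually changed, and guards for systems without apt; the package name nginx is just an example:

```shell
# Show what installing a service would do, without touching the system.
# The guard lets the snippet run (and explain itself) on non-apt systems too.
if command -v apt-get >/dev/null 2>&1; then
  apt-get --simulate install nginx || true
else
  echo "apt-get not available on this system"
fi
```

Dropping --simulate (and adding sudo) performs the real installation; other package managers such as dnf or zypper follow the same pattern.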

If you think these benefits are great, then you’re going to be really excited about the capabilities of Linux in the enterprise data center.

Top 5 uses for Linux in the data center

Many modern ideas in data center computing have Linux underpinnings. Here are just a few examples:

  • Automation and orchestration. Automation is used to perform a common task in a programmatic/scripted way, whereas orchestration is used to automate tasks across multiple systems in a data center. Linux is being used to automate and orchestrate just about every process in the enterprise data center.
  • Server virtualization. Server virtualization is the ability to run a full operating system on top of an existing bare metal server. These virtual machines (VMs) can be used to increase server utilization, simplify server testing, or lower the cost of server redundancy. The software that allows VMs to function is called a hypervisor. Linux includes an excellent hypervisor called KVM.
  • Private cloud. Another open-source project called OpenStack, which also runs on Linux, has become a leading cloud management platform for creating a private cloud. With private cloud, companies can leverage many of the same advantages of public cloud (scalability, self-service, multi-tenancy, and more) while running their own IT infrastructure on-premises.
  • Big data. More and more companies are having to deal with exponentially increasing amounts of data in their data center, and because Linux offers such scalability and performance, it has become the go-to operating system for crunching big data via applications like Hadoop. Even Microsoft recently announced a big data solution based on Linux.
  • Containers. Linux can also be used to run containerized applications, such as Docker containers, which are being used more and more by many companies. In fact, Linux is the foundation of the modern container movement; all container packaging and orchestration relies on Linux namespace and isolation mechanisms in order to operate.
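
That namespace foundation is easy to see on any Linux box: every process carries a set of namespace handles under /proc, and a container runtime simply creates fresh ones per container. A quick look:

```shell
# List this process's namespace memberships (pid, mnt, net, uts, ipc, ...).
ls -l /proc/self/ns/
# Each handle is an identifier such as pid:[...]; processes inside the same
# container share the same identifiers, while the host sees different ones.
readlink /proc/self/ns/pid
```

Tools like unshare(1) and nsenter(1) create and enter such namespaces directly, which is essentially what Docker does under the hood.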

Want to learn more about Linux in the data center? Download the full ebook titled The Gorilla Guide to… Linux Networking 101 or visit our learn section to learn more!

The post Top benefits of using Linux in the modern enterprise appeared first on Cumulus Networks Blog.

12 December, 2017 07:17PM by Kelsey Havens

Emmabuntüs Debian Edition

Emmabuntüs enters Paris Descartes university

It was within the framework of an academic course on « Computer Tools » that, on Wednesday, December 6th, Hervé, a member of our collective, gave a presentation on Emmabuntüs. This course is led by Antoine Caumel and is intended for students following the Master 1 degree « Contemporary Societies: ethical, political and social challenges » of [...]

12 December, 2017 04:55PM by yves

SparkyLinux


Sparky 4.7 armhf

Sparky 4.7 armhf for RaspberryPi is out now.
Sparky of the 4.x line is based on the stable branch of Debian 9 “Stretch”.

This release is available in two versions:
• Openbox – with a small set of applications
• CLI – text based

• fixed all known issues
• changed ‘volumeicon’ to ‘pnmixer’ for better compatibility with the default audio server; it lets you scroll the volume level up and down via the panel icon
• changed the ‘pcmanfm’ file manager to ‘thunar’
• changed the ‘uget’ download manager to the text-based ‘aria2’
• inserted external volumes are now mounted automatically
• web browser changed to ‘netsurf’; it’s a very, very lightweight browser which did not freeze during my heavy tests; install your favorite web browsers via the package manager or Sparky’s web browser installer tool, if you wish
• the default music player is now ‘alsaplayer’, which lets you listen to music from hard, USB and CD drives
• no changes to the movie player: ‘gnome-mplayer’ as a GUI for ‘mplayer’, with ‘mencoder’ pre-installed, lets you open the most popular audio and video files

The user name is: pi ; password: sparky
root password is: toor

Some helpful tips are placed on our Wiki pages.

Get the latest Sparky images from the download/stable page.


12 December, 2017 02:21PM by pavroo

Ubuntu developers

Sean Davis: Development Release: Xfce PulseAudio Plugin 0.3.4

With each new release, the Xfce PulseAudio Plugin becomes more refined and better suited for Xfce users. The latest release adds support for the MPRIS Playlists specification and improves support for Spotify and other media players.

What’s New?

New Feature: MPRIS Playlists Support

  • This is a basic implementation of the MediaPlayer2.Playlists specification.
  • The 5 most recently played playlists are displayed (if supported by the player). Admittedly, I have not found a player that seems to implement the ordering portion of this specification.

New Feature: Experimental libwnck Support

  • libwnck is a window management library. This feature adds the “Raise” method for media players that do not support it, allowing the user to display the application window after clicking the menu item in the plugin.
  • Spotify for Linux is the only media player that I have found which does not implement this method. Since this is the media player I use most of the time, this was an important issue for me to resolve.
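
For the curious, the “Raise” call is an ordinary MPRIS D-Bus method and can be sent by hand. This sketch assumes Spotify's well-known bus name (other players register as org.mpris.MediaPlayer2.&lt;name&gt;) and skips cleanly when no session bus is available:

```shell
# The bus name below assumes Spotify; substitute your player's name.
# The guard skips the call when no session bus or dbus-send binary exists,
# so the snippet is safe to run on headless systems.
PLAYER=org.mpris.MediaPlayer2.spotify
if [ -n "${DBUS_SESSION_BUS_ADDRESS:-}" ] && command -v dbus-send >/dev/null 2>&1; then
  dbus-send --print-reply --session --dest="$PLAYER" \
    /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Raise || true
else
  echo "no D-Bus session available; skipping"
fi
```

When the player implements Raise, its window is brought to the foreground; the plugin's libwnck fallback covers players that do not.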


  • Unexpected error messages sent via DBus are now handled gracefully. The previous release of Pithos (1.1.2) displayed a Python error during DBus queries, crashing the plugin.
  • Numerous memory leaks were patched.

Translation Updates

Chinese (Taiwan), Croatian, Czech, Danish, Dutch, French, German, Hebrew, Japanese, Korean, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Thai


The latest version of Xfce PulseAudio Plugin can always be downloaded from the Xfce archives. Grab version 0.3.4 from the below link.

  • SHA-256: 43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220
  • SHA-1: 171f49ef0ffd1e4a65ba0a08f656c265a3d19108
  • MD5: 05633b8776dd3dcd4cda8580613644c3
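
To verify a download against one of the sums above, sha256sum -c is enough. The commands below use a locally created stand-in file so they can be tried anywhere; for a real check, put the published hash and the downloaded tarball's name in the checksum file instead:

```shell
# Create a stand-in file and a checksum list for it. In a real verification
# the SHA256SUMS line would hold the published hash from the announcement.
printf 'demo contents\n' > xfce4-pulseaudio-plugin-0.3.4.tar.bz2
sha256sum xfce4-pulseaudio-plugin-0.3.4.tar.bz2 > SHA256SUMS
sha256sum -c SHA256SUMS   # reports "OK" per file when the contents match
```

sha1sum and md5sum work the same way for the other two digests, though SHA-256 is the one worth trusting.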

12 December, 2017 12:12PM

Ubuntu Insights: Canonical helps DeNA lower operational cost of always-on service

DeNA is one of the most popular mobile and online platforms in Japan, offering games, e-commerce, entertainment, healthcare, and automotive services. The always-on DeNA infrastructure is powered by Ubuntu.

When Canonical released Livepatch in October 2016, with the ability to patch servers without downtime, DeNA saw an opportunity to reduce its operational costs. Traditionally, the process of upgrading and patching an OS can be long and complex, including workload migrations and manual interventions. Moving to an automated way to perform upgrades and security patches of its Ubuntu servers without downtime meant that DeNA could eliminate the disruption of these OS upgrades.

The Canonical Kernel Livepatch Service enables runtime correction of critical security vulnerabilities in the kernel without the need to reboot. It is the best way to ensure that machines are safe at the kernel level, while guaranteeing uptime, especially for container hosts where a single machine may be running thousands of different workloads.

DeNA now has hundreds of nodes in operation using Livepatch and has found the experience to be outstanding. As Mr. Masaaki Hirose (IT Platform Dept.) says, “Our users and business partners expect nothing less from DeNA than complete availability, reliability and security. Livepatch is like a dream come true, both from a technical and a business standpoint. Our Ubuntu systems now rarely or never have to be rebooted. Service is continuous. That makes a big difference for user and customer satisfaction and loyalty.”

About DeNA
DeNA (pronounced “D-N-A”) develops and operates a broad range of mobile and online services including games, e-commerce, entertainment, healthcare, automotive and other diversified offerings. Founded in 1999, DeNA is headquartered in Tokyo with over 2,000 employees. DeNA Co., Ltd. is listed on the Tokyo Stock Exchange (2432). For more information, visit:

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.
For more information on the Canonical Livepatch service, please click here. You can also contact the Canonical sales team in Japanese or in English.


12 December, 2017 09:12AM

Ubuntu Insights: LXD Weekly Status #27


This past week was incredibly busy and featureful for both LXC and LXD.

We landed InfiniBand support in LXD, alongside new configuration keys to control the presence of /dev/lxd in the container and support for pre-migration of memory during container live migration. That's on top of a variety of bugfixes and other still-ongoing feature work.

On the LXC side of things, we’ve added a new reboot2 function to our API, making it possible to block on container restarts, added a new lxc.init.cwd configuration key to control the working directory of the container’s init process, added a new lxc.sysctl set of configuration keys, all while also fixing numerous new issues reported by Coverity Scan and a number of other bugfixes and refactoring.
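
For illustration, the two new configuration keys might appear in a container config like this (the path and sysctl value are invented for the example):

```
# Fragment of an LXC container configuration (illustrative values)
lxc.init.cwd = /srv/myapp                 # working directory of the container's init
lxc.sysctl.net.ipv4.ip_forward = 1        # sysctl applied inside the container
```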

We’d like to give a shout-out to Adrian Reber from Red Hat for the work on memory pre-migration in LXC and LXD, as well as to the students of the University of Texas at Austin for contributing the /dev/lxd work in LXD and a number of refactorings of the LXC tools. It’s always great to get new contributors to those projects!

We’re now slowly preparing for LXD 2.21 due next Tuesday, hopefully getting a couple more features in there and fixing any last minute issues.

We also expect LXD 2.0.11 to be pushed to Ubuntu 14.04 later this week as LXD 2.0.11 in Ubuntu 16.04 seems to be doing very well.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Released LXD 2.0.11 to all Ubuntu 16.04 users.
  • Fixed a few issues with lxd init in LXD 2.0.11.


  • Fixed lack of /etc/mtab in the snap environment (as needed for resize on LVM).
  • Cherry-picked a number of bugfixes into the stable LXD snap.

12 December, 2017 12:23AM

Qubes


XSA-248 through XSA-251 do not affect the security of Qubes OS

The Xen Project has published Xen Security Advisories 248 through 251 (XSA-248 through XSA-251). These XSAs do not affect the security of Qubes OS, and no user action is necessary.

These XSAs have been added to the XSA Tracker:

12 December, 2017 12:00AM

December 11, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: The Silph Road embraces cloud and containers with Canonical

The Silph Road is the premier grassroots network for Pokémon GO players around the world, offering research, tools, and resources to the largest Pokémon GO community worldwide, with up to 400,000 visitors per day.

Operating a volunteer-run community network with up to 400,000 daily visitors is no easy task, especially in the face of massive and unpredictable demand spikes and with volunteer developers located all over the world. The Silph Road’s operations must be cost-effective, flexible, and scalable.

This led the Pokémon GO network first to cloud, and then to containers, and in both cases Canonical’s technology was the answer.


  • Containerisation with Canonical’s Distribution of Kubernetes helped reduce cloud build by 40%
  • Autoscaling makes coping with spikes in user demand easy and cost-effective
  • Juju enables The Silph Road to migrate between public clouds with less than 2 minutes of downtime

For more information and to view the case study, please visit the Silph Road Case Study.

11 December, 2017 02:01PM


Future developments for the smart city of tomorrow

To Barcelona in November? Is a visit worthwhile at this time of year? Definitely yes, if you are interested in the topic of smart cities. Between November 14 and 16, 2017, the Smart City Expo World … Read more

The post Zukunftsentwicklungen für die Smart City von morgen first appeared on the Münchner IT-Blog.

11 December, 2017 01:58PM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Rocket.Chat communication platform enables simplicity through snaps

Created in Brazil, Rocket.Chat provides an open source chat solution for organisations of all sizes around the world. Built on open source values and a love of efficiency, Rocket.Chat is driven by a community of contributors and has seen adoption in all aspects of business and education. As Rocket.Chat has evolved, it has been keen to get its platform into the hands of as many users as possible without the difficulties of installation often associated with bespoke Linux deployments.


By switching to snaps, Rocket.Chat has been able to get its product into the hands of users with as few steps as possible, swapping a multi-stage set-up process for a single command and instant installation. As a deployment method, snaps are typically faster to install, easier to create and more secure than competitive packages.

Since creating the snap, there have been over 100,000 installations, and the snap accounts for 42% of all installs, making it the most popular deployment method.

Download the case study below to learn more about Rocket.Chat’s reasons for adopting snaps and its journey in building the snap.

11 December, 2017 11:51AM

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

YovoTogo Diary-2017-4

Monday October 23, 2017 At 8AM this morning we are at the CRETFP (Regional college for technology education and vocational training) in Dapaong to meet with the responsible persons for this school and those of Jump Lab’Orione, YovoTogo and the Tandjouaré-Dapaong Rotary Club. The goal of this meeting is to define the conditions under which [...]

11 December, 2017 09:40AM by yves

hackergotchi for ZEVENET


New Video: DoS Protection with IPDS module

At Zevenet, we take network security very seriously at every level, from the system to the network. The Denial of Service protection included in the IPDS (Intrusion Prevention and Detection System) module detects these kinds of attack patterns and prevents them by stopping the connection. In this way, the Zevenet load balancer acts as a filter of network threats for your applications and...


11 December, 2017 08:46AM by Zevenet

hackergotchi for Wazo


Sprint Review 17.17

Hello Wazo community!

This release is our birthday release, since the fork! Thanks for following us this last year and keep in touch, the best is yet to come. :)

New features in this sprint

REST API: REST APIs have been added for SIP trunks, timezones, prompt languages, call permissions, voicemail, and IAX configuration.

Network: NAT configuration has been simplified for the case where the Wazo server is hosted in a datacenter.

API interconnection: Wazo is now able to use OAuth2 client authentication mechanisms to interact with external API systems. The first example we are working on is interconnecting Wazo with Zoho; other such APIs include Google, Office 365, etc.
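For background, the OAuth2 client-credentials exchange that such interconnections typically build on looks roughly like this. This is a generic RFC 6749 sketch with hypothetical endpoint and helper names, not Wazo’s actual implementation:

```python
import json
import urllib.parse
import urllib.request

def build_token_request(token_url, client_id, client_secret):
    """Build a POST request for the OAuth2 client-credentials grant."""
    form = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(token_url, data=form, method="POST")

def parse_token_response(body):
    """Pull the bearer token out of a JSON token response."""
    payload = json.loads(body)
    return payload["access_token"], payload.get("token_type", "Bearer")

# The returned access token would then be sent as an
# "Authorization: Bearer <token>" header on calls to the external API.
```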

Ongoing features

User and Tenant Management: We are currently reworking the user and entity (a.k.a. tenant) configuration. This should make installations with multiple entities feel more natural in future versions.

REST API: We are working towards replacing the old orange admin web interface with the more modern and easier to maintain blue web interface (wazo-admin-ui on /admin). Since wazo-admin-ui is only using REST API under the hood, we need REST API to cover all cases of administration of Wazo. Hence we are completing the set of REST API offered by Wazo. You can find the complete API documentation on

The instructions for installing Wazo are available in the documentation. The instructions for upgrading Wazo are also available in the documentation. Be sure to read the breaking changes.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!


11 December, 2017 05:00AM by The Wazo Authors

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Testing a switch to default Breeze-Dark Plasma theme in Bionic daily isos and default settings

Today’s daily ISO for Bionic Beaver 18.04 sees an experimental switch to the Breeze-Dark Plasma theme by default.

Users running 18.04 development version who have not deliberately opted to use Breeze/Breeze-Light in their systemsettings will also see the change after upgrading packages.

Users can easily revert back to the Breeze/Breeze-Light Plasma themes by changing this in systemsettings.

Feedback on this change will be very welcome:

You can reach us on the Kubuntu IRC channel or Telegram group, on our user mailing list, or post feedback on the (unofficial) Kubuntu web forums

Thank you to Michael Tunnell from for kindly suggesting this change.

11 December, 2017 01:15AM

hackergotchi for Qubes


Joanna Rutkowska gives keynote 'Security through Distrusting' at Black Hat Europe 2017

Joanna Rutkowska recently gave a keynote address at Black Hat Europe 2017. The slides from the presentation, Security through Distrusting, are available here: Security through Distrusting slides

Abstract of “Security through Distrusting”

There are different approaches to making (computer) systems (reasonably) secure and trustworthy:

At one extreme, we would like to ensure everything (software, hardware, infrastructure) is trusted. This means the code has no bugs or backdoors, patches are always available and deployed, admins are always competent and trustworthy, and the infrastructure is always reliable…

On the other end of the spectrum, however, we would like to distrust (nearly) all components and actors, and have no single almighty element in the system.

In my opinion, the industry has been far too focused on this first approach, which I see as overly naive and non-scalable to more complex systems.

In this talk, based on my prior work as an offensive researcher in the past, as well as an engineer and architect on the defense side in recent years, I will attempt to convince the audience that moving somewhat towards the “security through distrusting” principle might be a good idea. Equally important, though, the talk will discuss the trade-offs that this move requires and where we can find the sweet spot between the two approaches.

11 December, 2017 12:00AM

Qubes Canary #14

Dear Qubes community,

We have published Qubes Canary #14. The text of this canary is reproduced below. This canary and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View Canary #14 in the qubes-secpack:

Learn about the qubes-secpack, including how to obtain, verify, and read it:

View all past canaries:

                    ---===[ Qubes Canary #14 ]===---


The Qubes core developers who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is December 10, 2017.

2. There have been 36 Qubes Security Bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

    427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors).

5. We plan to publish the next of these canary statements in the first
two weeks of March 2018. Special note should be taken if no new canary
is published by that time or if the list of statements changes without
plausible explanation.

Special announcements


Disclaimers and notes

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently
compromised.  This means that we assume NO trust in any of the servers
or services which host or provide any Qubes-related data, in
particular, software updates, source code repositories, and Qubes ISO
downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other
means, like blackmail or compromising the signers' laptops, to coerce
us to produce false declarations.

The news feeds quoted below (Proof of freshness) serve to demonstrate
that this canary could not have been created prior to the date stated.
It shows that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to
anybody. None of the signers should ever be held legally responsible
for any of the statements made here.

Proof of freshness

$ date -R -u
Sun, 10 Dec 2017 20:50:18 +0000

$ feedstail -1 -n5 -f '{title}' -u
Trump's Jerusalem Folly: Time for Europe to Take the Lead on Peace
U.S. Economy: Trump Tax Plan Worries Europe
Alleged INF Treaty Violation: U.S. Demands NATO Action on Russian Missiles
Donald Trump and Jerusalem: 'I Don't See Potential Upsides'
Liberated Raqqa: The Stench of Death amid Hopes for Life

$ feedstail -1 -n5 -f '{title}' -u
For Older Venezuelans, Fleeing Crisis Means ‘Starting From Zero,’ Even at 90
Protests in Lebanon Near U.S. Embassy After Trump’s Jerusalem Decision
Jerusalem: It’s Tense, Crowded and Can Feel Like a Jail
The Interpreter: The Jerusalem Issue, Explained
Macron Steps Into Middle East Role as U.S. Retreats

$ feedstail -1 -n5 -f '{title}' -u
Netanyahu: Palestinians must face reality over Jerusalem
Nobel Peace Prize winner Ican warns nuclear war 'a tantrum away'
Actress Zaira Wasim: I was molested on flight
Star Wars: The Last Jedi - tributes to Carrie Fisher at LA premiere
North Korea: Urgent need to open channels, UN says after visit

$ feedstail -1 -n5 -f '{title}' -u
Palestinian stabs Israeli in Jerusalem; anti-Trump protest flares in Beirut
Iraq holds victory parade after defeating Islamic State
With foes absent, Venezuela's socialists to gain from local vote
UK's Johnson meets Iran president as he lobbies for jailed aid worker
Honduras tribunal says partial vote recount shows same result

$ curl -s ''

$ python3 -c 'import sys, json; print(json.load(sys.stdin)['\''blocks'\''][10]['\''hash'\''])'
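For clarity, the one-liner above simply digs a block hash out of the JSON document piped in from the (elided) curl call. The same extraction on a small made-up sample, standing in for the live API response:

```python
import json

# Sample payload shaped like what the one-liner expects: a top-level
# "blocks" list of objects, each carrying a "hash" field.
sample = json.dumps({"blocks": [{"hash": "hash%d" % i} for i in range(11)]})

# Same access path as the one-liner: the "hash" of the 11th entry, blocks[10]
block_hash = json.loads(sample)["blocks"][10]["hash"]
print(block_hash)  # prints hash10
```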


[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this
canary in the qubes-secpack.git repo, and (2) via digital signatures
on the corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures!

11 December, 2017 12:00AM

December 10, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: 3rd Ubucon Europe 2018

Yes! A new edition for ubunteros around the world! :))

Ubucons around the world

Is Ubucon made for me?

This event is just for you! ;) You don't need to be a developer, because you'll enjoy lots of talks about everything you can imagine about Ubuntu and share great moments with other users.
Even the language won't be a problem. There, you'll meet people from everywhere, and surely someone will speak your language :)

You can read different posts about the previous Ubucon in Paris here:
Another in Spanish:


Gijón/Xixón, Asturies, Spain
Antiguo Instituto, just in the city center, built in 1797:
Antiguo Instituto


27th, 28th and 29th of April 2018.

Organized by

  • Francisco Javier Teruelo de Luis 
  • Francisco Molinero 
  • Sergi Quiles Pérez 
  • Antonio Fernandes 
  • Paul Hodgetts 
  • Santiago Moreira 
  • Joan CiberSheep 
  • Fernando Lanero 
  • Manu Cogolludo 
  • Marcos Costales

Get in touch!

We're still working on a few details, so please don't book a flight yet. Join our Telegram channel, Google+ or Twitter to get the latest news and future discounts on hotels and transport.

10 December, 2017 07:21PM by Marcos Costales (

hackergotchi for rescatux


Ubuntu feedback on Rescapp needed

While we wait for Rescatux to be rewritten, once again, so that it’s based on Debian 9 (Stretch), its main program, Rescapp, needs some love. Some testing. Some polish.

Rescapp program has been rewritten so that:

  • It no longer depends on SELinux
  • It depends on PyQT5 instead of the now obsolete PyQT4

That means that it can be built for recent versions of Ubuntu, packaged, and put into a repo.

So if you want to help Rescatux / Rescapp development and you happen to have either a:

Ubuntu 16.04 Xenial AMD64 Live CD

or an

Ubuntu 17.10 Artful AMD64 Live CD

you are welcome to test Rescatux repo and report any bugs that you find on the Rescapp issues page.

Flattr this!

10 December, 2017 03:57PM by adrian15

Rescatux 0.51 beta 3 released

Rescatux 0.51 beta 3 has been released.

Rescatux 0.41 beta 1 new options
Update UEFI order in action


  • Torrent (Open the link in the browser and click on Download Torrent File Now)
  • Rescatux 0.51b3 size is about 640 Megabytes.

    Some thoughts:

    • Boot Repair functionality has been removed from Rescatux. Many people, somehow, were using Boot Repair (by default) inside Rescatux even though we don’t support it.
      If you feel Rescapp does not cover all the Boot Repair functionality, you can file a bug as an RFE (Request for Enhancement).
    • Rescapp and chntpw are now installed from a Repo. It should be virtually identical to 0.41b1 release. If something doesn’t work as well as before please report a bug so that we can fix it.
    • This is the last build based on Debian 8 (Jessie).

    Important notice:

    • If you want to use the UEFI options, make sure you use dd or an equivalent tool (Rufus in ‘Direct image’ mode, usb imagewriter, etc.) to put Rescatux onto your USB device
    • If you want to use the UEFI options, make sure you boot your Rescatux device in UEFI mode
    • If you want to use Rescatux, make sure you temporarily disable Secure Boot. Rescatux does not support booting in Secure Boot mode, but it should be able to fix most UEFI Secure Boot problems if booted in non-Secure-Boot mode.
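One common way to do that raw write with dd on Linux is shown below. The ISO filename and target device are placeholders; double-check the device name with lsblk first, since dd irreversibly overwrites whatever it is pointed at:

```shell
# Write the Rescatux ISO byte-for-byte to the USB stick.
# /dev/sdX is a placeholder: replace it with your actual USB device.
sudo dd if=rescatux-0.51b3.iso of=/dev/sdX bs=4M status=progress
sync   # flush buffers before unplugging the stick
```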

    More things I want to do before the stable release are:

    Let’s hope it happens sooner rather than later.

    Roadmap for Rescatux 0.40 stable release:

    You can check the complete changelog, with links to each one of the issues, at the Rescatux 0.32-freeze roadmap, which I’ll be reusing for the Rescatux 0.40 stable release.

    • (Fixed in 0.40b5) [#2192] UEFI boot support
    • (Fixed in 0.40b2) [#1323] GPT support
    • (Fixed in 0.40b11) [#1364] Review Copyright notice
    • (Fixed in: 0.32b2) [#2188] install-mbr : Windows 7 seems not to be fixed with it
    • (Fixed in: 0.32b2) [#2190] debian-live. Include cpu detection and loopback cfg patches
    • (Fixed in: 0.40b8) [#2191] Change Keyboard layout
    • (Fixed in: 0.32b2) [#2193] bootinfoscript: Use it as a package
    • (Fixed in: 0.32b2) [#2199] Btrfs support
    • (Closed in 0.40b1) [#2205] Handle different default sh script
    • (Fixed in 0.40b2) [#2216] Verify separated /usr support
    • (Fixed in: 0.32b2) [#2217] chown root root on sudoers
    • [#2220] Make sure all the source code is available
    • (Fixed in: 0.32b2) [#2221] Detect SAM file algorithm fails with directories which have spaces on them
    • (Fixed in: 0.32b2) [#2227] Use chntpw 1.0-1 from Jessie
    • (Fixed in 0.40b1) [#2231] SElinux support on chroot options
    • (Checked in 0.40b11) [#2233] Disable USB automount
    • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
    • [#2239] Suppose that the image is based on the Super Grub2 Disk version and not Isolinux. The step about extracting an iso inside an iso would no longer be needed. Update doc: Put Rescatux into a media for Isolinux based cd
    • (Fixed in: 0.32b2) [#2259] Update bootinfoscript to the latest GIT version
    • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
    • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
    • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

    Changes (0.51b3):

    • boot-repair was removed
    • rescapp and chntpw fetched from Rescatux repo

    New options (0.41b1):

    • (Added in 0.41b1) Update UEFI order
    • (Added in 0.41b1) Create a new UEFI Boot entry
    • (Added in 0.41b1) UEFI Partition Status
    • (Added in 0.41b1) Fake Microsoft Windows UEFI
    • (Added in 0.41b1) Hide Microsoft Windows UEFI
    • (Added in 0.41b1) Reinstall Microsoft Windows EFI
    • (Added in 0.41b1) Check UEFI Boot

    Improved bugs (0.41b1):

    • (Improved in 0.41b1) Now EFI System partitions are shown properly in the Rescapp menus
    • (Improved in 0.41b1) Now partition types are shown in partition dialogs in the Rescapp menus
    • (Improved in 0.41b1) Now partition flags are shown in partition dialogs in the Rescapp menus
    • (Improved in 0.41b1) Now partition os-prober long names are shown in partition dialogs in the Rescapp menus
    • (Improved in 0.41b1) Show ‘Unknown GNU/Linux distro’ if we ever fail to parse an /etc/issue file.
    • (Improved in 0.41b1) Usability improvement. When moving entries the last entry moved keeps selected.

    Improved bugs (0.40b11):

    • (Improved in 0.40b11) Many source code build improvements
    • (Improved in 0.40b11) Now most options show their progress while running
    • (Improved in 0.40b11) Added a reference to the source code’s README file in the ‘About Rescapp’ option
    • (Improved in 0.40b11) The ‘Not detected’ string was renamed to ‘Windows / Data / Other’ because that’s what usually happens with Windows OSes

    Fixed bugs (0.40b11):

    • (Fixed in 0.40b11) [#1364] Review Copyright notice
    • (Checked in 0.40b11) [#2233] Disable USB automount
    • (Fixed in 0.40b11) Wineasy had its messages fixed (Promote and Unlock were swapped)
    • (Fixed in 0.40b11) Share log function now drops usage of cat to avoid utf8 / ascii problems.
    • (Fixed in 0.40b11) Sanitize ‘Not detected’ and ‘Cannot mount’ messages

    Fixed bugs (0.40b9):

    • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
    • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
    • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
    • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

    Fixed bugs (0.40b8):

    • (Fixed in 0.40b8) [#2191] Change Keyboard layout

    Improved bugs (0.40b7):

    • (Improved in 0.40b7) [#2192] UEFI boot support (Yes, again)

    Improved bugs (0.40b6):

    • (Improved in 0.40b6) [#2192] UEFI boot support

    Fixed bugs (0.40b5):

    • (Fixed in 0.40b5) [#2192] UEFI boot support

    Fixed bugs (0.40b2):

    • (Fixed in 0.40b2) [#1323] GPT support
    • (Fixed in 0.40b2) [#2216] Verify separated /usr support

    Fixed bugs (0.40b1):

    • (Fixed in 0.40b1) [#2231] SElinux support on chroot options

    Reopened bugs (0.40b1):

    • (Reopened in 0.40b1) [#2191] Change Keyboard layout

    Fixed bugs (0.32b3):

    • (Fixed in 0.32b3) [#2191] Change Keyboard layout

    Other fixed bugs (0.32b2):

    • Rescatux logo is not shown at boot
    • Boot entries are named “Live xxxx” instead of “Rescatux xxxx”

    Fixed bugs (0.32b1):

    • Networking detection improved (fallback to network-manager-gnome)
    • The bottom bar did not have a shortcut to a file manager, as is common practice in modern desktops. Fixed when falling back to LXDE.
    • Double-clicking on directories on the desktop opened Iceweasel (Firefox fork) instead of a file manager. Fixed when falling back to LXDE.

    Improvements (0.32b1):

    • Super Grub2 Disk is no longer included. That makes it easier to put the ISO onto USB devices thanks to standard multiboot tools which support Debian Live cds.
    • Rescapp UI has been redesigned
      • Every option is at hand at the first screen.
      • Rescapp options can be scrolled. That makes it easier to add new options without worrying about the final design.
      • Run option screen buttons have been rearranged to make it easier to read.
    • RazorQT has been replaced by LXDE, which seems more mature. LXQT will have to wait.
    • WICD has been replaced by network-manager-gnome. That makes it easier to connect to wired and wireless networks.
    • It is no longer based on Debian Unstable (sid) branch.

    Distro facts:

    • Packages versions for this release can be found at Rescatux 0.40b11 packages.
    • It’s based mainly on Debian Jessie (Stable). Some packages are from Debian Unstable (sid), and some are from Debian Stretch.


    Don’t forget that you can use:

    Help Rescatux project:

    I think we can expect four months maximum until the new stable Rescatux is ready. Helping with these tasks is appreciated:

    • Make a YouTube video for the new options.
    • Make sure documentation for the new options is right.
    • Make snapshots for the new options’ documentation so that they don’t lack images.

    If you want to help please contact us here:

    Thank you and happy download!

    Flattr this!

    10 December, 2017 03:33PM by adrian15

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Costales: Podcast Ubuntu y otras hierbas S02E03: Distros derivadas, entrevista uNav, Linux on Galaxy

    This time, Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales chat about the following topics:

    • Derivative distros: yes or no?
    • Interview with the developer of uNav
    • Linux on Galaxy

    Ubuntu y otras hierbas S02E03

    And pay attention to the news we announce at the end of the show ;) Follow the Ubucon on Telegram, Google+ or Twitter.

    The podcast is available to listen to on:

    10 December, 2017 02:59PM by Marcos Costales (

    December 09, 2017

    Clive Johnston: Send SMS messages from your Plasma Desktop

    Earlier this year I talked about using KDE Connect to send and receive SMS messages via your connected device. Back then sending messages was a bit of a faff and involved having to use the terminal, but as of today this is no longer an issue!

    Meet the KDEConnect SMS sender Plasmoid, which was uploaded earlier today to the KDE Store. Once installed on your system, you can add it to your desktop as a widget (as shown above). On first use, you need to tell it which connection to use by going to the Settings page.



    Once you have it configured to use the correct device, you type the phone number of the person you wish to send the message to into the first box (as below). Please note this needs to include the international dialling code (i.e. +44 for the UK, +353 for Ireland). Then type your message and click the Send button; it’s that simple!
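The widget therefore expects numbers in international E.164-style format. A minimal sketch of that kind of check, included here only for illustration and assumed from the E.164 format (this is not the plasmoid’s actual validation code):

```python
import re

# E.164 international format: leading '+', then up to 15 digits,
# the first of which is non-zero (e.g. +447700900123, +35312345678)
E164 = re.compile(r"\+[1-9]\d{1,14}")

def is_international_number(number):
    """Return True when the number carries an international dialling code."""
    return E164.fullmatch(number) is not None
```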

    Your mobile device will then send the message. The project has a GitHub page, so head over there for the code, new releases and bug reports/feedback.

    You can try it out yourself, on Xenial (16.04), Artful (17.10) or Bionic (18.04) by adding my PPA:

    sudo add-apt-repository ppa:clivejo/plasma-kdeconnect-sms
    sudo apt update
    sudo apt install plasma-kdeconnect-sms

    09 December, 2017 04:07PM

    hackergotchi for VyOS


    A book on VyOS in German is available

    Our community member Markus Stubbig wrote a VyOS book in German that is now available for purchase from Amazon or

    Markus says there are no definite plans for an English version yet because he's not confident about his translation skills and will need help. If you have those skills and want to offer your services, we can connect you with Markus.

    We don't have copies of the book yet (and none of the VyOS maintainers are fluent in German either, though I can read it a little), so we cannot provide any kind of review of the book yet, but we have no reason to doubt Markus' expertise. If you get the book and write a review, we may publish it on the blog.

    09 December, 2017 12:49PM by Daniil Baturin

    hackergotchi for Purism PureOS

    Purism PureOS

    We love Ethical Design

    As part of our wish to contribute to the betterment of society, whenever we work on refining our products or existing software, we will conform to the Ethical Design Manifesto. Our philosophy and social purpose have always been in perfect unison with the principles stated in the Ethical Design Manifesto, and having it as part of our internal design team’s policy is a good way to make sure that we always keep it in mind.

    What is Ethical Design?

    The goal of “ethical” design is to develop technology that respects human beings, whoever they are. It encourages the adoption of ethical business models and, altogether, favors a more ethical society.

    According to the manifesto, ethical design aims to respect:

    • Human Rights: “Technology that respects human rights is decentralised, peer-to-peer, zero-knowledge, end-to-end encrypted, free and open source, interoperable, accessible, and sustainable. It respects and protects your civil liberties, reduces inequality, and benefits democracy.”
    • Human Effort: “Technology that respects human effort is functional, convenient, and reliable. It is thoughtful and accommodating; not arrogant or demanding. It understands that you might be distracted or differently-abled. It respects the limited time you have on this planet.”
    • Human Experience: “Technology that respects human experience is beautiful, magical, and delightful. It just works. It’s intuitive. It’s invisible. It recedes into the background of your life. It gives you joy. It empowers you with superpowers. It puts a smile on your face and makes your life better.”

    Growing the seed of an ethical society

    Working towards an “ethical society” may sound like fighting windmills. I personally see it as a global, constant yet disorganized wish that nonetheless tends to materialize from time to time through a common concerted effort. I don’t think that this effort is about changing some thing because of its unethical nature; it has nothing to do with a fight. Instead, it is about growing the seed of a more ethical thing that would exist next to it.

    In line with this goal and our social purpose is the fact that we aim to work in an “upstream first” way as part of the Free Software community; in order to contribute to the common effort toward growing this ethical seed, any software development and improvement on top of an existing project is intended to be discussed and co-developed upstream first. We don’t want to reinvent the wheel and fork existing projects just because we don’t like the colors of the paint on the wall! That would only fragment the community’s resources and add confusion for users.

    There are so many amazing free software projects that share our philosophy, and we hope to contribute while also ensuring these pieces of software respect human rights, human effort and human experience. These are my guiding principles for Purism’s UI and UX design projects.

    09 December, 2017 08:37AM by François Téchené

    December 08, 2017

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Podcast from the UK LoCo: S10E40 – Clammy Eminent Spot - Ubuntu Podcast

    This week an old man is confused by a modern gaming mouse. We talk to Ikey Doherty from the Solus project about Linux Steam Integration and how snaps are improving game delivery for all users of Linux. We have a multi-player love and go over your feedback.

    It’s Season Ten Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Ikey Doherty are connected and speaking to your brain.

    In this week’s show:

    • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • This week’s cover image is taken from Wikimedia.

    That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    08 December, 2017 11:00PM


    Linux Mint Installation Guide

    The Linux Mint Installation Guide is ready.

    To read it on your computer click

    On your phone or tablet, scan this QR code:

    This guide is currently available in English and in French and it is currently being translated in many more languages.

    Note: Three other guides are planned: An overview of Linux Mint, a developer guide and a troubleshooting/bug_reporting guide. This new collection will eventually replace the old “Linux Mint User Guide”.


    08 December, 2017 02:32PM by Linux Mint


    HackaTUM 2017 – Munich in search of city heroes!

    On the weekend of November 17 to 19, more than 300 students met at the Department of Informatics of the Technical University of Munich for hackaTUM 2017. Under the slogan “Hack the future”, the goal was to develop ideas, … Read more

    The post HackaTUM 2017 – München auf der Suche nach Cityheroes! first appeared on the Münchner IT-Blog.

    08 December, 2017 11:54AM by Stefan Döring

    hackergotchi for Emmabuntüs Debian Edition

    Emmabuntüs Debian Edition

    Emmabuntüs enters Paris Descartes University

    It was as part of a course entitled “Outils informatiques” (computer tools) that Hervé, a member of the collective, gave this presentation of Emmabuntüs on Wednesday, December 6. This course, led by Antoine Caumel, is aimed at Master 1 students of the “Sociétés contemporaines : enjeux éthiques, politiques, sociaux” Master’s programme at Paris Descartes University. About twenty students are enrolled in this [...]

    08 December, 2017 10:00AM by solan

    hackergotchi for Deepin


    Deepin System Updates (2017.12.08)

    • Updated Qt to version 5.6.1: fixed the black window-border issue caused by plugins; solved the HiDPI thin-line issue.
    • Updated TIM to version 2.0.0: solved the issue where online documents could not be opened; the system default browser is now used to open online documents.
    • Updated Deepin Calculator to version 1.0.1: a specific font is used in the historical expression list; optimized the calculation accuracy.
    • Applications updated and added in the Deepin Appstore.

    08 December, 2017 08:11AM by longxiang

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Stephen Michael Kellat: Not Messing With Hot Wheels Car Insertion

    Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.


    As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.

    I don't like talking about numbers in things like this. If you take 64 hours in a two-week pay period, multiply it by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it comes out to a pre-tax gross of under $26,000. I rounded up to a whole number. Admittedly it isn't much.
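    As a rough check of the arithmetic above (the hourly rate here is illustrative, back-solved from the stated figures rather than taken from an official GS pay table):

    ```python
    # Back-of-the-envelope check of the pay figure quoted above.
    hours_per_pay_period = 64      # 32 hours/week * 2 weeks
    pay_periods = 20               # minimum pay periods generally worked per year
    assumed_hourly_rate = 20.00    # illustrative assumption, not an official pay-table value
    gross = hours_per_pay_period * pay_periods * assumed_hourly_rate
    print(gross)  # 25600.0, consistent with "a pre-tax gross of under $26,000"
    ```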

    At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall and even more which resulted in Cards Against Humanity raking in $2,250,000 in record time.

    Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to it are deductible under current law, provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds, but I don't know how, if at all, foreign tax administrations would treat such contributions.

    The congregation is best reached by writing to:

     West Avenue Church of Christ
     5901 West Avenue
     Ashtabula, OH  44004
     United States of America

    With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.

    Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    08 December, 2017 06:03AM

    Robert Ancell: Setting up Continuous Integration on

    Simple Scan recently migrated to the new infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

    I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

    There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

    To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

      build_ubuntu:
        image: ubuntu:rolling
        before_script:
          - apt-get update
          - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
        script:
          - meson _build
          - ninja -C _build install

    The first line is the name of the job - "build_ubuntu". This is going to define how we build Simple Scan on Ubuntu.

    The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" tag, which tracks the most recently released Ubuntu version.

    The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

    Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

    And with that, every time a change is made to the git repository, Simple Scan is built on Ubuntu and I'm told whether that succeeded or not! To make things more visible I added the following to the top of the

    [![Build Status](](

    This gives the following image that shows the status of the build:

    pipeline status

    And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

      build_fedora:    # job name assumed here, following the build_ubuntu pattern above
        image: fedora:latest
        before_script:
          - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
        script:
          - meson _build
          - ninja -C _build install

    Now it builds on both Ubuntu and Fedora with every commit!
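    As a sketch of the "more than that" mentioned earlier, CI can also run the test suite on every commit. The job below is illustrative, not from the original post: the test_ubuntu job name and the meson test step are assumptions, written in the same style as the build jobs.

    ```yaml
    # Hypothetical extension: build, then run the project's test suite.
    test_ubuntu:                 # job name is illustrative
      image: ubuntu:rolling
      before_script:
        - apt-get update
        - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
      script:
        - meson _build
        - ninja -C _build
        - meson test -C _build
    ```

    A failing test then marks the pipeline red just like a failed build would.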

    I hope this helps you get started with CI. Happy hacking!

    08 December, 2017 12:40AM by Robert Ancell

    December 07, 2017

    Cumulus Linux

    Cumulus content roundup: December

    It’s the most wonderful time of the year — that’s right, it’s time for another Cumulus content roundup! We’ve wrapped up all of the best content in a neat little package just for you. (Think of it as an early holiday gift!) Whether you’re interested in centralized configuration or just trying to learn the basics of Linux, this roundup is your roadmap for what’s in this season. The latest articles, videos, industry reports and more are at your fingertips, so get cozy by the fireplace and check out what’s new in open networking trends.

    Cumulus content

    Linux Networking 101 guide: Searching for an easy, comprehensive guide to Linux networking? Look no further! Download this ebook and start learning the language of the data center.

    Forrester’s 2017 Vendor Landscape Report: This report will take you through the characteristics of a network that’s built for the future and help you navigate the vendor ecosystem. Read on to see if your data center is ready for 2018.

    Gartner report: How open is your network vendor?: Many vendors claim to have open solutions, but which ones can support those claims? Check out this report to learn the five questions you should be asking networking vendors.

    Centralized configuration and management: Introduction and overview: Our newest series of how-to videos teaches you what you need to know about centralized configuration. What is it and how can it improve your data center? Watch these videos to answer your questions.

    5 ways to design your container network: If you’re trying to design the perfect container network, then this blog post is for you. In this post, we take you through five different container deployments so you can pick the one that fits your needs. Read on and learn your options.

    Can’t get enough of our content? Check out our learn center, resources page, and solutions section for even more from Cumulus!

    News from the net

    IT science case study: how Monash University moved to OpenStack: Australia’s Monash University needed to switch its legacy data center over to using Linux as a networking operating system. The university explains the process in this exclusive case study. Read on to learn how they made the transition.

    Six ways microservices support digital transformations in businesses: Microservices architectures can play a key role in enabling digital transformation, especially for organizations seeking to modernize legacy applications and codebases. Check out how your business can better leverage microservices.

    5 Kubernetes success tips: start smart: Beginning to work with Kubernetes? Use this expert advice to make the most of container orchestration. Read the rest of the article to learn about how to optimize this orchestration tool.

    The post Cumulus content roundup: December appeared first on Cumulus Networks Blog.

    07 December, 2017 07:40PM by Madison Emery