August 17, 2017



issues with wiki/forum resolved now


The problem was caused by RAID rebuild performance degradation; everything is back to normal now.

17 August, 2017 02:19AM by Yuriy Andamasov

August 16, 2017


Ubuntu developers

Ubuntu Insights: Ubuntu Foundations Development Summary – August 16, 2017

This newsletter is to provide a status update from the Ubuntu Foundations Team. There will also be highlights provided for any interesting subjects the team may be working on. If you would like to reach the Foundations team, you can find us at the #ubuntu-devel channel on freenode.


The State of the Archive

  • After no small amount of effort, the perl 5.26 and gcc-7 transitions migrated to artful on the 10th, unblocking many of the packages that had been stuck in -proposed.
  • As GCC 7 is now the default compiler in artful, the reported build failures now apply to 17.10. Please help us resolve these failing packages for the release.
  • Next in line we have the Qt 5.9 transition. Look for more news about this next week!

Upcoming Ubuntu Dates

Weekly Meeting

16 August, 2017 08:20PM

Ross Gammon: My Debian & Ubuntu work from April to mid-August 2017

Okay, so I have been slack with my blogging again. I have been travelling around Europe with work quite a bit, had a short holiday over Easter in Denmark, and also had 3 weeks of Summer Holiday in Germany.


  • Tidied up the packaging and tried building the latest version of libdrumstick, but tests had been added to the package by upstream which were failing. I still need to get back and investigate that.
  • Updated node-seq (targeted at experimental due to the Debian Stretch release freeze) and asked for sponsorship (as I did not have DM rights for it yet).
  • Uploaded the latest version of abcmidi (also to experimental), and later did so again.
  • Updated node-tmp to the latest version and uploaded to experimental.
  • Worked some more on bluebird RFP, but getting errors when running tests. I still haven’t gone back to investigate that.
  • Updated node-coffeeify to the latest version and uploaded to experimental.
  • Uploaded the latest version of node-os-tmpdir (also to experimental).
  • Uploaded the latest version of node-concat-stream (also to experimental).
  • After encouragement from several Debian Developers, I applied to become a full Debian Developer. Over the summer months I worked with Santiago as my Application Manager and answered questions about working in the Debian Project.
  • A security vulnerability was identified in node-concat-stream, so I prepared a fix for the version in unstable, uploaded it to unstable, and submitted an unblock request bug so that it would be fixed in the coming Debian Stretch release.
  • Debian 9 (Stretch) was released! Yay!
  • Moved abcmidi from experimental to unstable, adding an autopkgtest at the same time.
  • Moved node-concat-stream from experimental to unstable. During the process I had to take care of the intermediate upload to stretch (on a separate branch) because of the freeze.
  • Moved node-tmp to unstable from experimental.
  • Moved node-os-tmpdir from experimental to unstable.
  • Filed a removal bug for creepy, which seems to be unmaintained upstream these days. Sent my unfinished Qt4 to Qt5 porting patches upstream just in case!
  • Uploaded node-object-inspect to experimental to check the reverse dependencies, then moved it to unstable. Then a new upstream version came out which is now in experimental waiting for a retest of reverse dependencies.
  • Uploaded the latest version of gramps (4.2.6).
  • Uploaded a new version of node-cross-spawn to experimental.
  • Discovered that I had successfully completed the DD application process and I was now a Debian Developer. I celebrated by uploading the Debian Multimedia Blends package to the NEW queue, which I was not able to do before!
  • Tweaked and uploaded the node-seq package (with an RC fix) which had been sitting there because I did not have DM rights to the package. It is not an important package anyhow, as it is just one of the many dependencies that need to be packaged for Browserify.
  • Packaged and uploaded the latest node-isarray directly to unstable, as the changes seemed harmless.
  • Prepared and uploaded the latest node-js-yaml to experimental.
  • Did an update to the Node packaging manual now that we are allowed to use “node” as the executable name in Debian instead of “nodejs”, which in the past forced us to do a lot of patching to get Node packages working in Debian.


  • Did a freeze exception bug for ubuntustudio-controls, but we did not manage to get it sponsored before the Ubuntu Studio Zesty 17.04 release.
  • Investigated why Ardour was not migrating from zesty-proposed, but I couldn’t be sure what was holding it up. After getting some help from the developers’ mailing list, I prepared a “no change rebuild” of pd-aubio, which was sponsored by Steve Langasek after a little tweak. This did the trick.
  • Wrote to the Ubuntu Studio list asking for support for testing the Ubuntu Studio Zesty release, as I would be on holiday in the lead up to the release. When I got back, I found the release had gone smoothly. Thanks team!
  • Worked on some blueprints for the next Ubuntu Studio Artful release.
  • As Set no longer has enough spare time to work on Ubuntu Studio, we had a meeting on IRC to decide what to do. We decided that we should set up a Council like Xubuntu has. I drafted an announcement, but we have not gone live with it yet. Maybe someone will have read this far and give us a push (or help). 🙂
  • Did a quick test of Len’s ubuntustudio-controls re-write (at least the GUI bits). We better get a move on if we want this to be part of Artful!
  • Tested ISO for Ubuntu Studio Xenial 16.04.3 point release, and updated the release notes.
  • Started working on a merge of Qjackctl using git-ubuntu for the first time. Had some issues getting going, so I asked the authors for some advice.

16 August, 2017 05:16PM



VyOS 2.0 development digest #5: doing 1.2.x and 2.0 development in parallel

There was a rather heated discussion about the 1.2.0 situation on the channel, and valid points were definitely expressed: while 2.0 is being written, 1.2.0 can't benefit from any of that work, and that's sad. We do need a working VyOS in any case, and we can't just stop doing anything about it and work only on 2.0. My original plan was to put 1.2.0 in maintenance mode once it stabilized, but that would mean no updates at all for anyone, other than bugfixes. To make things worse, some things need, if not a rewrite, then at least very deep refactoring bordering on one, just to make them work again, due to deep changes in the configs of e.g. strongSwan.

There are three main issues with reusing the old code, as I already said: it's written in Perl, it mixes config reading and checking with logic, and it can't be tested outside VyOS. The fourth issue is that the logic for generating, writing, and applying configs is not separated in most scripts either, so they don't fit the 2.0 model of more transactional commits. The question is whether we can do anything about those issues to enable rewriting bits of 1.2.0 in a way that will allow reusing that code in 2.0 when the config backend and base system are ready, and what exactly we should do.

My conclusion so far is that we probably can, with some dirty hacks and extra care. Read on.

The language

I guess by now everyone agrees that Perl is a bad idea. There are few people who know it these days, and there is no justification for learning it. The language is a minefield that lacks a proper error reporting mechanism or means to convey semantics.

If you are new to it, look at these examples:

All "error reporting" enabled, let's try to divide a string by an integer.

$ perl -e 'use strict; use warnings; print "foobar" / 42'
Argument "foobar" isn't numeric in division (/) at -e line 1.

A warning indeed... It didn't prevent the program from producing a value, though: garbage in, garbage out. And here's my long-time favorite (analogous issues have bitten me in real code a number of times):

$ perl -e 'print reverse "dog"'

This prints "dog" rather than "god": reverse reverses lists, and a single-element list reversed is itself; reversing a string takes scalar reverse. Even if you know that it has to do with "list context", good luck finding information about the default context of this or that function in the docs. In short, if the language of VyOS 1.x weren't Perl, a lot of bugs would be outright impossible.
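For contrast, here is the same operation in Python (the replacement candidate discussed below). This is just an illustrative sketch, but it shows the key difference: the mistake is a hard error, not a warning followed by garbage.

```python
# The same "string divided by integer" mistake in Python 3: instead of
# warning and still producing a value, it raises a TypeError.
try:
    result = "foobar" / 42
except TypeError as err:
    print("refused:", err)
```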

Python looks like a good candidate for config scripts: it's strongly typed, the type and object system is fairly expressive, there are nice unit test libraries, template processors and other things, and it's reasonably fast. What I don't like about it, and about dynamically typed languages in general, is that it needs damn good test coverage, because the set of errors it can detect at compile time is limited and a lot of errors make it to runtime; but there are always compromises.

But we need bindings. VyConf will use sockets and protobuf messages for its API, which makes writing bindings for pretty much any language trivial, but in 1.x.x it's more complicated. The C++/Perl library from the VyOS backend is not really easy to follow, and not trivial to produce bindings for. However, we have cli-shell-api, which is already used in config scripts written in shell, and behaves as it should. It also produces fairly machine-friendly output, even though its error reporting is rudimentary (then again, error reporting of the C++ and Perl library isn't all that nice either). So, for a proof of concept, I decided to make a thin wrapper around cli-shell-api; later it can be rewritten as a real C++ binding if this approach shows its limitations. That will need some extraction and cleanup of the C++ library logic to replicate the behaviour (why does the C++ library itself link against the Perl interpreter library? Did you know it also links against a specific version of the apt-pkg library, which was never meant for end users and made no promise of API stability, for the version comparison function it uses for sorting names of nodes like eth0? That's another story, though).

Anyway, I need to add the Python library to the vyatta-cfg package, which I'll do soon; for the time being you can put the file on your VyOS (it works in 1.1.7 with Python 2.6) and play with it:

Right now it exposes just a handful of functions: exists(), return_value(), return_values(), and list_nodes(). It also has is_leaf/is_tag/is_multi functions that it uses internally to produce somewhat better error reporting, though they are unnecessary in config scripts, since you already know that about nodes from the templates. Those four functions are enough to write a config script for something like squid, dnsmasq, openvpn, or anything else that can reload its config on its own. It's programs that need fancy update logic that really need exists_orig or return_effective_value. Incidentally, a lot of the components that need a rewrite to repair them, or that could seriously benefit from a serious overhaul, are like that: for example, iptables is currently handled by manipulating individual rules even though iptables-restore is atomic; likewise, openvpn is managed by passing it the config in command line options while it's perfectly capable of reloading its config, which would make tunnel restarts a lot less disruptive; and strongswan, the holder of the least maintainable config script, is indeed capable of live reload too.
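As an illustration of how far those four functions go, here is a sketch against a mock config tree. The dict-backed mock and the node paths are hypothetical stand-ins; only the function names come from the wrapper described above, which shells out to cli-shell-api instead.

```python
# Hypothetical mock of the four wrapper functions, backed by a plain
# dict so the script logic can run anywhere; the real module would
# query the VyOS config via cli-shell-api instead.
TREE = {
    "service dns forwarding cache-size": "150",
    "service dns forwarding listen-on": ["eth0", "eth1"],
}

def exists(path):
    return path in TREE

def return_value(path):
    return TREE.get(path)

def return_values(path):
    return TREE.get(path, [])

def list_nodes(path):
    prefix = path + " "
    return [k[len(prefix):] for k in TREE if k.startswith(prefix)]

# A daemon that can reload its own config only needs reads like these:
cache_size = None
if exists("service dns forwarding cache-size"):
    cache_size = return_value("service dns forwarding cache-size")
listen_on = return_values("service dns forwarding listen-on")
print(cache_size, listen_on)
```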

Which brings us to the next part...

The conventions

To avoid having to do two rewrites of the same code instead of just one, we need to make sure that at least a substantial part of the code from VyOS 1.2.x can be reused in 2.0. For this we need to set up a set of conventions. I suggest the following; let's discuss it.

Language version

Python 3 SHALL be used.

Rationale: well, how much longer can we all keep 2.x alive if 3.0 is just a cleaner and nicer implementation?

Coding standard

No single function SHOULD be longer than 100 lines.

Rationale: ;)

Logic separation and testability

This is the most important part. To be able to reuse anything, we need to separate assumptions about the environment from the core logic. To be able to test it in isolation and make sure most bugs are caught on developer workstations rather than test routers, we need to avoid dependencies on global state whenever possible. Also, to fit the transactional commit model of VyOS 2.0 later, we need to keep consistency checking, generating configs, and restarting services separate.

For this, I suggest that config scripts follow this blueprint:

import sys

import vyos.config

class ScaryException(Exception):
    pass

def get_config():
    foo = vyos.config.return_value("foo bar")
    bar = vyos.config.return_value("baz quux")
    return {"foo": foo, "bar": bar} # Could be an object depending on size and complexity...

def verify(config):
    result = do_some_checks(config)
    if checks_succeed(result):
        return None
    else:
        raise ScaryException("Some error")

def generate(config):
    pass # Render config files for the service; change nothing live yet

def apply(config):
    pass # Restart or reload the affected service

if __name__ == '__main__':
    try:
        c = get_config()
        verify(c)
        generate(c)
        apply(c)
    except ScaryException as e:
        print(e, file=sys.stderr)
        sys.exit(1)

This way the functions that process the config can be tested outside of VyOS, by creating the same structure that get_config() would create by hand (or loading it from a file) and passing it as an argument. Likewise, in 2.0 we can call the verify(), generate(), and apply() functions separately.
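A minimal sketch of such an out-of-VyOS test, assuming the blueprint above; the check inside verify() is a hypothetical example, not taken from any real script:

```python
class ScaryException(Exception):
    pass

def verify(config):
    # Hypothetical consistency check: "foo" must not be empty.
    if not config["foo"]:
        raise ScaryException("foo must not be empty")

# Build, by hand, the same structure get_config() would return:
good = {"foo": "eth0", "bar": "192.0.2.1"}
bad = {"foo": "", "bar": "192.0.2.1"}

verify(good)  # passes silently
try:
    verify(bad)
except ScaryException as e:
    print("caught:", e)
```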

Let me know what you think.

16 August, 2017 04:54PM by Daniil Baturin


Ubuntu developers

Ubuntu Insights: Kernel Team Summary – August 16, 2017

Development (Artful / 17.10)

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The artful kernel is now based on Linux 4.11. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

  • The kernel in the artful-proposed pocket of the Ubuntu archive has been updated to v4.12.7
  • The kernel in the Artful staging repository has been updated to v4.13-rc5

Stable (Released & Supported)

  • Embargoed CVEs CVE-2017-1000111 and CVE-2017-1000112 have been made public and the fixes released for all the affected kernels (including their derivatives and rebases):

     trusty    3.13.0-128.177
     xenial    4.4.0-91.114
     zesty     4.10.0-32.36
  • The Xenial and Xenial-based kernels have been re-spun to fix a regression with OpenStack (LP: #1709032) and the following packages are on their way to being promoted to -updates:

     xenial            4.4.0-92.115
     xenial/raspi2     4.4.0-1070.78
     xenial/snapdragon 4.4.0-1072.77
     xenial/aws        4.4.0-1031.40
     xenial/gke        4.4.0-1027.27
     trusty/lts-xenial 4.4.0-92.115~14.04.1
  • Current cycle: 04-Aug through 26-Aug

              04-Aug  Last day for kernel commits for this cycle.
     07-Aug - 12-Aug  Kernel prep week.
     13-Aug - 25-Aug  Bug verification & Regression testing.
              28-Aug  Release to -updates.
  • Next cycle: 25-Aug through 16-Sep

              25-Aug  Last day for kernel commits for this cycle.
     28-Aug - 02-Sep  Kernel prep week.
     03-Sep - 15-Sep  Bug verification & Regression testing.
              18-Sep  Release to -updates.


  • eventstat 0.04.00 for 17.10 has been released. It now uses kernel trace events rather than the deprecated /proc/timer_stats interface.
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on freenode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at:
  • The current CVE status

16 August, 2017 01:16PM

Didier Roche: Ubuntu GNOME Shell in Artful: Day 3

After introducing a real vanilla GNOME session yesterday, let's see how we are using it to implement small behavior differences and to transform the current Ubuntu Artful session. For more background on this, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 3: a story of sound and collaboration…

Some devices have very low volume even when pushed to their maximum. One example is the X220, where most videos on YouTube, or music in Rhythmbox, don't give great results even at maximum volume.

Pulseaudio can amplify some of those sound devices (it detects whether amplification is available on an output source). Doing so, at the price of sound quality, makes at least the embedded speakers usable.

There are 3 components involved:

  • gnome-control-center, which has a slider to push the volume of supported devices above 100%.
  • gnome-settings-daemon, which handles the multimedia keys for volume up and down.
  • gnome-shell, which shows a slider in the volume menu.

However, this brings some issues with the current design: if you set the volume above 100% in GNOME Control Center and then press the multimedia key up, the volume is reset to 100%, and you can't go further using only the keys. A similar behavior is observable with the GNOME Shell slider, which resets the volume below what you set in GNOME Control Center.

As a video will explain it better than words:

This behavior gets a little bit annoying when using such hardware day to day (you set the volume above 100%, then press the multimedia key “up”, and it's back to a barely audible state), and it's not that consistent either, as the sliders in the OSD, GNOME Shell and GNOME Control Center aren't in sync.

What we did a few cycles ago in Ubuntu was to introduce the ability to set the volume above 100% from all of those, but conditionally. This was done in Unity Control Center, Unity Settings Daemon and Unity for 14.04.

When the switch is off, which is the default, we remove any possibility to amplify and set the sound volume above 100%, be it from GNOME Control Center, the multimedia keys or any Shell slider, to keep consistency. The option only appears, though, if some output source supports amplified volume; it's disabled otherwise. If it's available and you then switch it on, all of those methods enable amplifying the volume, keeping the sliders in sync in a consistent manner.

We talked about it at GUADEC with Allan, who needed a little bit more context and time to think about it (it might be a little bit late for the GNOME 3.26 cycle anyway). However, we didn't want to ship Ubuntu 17.10 with the current behavior, so we reimplemented that design, but made it fully conditional on the XDG_CURRENT_DESKTOP variable we talked about yesterday. Consequently, the GNOME vanilla session shows the behavior you can see in the above video, and nothing in the upstream experience is impacted. The Ubuntu session, however, conditioned by this variable (as for any gsettings key override), will show the modifications we made in GNOME Settings Daemon, GNOME Shell and GNOME Control Center, using our own gsettings key that we seeded under the com.ubuntu settings namespace so that we don't abuse the org.gnome one until this is blessed upstream. People who enabled that setting in a previous Unity session will thus be transitioned automatically.
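A sketch of the kind of session check involved: the helper function and the example values are hypothetical illustrations, but the idea matches the article, with Ubuntu-specific behavior gated on XDG_CURRENT_DESKTOP identifying the Ubuntu session.

```python
import os

def is_ubuntu_session(environ=os.environ):
    # XDG_CURRENT_DESKTOP is a colon-separated list of desktop names.
    desktop = environ.get("XDG_CURRENT_DESKTOP", "")
    return "ubuntu" in desktop.split(":")

# Hypothetical example values for the two sessions:
print(is_ubuntu_session({"XDG_CURRENT_DESKTOP": "ubuntu:GNOME"}))  # True
print(is_ubuntu_session({"XDG_CURRENT_DESKTOP": "GNOME"}))         # False
```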

Here is a video illustrating this behavior:

For people who want to see the discussion we’re having with the GNOME design team to get something similar to this (maybe it won’t be completely the same, but in the same vein), you can head over to the corresponding bug.

We are trying to use these conditional per-session patches to keep the upstream vanilla session as vanilla as possible when it makes sense (meaning, when it's not about integrating with a Debian-package-based distro, for instance). That enables us to deliver the desired and current upstream look and feel, while still being able to propose and implement the slightly subtle changes we think are important for our user base.

As yesterday, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

There is still something in the GNOME vanilla session which is different from upstream; can you spot it? (Hint: it's an icon.) It feels a little bit out of place in the Ubuntu session as well. We are going to fix it tomorrow :)

16 August, 2017 11:00AM



Tails report for July, 2017


  • Tails 3.2 is scheduled for October 3.

Documentation and website

  • We improved our installation instructions for Ubuntu to configure the PPA through Software & Updates instead of the command line.

  • We published instructions on how to repair a Tails 3.0.1 broken by the automatic upgrade.

  • We documented that Tails fails to start on computers with NVIDIA Maxwell graphics.

  • We updated the terminology on our website to stop mentioning SD cards and always talk about Tails USB stick instead of Tails device. #9965

  • We made it clearer in the system requirements that Tails doesn't work on handheld devices.

  • We improved the link and QR code to get back to the same step when switching device during installation. #12319

  • We updated our command line instructions to use apt instead of apt-get.

  • We renamed Mac OS X to macOS, its new name.

  • We improved the inlining mechanism that links to release notes during upgrades. #13341


  • We received a donation from ExpressVPN.

  • Our OTF proposal was accepted and it will help us do great work between 2017Q4 and 2018Q3:

    • TrueCrypt support in GNOME: graphical utilities to mount TrueCrypt volumes (#11684, #6337)
    • Graphical interface for the Additional Packages persistent feature: allow users to customize which applications are available in their Tails (#5996 #9059)
  • We started using CCT, the Center for the Cultivation of Technology, as our European fiscal sponsor for real, and are very happy about it. In the long run they will help us spend less time doing administrative work and more time improving Tails!

  • Next INpact started a donation campaign to support Tails, Tor and VeraCrypt using 33% of total donations.


Past events


All the website

  • de: 56% (2809) strings translated, 7% strings fuzzy, 49% words translated
  • fa: 42% (2094) strings translated, 10% strings fuzzy, 44% words translated
  • fr: 87% (4354) strings translated, 2% strings fuzzy, 84% words translated
  • it: 30% (1500) strings translated, 5% strings fuzzy, 26% words translated
  • pt: 25% (1268) strings translated, 10% strings fuzzy, 22% words translated

Total original words: 53070

Core pages of the website

  • de: 77% (1456) strings translated, 13% strings fuzzy, 77% words translated
  • fa: 34% (648) strings translated, 12% strings fuzzy, 35% words translated
  • fr: 95% (1812) strings translated, 4% strings fuzzy, 95% words translated
  • it: 73% (1386) strings translated, 14% strings fuzzy, 72% words translated
  • pt: 44% (842) strings translated, 16% strings fuzzy, 45% words translated

Total original words: 17252


  • Tails has been started more than 690564 times this month. This makes 22276 boots a day on average.
  • 15501 downloads of the OpenPGP signature of Tails ISO from our website.
  • 160 bug reports were received through WhisperBack.

16 August, 2017 10:34AM


Ubuntu developers

Valorie Zimmerman: Repugnant

I grew up in a right-wing, Republican family. As I grew to adulthood and read about the proud history of the Republican party, beginning with Lincoln, I embraced that party, even as racism began to be embraced as a political strategy during Nixon's campaign for president. I overlooked that part, because I didn't want to see it. Besides, the Democrats were the party of racists.

However, as I heard about the crimes that President Nixon seemed to be excusing, and that people around me also seemed to excuse, I began to think long and hard about party versus principle. Within a few years, I left that party, especially as I saw the Democrats, so long the party steeped in racism, begin to attempt to repair that damage done to the country. It took me many years to admit that I had changed parties, because my beliefs have not changed that much. I just see things more clearly now, after reading a lot more history.

Today I've seen a Republican president embrace racism, support the Confederacy, and defend racists, neo-Nazis, white nationalists, and the Ku Klux Klan -- a group his father supported in Queens, New York. Fred Trump was arrested for marching publicly in full regalia, masked, hooded and robed. I've seen no report that he was convicted, although there are pictures of the march and the arrest report in the local newspaper.

Make no mistake about it; today's statement was deliberate. Trump's entry into the political fray was as a leader of the so-called birthers, questioning Barack Obama's citizenship. His announcement of candidacy was a full-throated anti-immigrant stance, which he never moderated and has not changed.

Yes, previous American presidents have been racist, some of them proudly so. But since the Civil War we have not seen -- until today -- a president of the United States throw his political lot in with white nationalists and neo-Nazis. Good people voted for this man, hoping that he would shake things up in Washington. Good people cannot stand by statements such as Trump made today.

It is time for the Congress to censure this President. The statements made today are morally bankrupt, and are intolerable. Good people do not march with neo-Nazis, and good people cannot let statements such as those made today, stand.

16 August, 2017 03:03AM by Valorie Zimmerman

August 15, 2017

The Fridge: Ubuntu Weekly Newsletter Issue 516

Welcome to the Ubuntu Weekly Newsletter. This is issue #516 for the week of August 8 – 14, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

15 August, 2017 10:59PM

Cumulus Linux

Cumulus content roundup: August

It’s time for another Cumulus content roundup! To bring you some of our favorite think-pieces about open source networking trends, both from our website and around the Internet, we’ve wrangled all the links into one convenient spot. From virtual network optimization to private clouds to hyperscale data centers, we’ve got all of the best topics covered. Take a look at these exciting new developments, and let us know what you think in the comments section below.


Cumulus Networks’ latest and greatest

Private cloud vs. public cloud: If you’re getting ready for a data center refresh or thinking about moving to a private cloud, this article is for you. We break down the major differences, benefits and use cases between each type of cloud environment, giving you the information you need to make the right choice for your organization. Read on and find out if it’s time for you to switch to a private cloud.

Monash University video: How did Cumulus Networks manage to migrate Monash University’s entire data center in just two weeks? Senior Consulting Engineer Eric Pulvino breaks it down for you and explains how Cumulus was able to do the impossible. Watch the video to learn more.

Virtual networking optimization and best practices: Are you making the most of network optimization? It is an incredibly important component to scalability and efficiency. Network optimization aids a business in making the most of its technology, reducing costs and even improving upon security. Through virtualization, businesses can leverage their technology more effectively — they just need to follow a few virtual networking best practices. Learn more here.

Want to read even more of what Cumulus Networks has to offer? We have plenty of information about open source, web-scale networking, and more. Feel free to check out our learn center, resource page, or solutions section for more information.


The best from around the web

Enterprises mirror hyperscale data centers to build new infrastructure: The rise of large cloud providers like Amazon Web Services, Microsoft Azure and the Google Cloud Platform has led to a new kind of infrastructure, and smaller enterprises are learning how to emulate these hyperscale data centers. Read this article to find out how they’re accomplishing this.

Open source is powering the digital enterprise: By leveraging broad based collaboration and strong communities of independent developers, open source innovation is transforming the very core of information technology and enabling organizations to win in today’s digital economy. As a community we all gain from these efforts. Find out how Dell EMC is supporting open source.

Open source network tools compete with shrinking vendor equipment: There was a time when a dozen vendors offered a given piece of service provider network equipment. Vendor consolidation has pared that number to as few as three, and with vendors facing further profit pressure, it’s likely the number will fall further. Read about how open source is giving vendor consolidation a run for its money.

The post Cumulus content roundup: August appeared first on Cumulus Networks Blog.

15 August, 2017 06:19PM by Madison Emery


Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 15 Aug 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: cloud-init at DebConf17

Josh on the cloud-init team presented at DebConf17 last week. His talk included an introduction to how cloud-init works and then an overview of recent developments made by the team. Replay is available on the DebConf17 website.

cloud-init and curtin


  • Update to capabilities documentation to include CLI interface features.
  • Fix AWS NVMe ephemeral storage setup (LP: #1672833)
  • Fix EC2 datasource moved to init-local stage (LP: #1709772)
  • Fix v2 yaml preserving bond/bridge parameters when rendering (LP: #1709180)
  • Added meetingology bot to IRC channel for bi-weekly meeting


  • Update vmtest to purge symlinks from output collection due to missing links
  • Update Jenkins artifact collection regex

git ubuntu

As a reminder, links to the first two posts in the ‘git ubuntu’ series are below.

Recently there was also a discussion on ubuntu-devel as to what tags are expected when using git ubuntu.

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

asterisk, 1:13.14.1~dfsg-2ubuntu4, costamagnagianfranco
asterisk, 1:13.14.1~dfsg-2ubuntu3, costamagnagianfranco
byobu, 5.121-0ubuntu1, kirkland
dpdk, 17.05.1-2, None
golang-context, 1.1-2ubuntu1, mwhudson
golang-github-gorilla-mux, 1.1-3ubuntu1, mwhudson
golang-github-mattn-go-colorable, 0.0.6-1ubuntu7, mwhudson
golang-github-mattn-go-sqlite3, 1.2.0+git20170802.105.6654e41~dfsg1-1ubuntu1, mwhudson
golang-github-pborman-uuid, 0.0+git20150824.0.cccd189-1ubuntu9, mwhudson
golang-gocapability-dev, 0.0~git20160928.0.e7cb7fa-1ubuntu2, mwhudson
golang-gopkg-flosch-pongo2.v3, 3.0+git20141028.0.5e81b81-0ubuntu9, mwhudson
golang-gopkg-inconshreveable-log15.v2, 2.11+git20150921.0.b105bd3-0ubuntu12, mwhudson
golang-gopkg-lxc-go-lxc.v2, 0.0~git20161126.1.82a07a6-0ubuntu5, mwhudson
golang-gopkg-tomb.v2, 0.0~git20161208.0.d5d1b58-1ubuntu2, mwhudson
golang-petname, 2.7-0ubuntu2, mwhudson
golang-x-text, 0.0~git20170627.0.6353ef0-1ubuntu1, mwhudson
golang-yaml.v2, 0.0+git20170407.0.cd8b52f-1ubuntu1, mwhudson
libvirt, 3.6.0-1ubuntu1, paelzer
libvirt-python, 3.5.0-1build1, mwhudson
lxc, 2.0.8-0ubuntu4, doko
markupsafe, 1.0-1build1, mwhudson
mod-wsgi, 4.5.17-1, None
mongodb, 1:3.4.7-1, None
nagios-nrpe, 3.2.0-4ubuntu1, nacc
numactl, 2.0.11-2.1, None
php-defaults, 54ubuntu1, nacc
php7.1, 7.1.8-1ubuntu1, nacc
pwgen, 2.08-1, None
pylibmc, 1.5.2-1build1, mwhudson
python-cffi, 1.9.1-2build2, doko
python-django, 1:1.11.4-1ubuntu1, vorlon
python-gevent, 1.1.2-build2, mwhudson
python-sysv-ipc, 0.6.8-2build4, mwhudson
python-tornado, 4.5.1-2.1~build2, mwhudson
requests, 2.18.1-1, None
urwid, 1.3.1-2ubuntu2, doko
Total: 36

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

cgroup-lite, trusty, 1.11~ubuntu14.04.3, serge-hallyn
libapache2-mod-auth-pgsql, zesty, 2.0.3-6.1ubuntu0.17.04.1, racb
libapache2-mod-auth-pgsql, xenial, 2.0.3-6.1ubuntu0.16.04.1, racb
libapache2-mod-auth-pgsql, trusty, 2.0.3-6ubuntu0.1, racb
libseccomp, trusty, 2.1.1-1ubuntu1~trusty4, mvo
libvirt, trusty, 1.2.2-0ubuntu13.1.21, paelzer
libvirt, xenial, 1.3.1-1ubuntu10.13, paelzer
libvirt, zesty, 2.5.0-3ubuntu5.4, paelzer
logcheck, xenial, 1.3.17ubuntu0.1, nacc
logcheck, trusty, 1.3.16ubuntu0.1, nacc
lxc, trusty, 2.0.8-0ubuntu1~14.04.1, stgraber
lxcfs, trusty, 2.0.7-0ubuntu1~14.04.1, stgraber
lxd, trusty, 2.0.10-0ubuntu1~14.04.1, stgraber
lxd, zesty, 2.16-0ubuntu2~ubuntu17.04.1, stgraber
lxd, xenial, 2.16-0ubuntu2~ubuntu16.04.1, stgraber
maas, xenial, 2.2.2-6099-g8751f91-0ubuntu1~16.04.1, andreserl
maas, zesty, 2.2.2-6099-g8751f91-0ubuntu1~17.04.1, andreserl
mongodb, xenial, 1:2.6.10-0ubuntu1.1, nacc
php5, trusty, 5.5.9+dfsg-1ubuntu4.22, mdeslaur
php7.0, zesty, 7.0.22-0ubuntu0.17.04.1, mdeslaur
php7.0, xenial, 7.0.22-0ubuntu0.16.04.1, mdeslaur
pollinate, trusty, 4.25-0ubuntu1~14.04.1, smoser
pollinate, xenial, 4.25-0ubuntu1~16.04.1, smoser
postfix, xenial, 3.1.0-3ubuntu0.1, nacc
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.5, costamagnagianfranco
ubuntu-advantage-tools, trusty, 2, vorlon
ubuntu-advantage-tools, xenial, 2, vorlon
ubuntu-advantage-tools, zesty, 2, vorlon
Total: 28

Contact the Ubuntu Server team

15 August, 2017 05:12PM

Ubuntu Insights: Week 32 In Snapcraft

Welcome to the weekly development notes for snapcraft! This covers work from August 7 until August 13 of 2017.

Development in master

The theme of the code landing in master this week was mostly robotics: polishing and refactoring to ease the development of ROS 2 support (the ament plugin), as well as support for new features.

Aside from that, it is worth mentioning that the next release of snapcraft (2.34) will ship a Windows MSI installer to ease installation and get snapcraft working on that operating system (sans local building support).

  • kbuild plugin: support Makefile without install target PR: #1432
  • recording: record the original snapcraft.yaml PR: #1407
  • options: properly handle missing compiler prefix PR: #1425
  • kbuild plugin: move over the cross-compiling logic from the kernel plugin PR: #1417
  • catkin plugin: include-roscore is a boolean PR: #1472
  • catkin plugin: default to release build PR: #1470
  • catkin plugin: rosinstall-files is a pull property PR: #1473
  • catkin plugin: support passing args to cmake PR: #1471
  • Extract rosdep into its own package PR: #1464
  • Add cx_Freeze options targeting bin/snapcraft PR: #1478
  • cli: better error message for missing mksquashfs PR: #1481
  • ci: skip the CLA check for pull requests from the bot PR: #1482

This week's Pull Requests

These are the pull requests that showed up during the week:

  • lxd: configure user in container PR: #1484
  • lxd: path cannot have extra forward slashes PR: #1483
  • rosdep: add support for multiple dependency types PR: #1479
  • kbuild plugin: use ARCH from environment if set PR: #1474
  • add support for the “contact” field in snapcraft PR: #1447
  • cli: properly handle exceptions PR: #1436
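One of the proposals above adds a top-level "contact" field (PR #1447). As a rough sketch of how such a field might sit in a snapcraft.yaml (the snap name, email address and exact placement here are assumptions for illustration, not taken from the PR):

```yaml
name: hello-world
version: '1.0'
summary: Minimal example snap
description: A hypothetical snap used only to illustrate the proposed field.
contact: maintainer@example.com  # proposed top-level field from PR #1447
grade: stable
confinement: strict

parts:
  hello:
    plugin: nil
```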

For a full list of pull requests go to

Current active design forum discussions


More snaps!

If you need to install these, look no further and follow the installation instructions:

And some more snaps are coming soon (blockchain related), with recently landed snapcraft.yaml files now being added to CI/CD:


This week, on the 18th and 19th, you can catch part of the snapcraft team, Leo and Sergio, speaking at UbuconLA 2017. These are the sessions (they will be given in Spanish):

This article originally appeared at the snapcraft github

15 August, 2017 03:54PM

Didier Roche: Ubuntu GNOME Shell in Artful: Day 2

Let’s continue our journey and progress on transforming current Ubuntu Artful. For more background on this, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 2: and the GNOME vanilla session rises…

Today is about differentiating our Ubuntu and GNOME vanilla sessions to provide different defaults per session.

Remember that the GNOME “vanilla” session yesterday looked like this: GNOME Vanilla session, as per Monday 14th of August 2017

After upgrading today, there is a whole new layout, very close to a pure upstream GNOME default look: GNOME Vanilla session, as per Tuesday 15th of August 2017

GNOME Vanilla session in overview mode, as per Tuesday 15th of August 2017

What can we notice?

  • The GNOME vanilla session has a different set of icons and GTK themes
  • Default icon sizes are drastically bigger
  • The desktop doesn't show any icons by default
  • There is only one window button (the close icon), on the right
  • What you can't see is that even the cursor icon is different
  • As yesterday, this GNOME session still has a different font and symbolic icons

A lot of other noticeable changes that we make in ubuntu (welcoming window disabled in GNOME Software, file chooser dialog always opening in the current directory instead of the recommended one, default set of plugins in some software like Rhythmbox, power settings) are reset to their defaults in that particular session.

Remember that this session isn't installed by default but is available via a simple apt install gnome-session. It has never been easier in Ubuntu to taste something really close to the upstream vision!

Let’s now look at the Ubuntu default session:

Ubuntu session after per desktop overrides

Ubuntu session overview

Not that many visible changes: we still have icons visible on the desktop in that session. However, the trash (or bin, take the flavor you prefer ;)) icon is available back on the desktop, as our dock won't include it. The more observant will notice that there are 3 window buttons (minimize, maximize and close, which makes more sense with the incoming dock in that session), on the right, as we announced some days ago!

How did we implement that? All of this magic relies on a very recent glib feature reported here. This one isn't available upstream yet, but we got the blessing of Allison (upstream glib developer) to take it as is for now. We thus uploaded it as a distro patch. However, this change is going to be merged in glib master soon, once more regression tests are written. Alberts implemented a per-session gsettings override, which enables having different defaults per desktop name. Allison cleaned it up a little bit while we were discussing it at GUADEC, and Alberts fixed some remaining bugs! Thanks to both of them for proactively helping us deliver this!

What remained to be done was to add new desktop names: our default ubuntu session has XDG_CURRENT_DESKTOP=ubuntu:GNOME while the vanilla GNOME one has just XDG_CURRENT_DESKTOP=GNOME. Then, reorganizing the existing overrides (after a lot of cleanup, removing redundancies across packages) to separate “global distribution-wide overrides” that we want globally available from per-desktop ones was easy. The idea is really to only impact our sessions when we want behavior different from the upstream default, without impacting the other session.
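As a concrete sketch, a per-desktop default can be expressed in a gschema.override file with the new [schema:desktop] section syntax; the file name, keys and values below are illustrative rather than the actual Ubuntu packaging:

```ini
# 10_illustrative.gschema.override (hypothetical file)

# Distribution-wide default, applied in every session:
[org.gnome.desktop.interface]
font-name = 'Ubuntu 11'

# Applied only when XDG_CURRENT_DESKTOP contains "ubuntu":
[org.gnome.desktop.wm.preferences:ubuntu]
button-layout = ':minimize,maximize,close'

# Applied only in sessions advertising "Unity":
[org.gnome.desktop.wm.preferences:Unity]
button-layout = 'close,minimize,maximize:'
```

Such overrides only take effect once compiled into the binary schema cache with glib-compile-schemas.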

So, what about Unity, you will say? Remember that people upgrading will still have the “unity” session available (it won't be the default session, and people will be transitioned to our new default ubuntu one using GNOME Shell), and people installing from scratch can still apt install unity if they want to have this unity session available in their display manager.

The unity session

Well, buttons are on the left, as people might expect with the unity experience, and the trash icon isn't displayed on the desktop but still on the Unity launcher. This is thanks to Unity now defining XDG_CURRENT_DESKTOP=Unity:Unity7:ubuntu, still having access to the ubuntu overrides, but having the Unity ones take priority, like button positions, icons to display on the desktop, some shortcuts…
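To illustrate how such a colon-separated XDG_CURRENT_DESKTOP value is resolved (a sketch only: the actual matching logic lives in glib, this just shows the priority order):

```shell
#!/bin/sh
# XDG_CURRENT_DESKTOP holds a colon-separated list of desktop names;
# per-desktop overrides are matched against each entry in order, so
# earlier entries (here "Unity") take priority over later ones.
XDG_CURRENT_DESKTOP="Unity:Unity7:ubuntu"

old_ifs=$IFS
IFS=:
for desktop in $XDG_CURRENT_DESKTOP; do
    echo "would check overrides for: $desktop"
done
IFS=$old_ifs
```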

We had similarly warned the Ubuntu MATE team, as those changes were impacting them too, so that they could prepare for it.

Of course, all this is only overriding defaults. It means that if you change one of those settings (like the chosen theme or settings in an application), this is a user change. As a user change is more important than defaults, it will impact all sessions. Indeed, it would feel weird to have some keys impacting only some sessions, and others, like your social accounts, where you definitely want them to be system-wide. The more robust and understandable behavior is thus to say that any user action changing a setting will be available whatever desktop the user is on.

We think that this scheme is an improvement over the current override system in GNOME Shell. Indeed, that one is only available in a few hardcoded GNOME Shell modes, and so we would have needed a distro patch to add our own “mode”. Furthermore, it can only impact some defined mutter keys, like button position, modal dialog behavior… Consequently, it can easily go out of sync with the current mutter schema if that one changes. One of the keys that can't be overridden is, for instance, the default GTK theme. That's why I'm proposing to have a look at migrating the upstream “GNOME Classic” (GNOME 2-like experience under GNOME Shell) session to this system, and thus replace the current GNOME Shell override system, in the near future, once the glib change is merged (probably in the next GNOME cycle).

As yesterday, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official ubuntu desktop team transitions ppa to get a taste of what’s cooking! Happy testing and playing on those sessions!

I hadn't lied to you yesterday: the vanilla GNOME and Ubuntu sessions now look and behave drastically differently! Tomorrow, I'll show how we bring back some desired behavior changes while still impacting only the ubuntu session, leaving the upstream behavior in the GNOME vanilla session.

15 August, 2017 10:57AM

hackergotchi for Qubes


QSB #32: Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230)

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #32: Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #32 in the qubes-secpack:

Learn about the qubes-secpack, including how to obtain, verify, and read it:

View all past QSBs:

View XSA-226 through XSA-230 in the XSA Tracker:

             ---===[ Qubes Security Bulletin #32 ]===---

                           August 15, 2017

   Xen hypervisor and Linux kernel vulnerabilities (XSA-226 through XSA-230)


The Xen Security Team released several Xen Security Advisories today (XSA-226
through XSA-230) related to the grant tables mechanism used to share memory
between domains. The impact of these advisories ranges from data leaks to
system crashes and privilege escalations. See our commentary below for details.

Technical details

Xen Security Advisory 226 [1]:

| Code to handle copy operations on transitive grants has built in retry
| logic, involving a function reinvoking itself with unchanged
| parameters.  Such use assumes that the compiler would also translate
| this to a so called "tail call" when generating machine code.
| Empirically, this is not commonly the case, allowing for theoretically
| unbounded nesting of such function calls.
| A malicious or buggy guest may be able to crash Xen.  Privilege
| escalation and information leaks cannot be ruled out.

Xen Security Advisory 227 [2]:

| When mapping a grant reference, a guest must inform Xen of where it
| would like the grant mapped.  For PV guests, this is done by nominating
| an existing linear address, or an L1 pagetable entry, to be altered.
| Neither of these PV paths check for alignment of the passed parameter.
| The linear address path suitably truncates the linear address when
| calculating the L1 entry to use, but the path which uses a directly
| nominated L1 entry performs no checks.
| This causes Xen to make an incorrectly-aligned update to a pagetable,
| which corrupts both the intended entry and the subsequent entry with
| values which are largely guest controlled.  If the misaligned value
| crosses a page boundary, then an arbitrary other heap page is
| corrupted.
| A PV guest can elevate its privilege to that of the host.

Xen Security Advisory 228 [3]:

| The grant table code in Xen has a bespoke semi-lockfree allocator for
| recording grant mappings ("maptrack" entries).  This allocator has a
| race which allows the free list to be corrupted.
| Specifically: the code for removing an entry from the free list, prior
| to use, assumes (without locking) that if inspecting head item shows
| that it is not the tail, it will continue to not be the tail of the
| list if it is later found to be still the head and removed with
| cmpxchg.  But the entry might have been removed and replaced, with the
| result that it might be the tail by then.  (The invariants for the
| semi-lockfree data structure were never formally documented.)
| Additionally, a stolen entry is put on the free list with an incorrect
| link field, which will very likely corrupt the list.
| A malicious guest administrator can crash the host, and can probably
| escalate their privilege to that of the host.

Xen Security Advisory 229 [4]:

| The block layer in Linux may choose to merge adjacent block IO requests.
| When Linux is running as a Xen guest, the default merging algorithm is
| replaced with a Xen-specific one.  When Linux is running as an x86 PV
| guest, some BIO's are erroneously merged, corrupting the data stream
| to/from the block device.
| This can result in incorrect access to an uncontrolled adjacent frame.
| A buggy or malicious guest can cause Linux to read or write incorrect
| memory when processing a block stream.  This could leak information from
| other guests in the system or from Xen itself, or be used to DoS or
| escalate privilege within the system.

Xen Security Advisory 230 [5]:

| Xen maintains the _GTF_{read,writ}ing bits as appropriate, to inform the
| guest that a grant is in use.  A guest is expected not to modify the
| grant details while it is in use, whereas the guest is free to
| modify/reuse the grant entry when it is not in use.
| Under some circumstances, Xen will clear the status bits too early,
| incorrectly informing the guest that the grant is no longer in use.
| A guest may prematurely believe that a granted frame is safely private
| again, and reuse it in a way which contains sensitive information, while
| the domain on the far end of the grant is still using the grant.

Commentary from the Qubes Security Team

It looks like the most severe of the vulnerabilities published today is
XSA-227, which is another example of a bug in memory management code for
para-virtualized (PV) VMs. As discussed before, in Qubes 4.0 [6], we've decided
to retire the use of PV virtualization mode in favour of fully virtualized VMs,
precisely in order to prevent this class of vulnerabilities from affecting
the security of Qubes OS. We note, however, that Qubes 3.2 uses PV for all VMs
by default.

XSA-228 seems to be another potentially serious vulnerability. While this does
not seem to be limited only to PV virtualization, we should note that it is a
race condition type of bug. Such types of vulnerabilities are typically
significantly more difficult to reliably exploit in practice.

The remaining vulnerabilities (XSA-229 and XSA-230) look even more theoretical.
We should also note that XSA-229 is a vulnerability in the Linux kernel's
implementation of the Xen PV block (disk) backend, not in the Xen hypervisor.
The Qubes architecture partly mitigates potential successful attacks exploiting
this vulnerability thanks to offloading some of the storage backend to USB and
(optionally) other VMs. The main system block backend still runs in dom0,
however, hence the inclusion of this bug in the bulletin.

Compromise Recovery

Starting with Qubes 3.2, we offer Paranoid Backup Restore Mode, which was
designed specifically to aid in the recovery of a (potentially) compromised
Qubes OS system. Thus, if you believe your system might have been compromised
(perhaps because of the bugs discussed in this bulletin), then you should read
and follow the procedure described here:


The specific packages that resolve the problems discussed in this
bulletin are as follows:

  For Qubes 3.2:
  - Xen packages, version 4.6.6-29
  - Kernel packages, version 4.9.35-20

  For Qubes 4.0:
  - Xen packages, version 4.8.1-5
  - Kernel packages, version 4.9.35-20

The packages are to be installed in dom0 via the qubes-dom0-update command or
via the Qubes VM Manager. A system restart will be required afterwards.

If you use Anti Evil Maid, you will need to reseal your secret passphrase to
new PCR values, as PCR18+19 will change due to the new Xen and kernel binaries,
and because of the regenerated initramfs.

These packages will migrate to the current (stable) repository over the next
two weeks after being tested by the community.


See the original Xen Security Advisories.



The Qubes Security Team

15 August, 2017 12:00AM

August 14, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: LXD: Weekly Status #10


Christian Brauner (@brauner) and Stéphane Graber (@stgraber) were attending Debconf17 in Montreal.
We had the opportunity to catch up with colleagues, friends and users.

Stéphane gave a talk about LXD and system containers on Debian, a recording is available:

Senthil Kumaran S of Linaro was also presenting LXC on Debian:

Extended CFP for containers micro-conference

As we still have a number of slots available for the containers micro-conference at Linux Plumbers 2017, we’ve decided to extend the CFP. All current proposals have been approved.

You can send a proposal here:

Upcoming conferences

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




  • Nothing to report this week

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • LXD 2.16 was backported to Ubuntu 16.04 LTS and 17.04 (in the backports pocket)
  • LXC 2.0.8, LXCFS 2.0.7 and LXD 2.0.10 have also been backported to Ubuntu 14.04 LTS.


  • Removed CRIU support from the snap as current CRIU doesn’t work with snap confinement.
  • Fixed a number of issues with /run inside the snap environment missing files needed for DNS resolution to properly function.
  • Fixed support for nesting, allowing the LXD snap to be installed inside an unprivileged LXD container.
  • Added libacl as required by the recently introduced ACL shifting code.
  • Changed the LXD daemon directory to be 0755 rather than 0711, having it now be the same as the .deb package.

14 August, 2017 04:45PM

Cumulus Linux

4 tips for managing big data from IoT in your network

Big Data and the Internet of Things. The two seem to go hand in hand, even if there are some important differences between them. As IoT becomes a greater reality, it’s important that your network devops team is ready for its huge impact on your systems and networks. In this post, we’ll cover the basics, like the difference between big data and the Internet of Things, and then we’ll go into more detail about how to ensure your network is managing big data from IoT effectively.

The Internet of Things: a hot topic

The Internet of Things has been a hot topic in recent years. Little wonder, since its potential is increasing daily. From Bluetooth accessible devices such as smart appliances and smart homes, to wearable technology, to smart cars, to energy plants and wind turbines, smart technology is growing fast. Along with this technology is the need to support these devices both in network and storage. By 2025 McKinsey expects IoT will generate $11.1 Trillion annually. Companies are rushing to find ways to capitalize on IoT and the big data it will generate.

Differences between the Internet of Things and big data

Big Data is an interesting concept among engineers. It has many definitions, but is generally defined as a very large data set that can be analyzed and used to spot trends. The data is collected through different means such as manually by human beings, computational analysis, automatic data gathering and devices that fall into the realm of the Internet of Things. The Internet of Things, or IoT, deals with smart objects providing data in real time. So while IoT data may be big data, big data is not necessarily encompassed by IoT.

The network is crucial: managing big data from IoT

Due to the large amounts of data being aggregated by IoT technology, the management of big data in the data center has become more critical as the network is responsible for processing and transferring data rapidly. Because of this, it’s imperative to have a network infrastructure that is flexible, agile and efficient. Big data processing requires a scalable, fault-tolerant, distributed storage system to work closely with MapReduce on commodity servers.

Many organizations look to clustering applications, like Hadoop, to manage their data efficiently. Manageability has become a key expectation from Hadoop users. Hadoop has made great advances with Ambari, which enables the automation of initial installation, rolling upgrades without service disruption, high availability and disaster recovery — all of which are critical to efficient IT operations with data management.

When you’re thinking about designing a network that can support data clustering applications and process big data from the Internet of Things, it’s important to consider how this data will grow and amplify as your business — and the amount of data you can gather — evolves. Here are a few considerations for optimal big data processing:

  1. Consider a non-blocking, multi-tier, scale-out IP Clos fabric design. Ensuring you have an elegant network structure to create redundancies and remove bottlenecks will lessen your risk of slower processing, outages and performance issues.
  2. Ensure you have a high-bandwidth infrastructure for rapid processing of large data so that as you are able to collect more data, your network infrastructure can grow with it.
  3. Be aware of bottlenecking. With hyperconverged infrastructure, there is increased east-west traffic. Open networking is optimal for minimizing bottlenecks.
  4. Thoroughly think through and diagram your architecture design with Hadoop before you deploy a refresh or new private cloud environment. Hadoop works seamlessly with Cumulus Linux, and we have put together a comprehensive design guide to get you started.

Still interested in finding out more about big data? Check out our solutions page to learn about how Cumulus Linux works with big data.


The post 4 tips for managing big data from IoT in your network appeared first on Cumulus Networks Blog.

14 August, 2017 03:49PM by Kelsey Havens

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Vanilla Framework has a new website

We’re happy to announce the long overdue Vanilla Framework website, where you can find all the relevant links and resources needed to start using Vanilla.


When you visit the new site, you will also see the new Vanilla logo, which follows the same visual principles as other logos within the Ubuntu family, like Juju, MAAS and Landscape.

We wanted to make sure that the Vanilla site showcased what you can do with the framework, so we kept it nice and clean, using only Vanilla patterns.

We plan on extending it to include more information about Vanilla and how it works, how to prototype using Vanilla, and the design principles behind it, so stay tuned.

And remember you can follow Vanilla on Twitter, ask questions on Slack and file new pattern proposals and issues on GitHub.

14 August, 2017 02:35PM

Didier Roche: Ubuntu GNOME Shell in Artful: Day 1

And here we go! I’m going to report here day by day the progress we are making in Ubuntu Artful after taking our decisions regarding our default session experience as discussed in my last blog post.

Day 1: differentiating ubuntu and GNOME vanilla session

Let’s start with the ground work and differentiate our two “ubuntu” and “gnome” sessions. Until today, they were pixel by pixel identical.

We are now migrating some GNOME Session changes, as well as some GNOME Shell ones, into Artful.

How did we implement that? A little bit before GUADEC, while doing our regular running practice with Mathieu1, he mentioned to me the notion of GNOME Shell modes that were used to create the classic session.

After a little bit of digging, this sounded like the perfect mechanism to use to customize our default session, while still shipping an upstream vanilla GNOME session as well.

So, after creating the corresponding environment variables and files to differentiate the sessions: we now have an “ubuntu” mode, which is enabled in our default session.
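For reference, GNOME Shell session modes live as small JSON files under /usr/share/gnome-shell/modes/; the fragment below mimics the shape of the upstream classic mode definition, but the exact contents of the “ubuntu” mode file are an assumption on my part:

```json
{
    "parentMode": "user",
    "stylesheetName": "ubuntu.css"
}
```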

This one refers to an “ubuntu.css” file. To keep it in sync with potential changes in GNOME Shell, the latter is created at build time from the vanilla “gnome-shell.css”, based on some regular-expression rules.
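A minimal sketch of that build-time derivation (the real packaging rules and substitutions are more involved; the file contents and the single regex here are purely illustrative):

```shell
#!/bin/sh
# Illustrative only: derive a downstream stylesheet from the vanilla one by
# applying a regular-expression substitution at build time.
printf 'stage { font-family: Cantarell, sans-serif; }\n' > gnome-shell.css
sed -e 's/Cantarell/Ubuntu/g' gnome-shell.css > ubuntu.css
cat ubuntu.css
```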

So, what changes can you expect with those? Not that much for now in our default session, it should almost be a no-change operation. However, this is the base work to be able to do further changes we’ll highlight tomorrow.

After upgrading, the default session will look like this:

Can you spot the only difference in that screenshot compared to today's session?

Looks very like the current ubuntu session from today's iso, doesn't it? However, if you switch to the GNOME session, after apt install gnome-session, you will now see some subtle differences:

  • The Cantarell font is installed and used inside that GNOME session instead of the Ubuntu one. The per-session mode enabled us to drop our distro-wide patch to enforce the Ubuntu font in GNOME Shell.
  • Symbolic icons are only used in the GNOME vanilla session. Indeed, we decided that consistency was important for our default experience. Most of our Ubuntu mono icons don't have symbolic flavors. The default ones look like the Adwaita icon theme and are visually quite different from the Ubuntu ones, which can puzzle our users (“Did I launch the application I wanted to? Does this application menu correspond to the currently focused application?”). But not having symbolic icons for our default theme doesn't justify it by itself; the more important reason was that most third-party applications, like Firefox, Thunderbird, any IDE, and a lot of traditional applications, don't ship a symbolic icon either, for branding reasons. Consequently, they appear fully colorized. This inconsistency from one application to another isn't visually pleasant, and so we decided, for 17.10, to disable symbolic icons in the ubuntu session.

The vanilla session is thus showing up those 2 differences for now, as you can see here:

The new GNOME Vanilla session, looking still quite like the ubuntu one today

Finally, we needed to enforce that gdm uses our ubuntu.css theme, as this is a distribution-wide component, and we'll thus give everyone the “ubuntu look and feel” experience there.

If you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Some people might notice that we didn't use the GNOME Shell per-mode overrides (for some mutter behavior like button placement), but we will come up with a more scalable and upstream-friendly solution that we'll present tomorrow.

This is just the beginning! The differences are quite subtle right now. Tomorrow, the vanilla GNOME session and the ubuntu one will look drastically different. I heard even that some window buttons may change their position…

  1. Indeed, GNOME and Ubuntu contributors can have fun, even sometimes having drinks and dinners together. ;)

14 August, 2017 01:42PM

Costales: Folder Color now has emblem file support

New release 0.0.85 of folder color! Now you can select a file and set an emblem! Enjoy it! :)

File emblem

How to install?

14 August, 2017 09:18AM by Marcos Costales (

Jono Bacon: Sous Vide For Nerds (With Limited Cooking Experience)

Something a little different to share today, but important if you are (a) not especially gifted/interested in cooking, (b) love great food, and (c) are a bit of a nerd. Sous vide is the technique, and the Joule is the solution.

Sous vide is a method of cooking that involves putting food in a bag and submerging it in a pan of water that is kept at a regulated temperature. You then essentially slow-cook the food, but because the water surrounding the food is held at such a consistent temperature, the food cooks evenly.

The result of this is phenomenal food. While I am still fairly new to sous vide, everything we have tried has been a significant improvement compared to regular methods (e.g. grilling).

As an example, chicken is notoriously difficult to cook well. When I sous vide the chicken and then sear it on the grill (to get some delicious char), you get incredibly tender and juicy chicken with the ideal grilled texture and flavor.

Steak is phenomenal too. I use the same technique: sous vide it to a medium-rare doneness and then sear it at high heat on the grill. Perfectly cooked steak.

A particular surprise here are eggs. When you sous vide an egg, the yolk texture is undeniably better. It takes on an almost custard like texture and brings the flavor to life.

So, sous vide is an unquestionably fantastic method of cooking. The big question is, particularly for the non-cooks among you, is it worth it?

Sous vide is great for busy (or lazy) people

Part of why I am loving sous vide is that it matches the formula I want to experience in cooking:

Easy + Low Effort + Low Risk + Minimal Cleanup = Great Food

Here’s the breakdown:

  • Easy – you can't really screw it up. Put the food in a bag, set the right temperature, come back after a given period of time, and your food is perfectly cooked.
  • + Low Effort – it takes a few minutes to start the cooking process and you can do other things while it cooks. You never need to babysit it.
  • + Low Risk – with sous vide you know it is cooked evenly. As an example, with chicken it is common to get a cooked outer layer (from grilling) while the middle is still undercooked, so people overcook it to be safe. With sous vide you just have to ensure you cook it to a safe temperature and it is consistently cooked.
  • + Minimal Cleanup – you put the food in a bag, cook it, and then throw the bag away. The only pan you use is a bowl with water in it (about as easy to clean as possible). Perfect!

Thus, the result is great food and minimal fuss.

One other benefit is reheating for later eating.

As an example, right now I am ‘sous vide’ing’ (?) a pan full of eggs. These will be for breakfast every day this week. When the eggs are done, we will pop them in the fridge to keep. To reheat, we simply submerge the eggs in boiling water and it raises the internal temperature back up. The result is the incredible sous vide texture and consistency, but it takes merely (a) boiling the kettle, (b) submerging the eggs, and (c) waiting a little bit to get the benefits of sous vide later.

The gadget

This is where the nerdy bit comes in, but it isn’t all that nerdy.

For Christmas, Erica and I got a Joule. Put simply, it is white stick that plugs into the wall and connects to your phone via bluetooth.

You fill a pan with water, pop the Joule in, and search for the food you want to cook. The app will then recommend the right temperature and cooking time. When you set the time, the Joule turns on and starts circulating the water in the pan until it reaches the target temperature.

Next, you put the food in the bag and the app starts the timer. When the timer is done your phone gets notified, you pull the food out and bingo!

The library of food in the app is enormous and even helps with how to prepare the food (e.g. any recommended seasoning). If though you want to ignore the guidance and just set a temperature and cooking time, then you can do that too.

When you are done cooking, throw the bag you cooked the food in away, empty the water out of the pan, and put the Joule back in the cupboard. Job done.

Now, to be clear, there are many other sous vide gadgets, but the only one I have tried is the Joule, and it has been brilliant.

So, that’s it: I just wanted to share this recent discovery. Give it a try, I think you will dig it as much as I do.

The post Sous Vide For Nerds (With Limited Cooking Experience) appeared first on Jono Bacon.

14 August, 2017 05:52AM

August 12, 2017

Sebastian Schauenburg: IPv6 Unique Local Address

Since IPv6 is happening, we should be prepared. During the deployment of a new access point I was in need of a Unique Local Address. IPv6 Unique Local Addresses are basically comparable to the IPv4 private address ranges.

Some sites refer to, but that is offline nowadays. Others refer to generator, which is open source and still available, yay.

But wait, there must be an offline method for this right? There is! subnetcalc (nowadays available here apparently) to the rescue:

subnetcalc fd00:: 64 -uniquelocal | grep ^Network

Profit :-)
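For the curious, the RFC 4193 algorithm those tools implement is simple enough to sketch in plain shell. This is only an illustration of the idea, not subnetcalc's actual implementation, and the MAC address below is a placeholder:

```shell
# RFC 4193: hash the current time together with an interface
# identifier, keep the low 40 bits of the digest, and prepend the
# fd00::/8 prefix to form a /48 Unique Local prefix.
mac="52:54:00:12:34:56"                      # placeholder; use your NIC's MAC
digest=$(printf '%s%s' "$(date +%s%N)" "${mac//:/}" | sha1sum)
bits=$(printf '%s' "$digest" | cut -c31-40)  # last 10 hex digits = 40 bits
printf 'fd%s:%s:%s::/48\n' "${bits:0:2}" "${bits:2:4}" "${bits:6:4}"
```

Every run yields a different prefix, which is exactly the point: ULA prefixes are meant to be statistically unique so that merging two networks later is unlikely to collide.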

12 August, 2017 09:00AM

August 11, 2017

Ubuntu Insights: Ubuntu Desktop Weekly Update: August 11, 2017


The GNOME conference happened last week with good representation from Ubuntu. The spirit was good and the discussions constructive. Decisions were made, details can be read on–guadec-2017-and-plans-for-gnome-shell-migration/


We’re preparing to make the changes described above in the coming weeks, which means the GNOME Shell Ubuntu session will transition to this design in the next few days. Didier will be posting a series of blog posts next week on how this all works as the changes land. The vanilla upstream GNOME session will also emerge from this work. We’ll link to the posts in next week’s newsletter, but keep an eye on social media for up-to-date information.

We’ve resurrected the “power off” option when the power button is pressed.  This will appear in GNOME Control Center 3.25.90.

Video, Audio, Bluetooth, Networking

You might have seen this screenshot earlier in the week:

We’re testing some patches to Chromium 60 in Artful to enable video acceleration and we’re seeing roughly a 50% saving in CPU overhead when using VA API.  In the screenshot above playing the video without acceleration is on the left and playing with acceleration is on the right.  The CPU is Haswell.  There are still more bugs to fix, but we’re making progress.

In Pulse Audio we’ve dropped some more patches for Android support (from Ubuntu Touch), bringing us more in line with upstream.  This will make maintenance easier and should reduce the chance of bugs cropping up from our patches.

Our patches to add enabling and disabling of the Network Connectivity Checker are in review upstream.  This will eventually add a toggle switch in the privacy settings of Control Center to allow you to turn on/off the connectivity checker.  We should be able to distro-patch these into Ubuntu soon before they appear upstream, and then drop the patches once they are available upstream.

We’re including the Rhythmbox “Alternate Toolbar” by default in 17.10, which brings a tidier user interface.


  • Updated Chromium stable to 60.0.3112.78, pending validation.  The next Chromium stable update is 60.0.3112.90, already lined up in a PPA.  Updated Chromium beta to 61.0.3163.31, Chromium dev to 62.0.3175.4.
  • Updating LibreOffice Snap to 5.4.0.
  • Merged upstream Ghostscript version 9.21 from Debian into Ubuntu.
  • Synced newest versions from Debian for hplip, cups-bjnp, ippusbxd.  System-config-printer updated to a new snapshot from upstream GIT.
  • GNOME got a stack of updates to 3.25.90 (evolution, games, font viewer, online accounts, tweak tool, map, log, disk, calculator, cheese, todo)


In The News

  • We’re planning a Fit And Finish hackfest in London at the end of August to find and fix all the niggly little bugs and theming issues. If you’ve got skills to offer and would like to get involved please see Popey’s blog post.
  • Softpedia also covers this topic here.
  • Linux Action News discuss the changes to the default session in 17.10.
  • 16.04.3 was released last week.  OMG covers the release here.
  • OMG also have articles on the dock and the Rhythmbox plugin
  • Softpedia discuss Didier’s blog post about the changes coming to the default session

11 August, 2017 05:15PM

Ubuntu Insights: How to sign things for Secure Boot

Secure Boot signing

The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, not just the firmware and bootloader require signatures, the kernel and modules too. People don’t generally change firmware or bootloader all that much, but what of rebuilding a kernel or adding extra modules provided by hardware manufacturers?

The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (but we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and the kernel modules, which can be done with its own set of tools.

But first, more on the trust chain used for Secure Boot.

Certificates in shim

To begin signing things for UEFI Secure Boot, you need to create an X509 certificate that can be imported in firmware; either directly through the manufacturer firmware, or more easily, by way of shim.

Creating a certificate for use in UEFI Secure Boot is relatively simple; openssl can do it with a few commands. Now, we need to create an SSL certificate for module signing…

First, let’s create some config to let openssl know what we want to create (let’s call it ‘openssl.cnf’):

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = CA
stateOrProvinceName     = Quebec
localityName            = Montreal
0.organizationName      = cyphermox
commonName              = Secure Boot Signing
emailAddress            =

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,,
nsComment               = "OpenSSL Generated Certificate"

Either update the values under “[ req_distinguished_name ]” or get rid of that section altogether (along with the “distinguished_name” field) and remove the “prompt” field. Then openssl would ask you for the values you want to set for the certificate identification.

The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure “” is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate.

Then, we can start the fun part: creating the private and public keys.

openssl req -config ./openssl.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"

This command will create both the private and public part of the certificate to sign things. You need both files to sign; and just the public part (MOK.der) to enroll the key in shim.
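Before enrolling it, you may want to double-check what actually ended up in the certificate. Here is a small helper of my own (not part of the official tooling) that prints the Extended Key Usage section of a DER certificate:

```shell
# Dump a DER certificate as text and show its Extended Key Usage,
# so you can verify the module-signing OID was included as intended.
show_eku() {
    openssl x509 -in "$1" -inform DER -noout -text \
        | grep -A1 'X509v3 Extended Key Usage'
}
# Usage: show_eku MOK.der
```

Run `show_eku MOK.der` and check that the OID you put in extendedKeyUsage is listed before you bother rebooting into MokManager.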

Enrolling the key

Now, let’s enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don’t include that ‘’ OID discussed earlier).

To enroll a key, use the mokutil command:

sudo mokutil --import MOK.der

Follow the prompts to enter a password that will be used to make sure you really do want to enroll the key in a minute.

Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called “MokManager”). Use that screen to select “Enroll MOK” and follow the menus to finish the enrolling process. You can also look at some of the properties of the key you’re trying to add, just to make sure it’s indeed the right one, using “View key”. MokManager will ask you for the password we typed in earlier when running mokutil, then save the key, and we’ll reboot again.

Let’s sign things

Before we sign, let’s make sure the key we added really is seen by the kernel. To do this, we can go look at /proc/keys:

$ sudo cat /proc/keys
0020f22a I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
0022a089 I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
003462c9 I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
00709f1c I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
00f488cc I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
1dcb85e2 I------     1 perm 1f030000     0     0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []

Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.

To sign kernel modules, we can use the kmodsign command:

kmodsign sha512 MOK.priv MOK.der module.ko

module.ko should be the file name of the kernel module file you want to sign. The signature will be appended to it by kmodsign, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see ‘kmodsign --help’).

You can validate that the module is signed by checking that it includes the string ‘~Module signature appended~’:

$ hexdump -Cv module.ko | tail -n 5
00002c20  10 14 08 cd eb 67 a8 3d  ac 82 e1 1d 46 b5 5c 91  |.....g.=....F.\.|
00002c30  9c cb 47 f7 c9 77 00 00  02 00 00 00 00 00 00 00  |..G..w..........|
00002c40  02 9e 7e 4d 6f 64 75 6c  65 20 73 69 67 6e 61 74  |..~Module signat|
00002c50  75 72 65 20 61 70 70 65  6e 64 65 64 7e 0a        |ure appended~.|

You can also use hexdump this way to check that the signing key is the one you created.
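If you would rather script that check than eyeball a hexdump, the appended signature always ends with the same fixed trailer, so a tiny helper (my own convenience function, not part of kmodsign) is enough:

```shell
# kmodsign, like the kernel's own sign-file, terminates the appended
# signature with the fixed 28-byte string "~Module signature appended~\n";
# its presence at the very end of the file marks the module as signed.
is_signed() {
    [ -f "$1" ] && tail -c 28 "$1" | grep -q '~Module signature appended~'
}
# Usage: is_signed module.ko && echo signed || echo unsigned
```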

What about kernels and bootloaders?

To sign a custom kernel or any other EFI binary you want to have loaded by shim, you’ll need to use a different command: sbsign. Unfortunately, we’ll need the certificate in a different format in this case.

Let’s convert the certificate we created earlier into PEM:

openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

Now, we can use this to sign our EFI binary:

sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed

As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.

Doing signatures outside shim

If you don’t want to use shim to handle keys (but I do recommend that you do use it), you will need to create different certificates; one of them being the PK (Platform Key) for the system, which you can enroll in firmware directly via KeyTool or some firmware tool provided with your system. I will not elaborate on the steps to enroll the keys in firmware as they tend to vary from system to system, but the main idea is to put the system in Secure Boot “Setup Mode”; run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys — first by installing the KEK and DB keys, and finishing with the PK. These files need to be available from some FAT partition.

I do have this script to generate the right certificates and files, which I can share (it is itself copied from somewhere I no longer remember):

echo -n "Enter a Common Name to embed in the keys: "
read NAME
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \
        -out PK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \
        -out KEK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \
        -out DB.crt -days 3650 -nodes -sha256
openssl x509 -in PK.crt -out PK.cer -outform DER
openssl x509 -in KEK.crt -out KEK.cer -outform DER
openssl x509 -in DB.crt -out DB.cer -outform DER
GUID=`python3 -c 'import uuid; print(uuid.uuid1())'`
echo $GUID > myGUID.txt
cert-to-efi-sig-list -g $GUID PK.crt PK.esl
cert-to-efi-sig-list -g $GUID KEK.crt KEK.esl
cert-to-efi-sig-list -g $GUID DB.crt DB.esl
rm -f noPK.esl
touch noPK.esl
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK noPK.esl noPK.auth
chmod 0600 *.key
echo ""
echo ""
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"
echo "flash drive or to your EFI System Partition (ESP)."
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."
echo ""

The same logic as earlier applies: sign things using sbsign or kmodsign as required (use the .crt files with sbsign, and .cer files with kmodsign); and as long as the keys are properly enrolled in the firmware or in shim, they will be successfully loaded.

What’s coming up for Secure Boot in Ubuntu

Signing things is complex — you need to create SSL certificates and enroll them in firmware or shim… You need to have a fair amount of prior knowledge of how Secure Boot works, and of what commands to use. It’s rather obvious that this isn’t within the reach of everybody, and makes for a somewhat poor experience in the first place. For that reason, we’re working on making the key creation, enrollment and signatures easier when installing DKMS modules.

update-secureboot-policy should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.

This article was originally posted on Mathieu Trudel-Lapierre’s blog

11 August, 2017 02:06PM

Valorie Zimmerman: Akademy; at 20, KDE reaches out

Some of the talks, initiatives, conversations, and workshops that inspired me at Akademy. Thanks so much to the KDE e.V. for sponsoring me.

A. Wikidata  - We have some work to do to get our data automatically uploaded into Wikidata. However, doing so will help us keep our Wikipedia pages up-to-date.

B. Looking for Love, Paul Brown's talk and workshop about increasing your audience's appreciation for your project. Many of the top Google results for our pages don't address what people are looking for:
  1. What can your project do for me? 
  2. What does your application or library do?
Paul highlighted one good example: that crucial information is above the fold, with no scrolling. Attractive, and exactly the approach we should be taking in all our public-facing pages.

My offer to all projects: I will help with the text on any of your pages. This is a serious offer! Just ask in IRC or send an email to valorie at kde dot org for editing.

C. The Enterprise list for people with large KDE deployments, an under-used resource for those supporting our users in huge numbers, in schools, governments and companies. If you know of anyone doing this job who is not on the list, hand along the link to them.

D. Goalposts for KDE - I was not at this "Luminaries" Kabal Proposals BoF, but I read the notes. I'll be happy to see this idea develop on the Community list.

E. UserBase revival -- This effort is timely, and brings the list of things I'm excited about full circle. For many teams, UserBase pages are their website. We need to clean up and polish UserBase! Join us in #kde-wiki on IRC or the Telegram channel, where we'll actually be tracking and doing the work. I'm so thankful that Claus is taking the leadership on this.

If you are a project leader and want help buffing your UserBase pages, we can help!

In addition to all of the above ideas, there is still another idea floating around that needs more development. Each of our application sites, at least, should have a quality metric box, listing things like code testing, translation/internationalization percentage, number of contributors, and maybe more. These should be picked up automatically, not generated by hand. No other major projects seem to have this, so we should lead. When people are looking for what applications they want to run on their computers, they should choose by more than color or other incidentals. We work so much on quality -- we should lead with it. There were many informal discussions about this but no concrete proposals yet.

11 August, 2017 08:08AM by Valorie Zimmerman

Stephan Ruegamer: Looking for a Release Engineer, Berlin

Release Engineer (Berlin, Germany), Sony Interactive Entertainment

Do you want to be part of an engineering team that is building a world class cloud platform that scales to millions of users? Are you excited to dive into new projects, have an enthusiasm for automation, and enjoy working in a strong collaborative culture? If so, join us!


Design and development of Release Engineering projects and tools to aid the release pipeline. Work in cross-functional development teams to build and deploy new software systems. Work with teams and project managers to deliver quality software within schedule constraints.


Demonstrable knowledge of distributed architectures, OOP and Python. BS or a minimum of 5 years of relevant work experience.

Skills & Knowledge

  • Expert level knowledge of Unix/Linux
  • Advanced skills in Python
  • Kubernetes experience is a huge plus
  • Programming best practices including unit testing, integration testing, static analysis, and code documentation
  • Familiarity with build systems
  • Familiarity with continuous integration and delivery

Additional Attributes

  • Contributor to open source projects
  • Version control systems (preferably Git)
  • Gamer is a plus
  • Enjoys working in a fast-paced environment
  • Strong communication skills

Interested? Use this link

11 August, 2017 07:16AM

David Tomaschik: Review of HackerBoxes 0021: Hacker Tracker

HackerBoxes is a monthly subscription service for hardware hackers and makers. I hadn’t heard of it until I was researching DEF CON 25 badges, for which they had a box, at which point I was amazed I had missed it. They were handing out coupons at DEF CON and BSidesLV for 10% off your first box, so I decided to give it a try.

Hacker Tracker

First thing I noticed upon opening the box was that there’s no fanfare in the packaging or design of the shipping. You get a plain white box shipped USPS with all of the contents just inside. I can’t decide if I’m happy they’re not wasting material on extra packaging, or disappointed they didn’t do more to make it feel exciting. If you look at their website, they show all the past boxes with a black “Hacker Boxes” branded box, so I don’t know if this is a change, or the pictures on the website are misleading, or the influx of new members from hacker summer camp has resulted in a box shortage.

I unpacked the box quickly to find the following:

  • Arduino Nano Clone
  • Jumper Wires
  • Small breadboard
  • MicroSD Card (16 GB)
  • USB MicroSD Reader
  • MicroSD Breakout Board
  • u-blox NEO 6M GPS module
  • Magnetometer breakout
  • PCB Ruler
  • MicroUSB Cable
  • Hackerboxes Sticker
  • Pinout card with reminder of instructions (aka h4x0r sk00l)

If you’ve been trying to do the math in your head, I’ll save you the trouble. In quantity 1, these parts can be had from AliExpress for about $30. If you’re feeling impatient, you can do it on Amazon for about $50. Of course, the value of the parts alone isn’t the whole story: this is a curated set of components that builds a project, and the directions they provide on getting started are part of the product. (I just know everyone wanted to know the cash value.)

Compared to some of their historical boxes, I’m a little underwhelmed. Many of their boxes look like something where I could do many things with the kit or teach hardware concepts: for example, “0018: Circuit Circus” is clearly an effort to teach analog circuits. “0015 - Connect Everything” lets you connect everything to WiFi via the ESP32. Even when not multi-purpose, previous kits have included reusable tools like a USB borescope or a Utili-Key. Many seem to have an exclusive “fun” item, like a patch or keychain, in addition to the obligatory HackerBoxes sticker.

In contrast, the “Hacker Tracker” box feels like a unitasker: receive GPS/magnetometer readings and log them to a MicroSD card. Furthermore, there’s not much hardware education involved: all of the components connect directly via jumper wires to the provided Arduino Nano clone, so other than “connect the right wire”, there’s no electronics skillset to speak of. On the software side, while there are steps along the way showing how each component is used, a fully-functional Arduino sketch is provided, so you don’t have to know any programming to get a functional GPS logger.

Overall, I feel like this kit is essentially “paint-by-numbers”, which can either be great or disappointing. If you’re introducing a teenager to electronics and programming, a “paint-by-numbers” approach is probably a great start. Likewise, if this is your first foray into electronics or Arduino, you should have no trouble following along. On the other hand, if you’re more experienced and just looking for inspiration of endless possibilities, I feel like this kit has fallen short.

There’s one other gripe I have with this kit: there are headers on the Arduino Nano clone and the MicroSD breakout, but the headers are not soldered on the accelerometer or GPS module. At least if you’re going to make a simple kit, make it so I don’t have to clean off the soldering station, okay?

So, am I keeping my subscription? For the moment, yes, at least for another month. Like I said, I’ve been impressed by past kits, so this might just be an off month for what I’m looking for. I don’t think this kit is bad, and I’m not disappointed, just not as excited as I’d hoped to be. I might have to give Adabox a try though.

As for the subscription service itself: it looks like their web interface makes it easy to skip a month (maybe you’re travelling and won’t have time?) or cancel entirely. I’m not advocating cancelling, but I absolutely hate when subscription services make you contact customer service to cancel (just so they can try to talk you into staying longer, like AOL back in the 90s). The site has a nice clean feel and works well.

If anyone from HackerBoxes is reading this, I’ll consolidate my suggestions to you in a few points:

  • Hook us up with patches & more stickers! Especially a sticker that won’t take 1/4 of a laptop. (I love the sticker from #0015 and the patch from #0018.)
  • Don’t have the only soldering be two tiny header strips. Getting out the soldering iron just to do a couple of SPI connections is a bit of a drag. Either do a PCB like #0019, #0020, etc., or provide modules with headers in place. (If it wasn’t for the soldering, you could take this kit on vacation and play with just the kit and a laptop!)
  • Instructables with more information on why you’re doing what you’re doing would be nice. Mentioning that there’s a level shifter on the MicroSD breakout because MicroSD cards run at 3.3V, and not the 5V from an Arduino Nano, for example.
  • Including a part that requires a warning about you (the experts) having had a lot of problems with it in an introductory kit seems like a poor choice. A customer with flaky behavior won’t know if it’s their setup, their code, or the part.

Overall, I’m excited to see so much going into STEM education and the maker movement, and I’m happy that it’s still growing. I want to thank HackerBoxes for being a part of that and wish them success even if I don’t turn out to be their ideal demographic.

11 August, 2017 07:00AM

hackergotchi for Deepin


Deepin Package Manager V1.0 Is Released——Intelligent Detection, One Click to Install

Deepin application family welcomes its new member – Deepin Package Manager V1.0. It is a management tool for deb packages, developed so that users can easily install customized applications that are not categorized in Deepin Store. With an easy-to-use interface as well as functions like batch installation, version detection and auto-completion of dependencies, you can quickly get the software installed on deepin once you get the right deb package.

Neat Design with One Click to Install

All necessary information, like package name, version number and description, is clearly shown. Users just need to follow the prompt to finish installation ...Read more

11 August, 2017 01:47AM by jingle

August 10, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Clark: Another successful Akademy! Neon team BoF, snappy and more.

Akademy 2017

Akademy 2017

This year's Akademy, held in Almería, Spain, was a great success.
We (the neon team) have decided to move to the snappy container format for KDE applications in KDE Neon.
This will begin in the dev/unstable builds while we sort out the kinks and heavily test it. We still have some roadblocks to overcome, but hope to work with the snappy team to resolve them.
We have also begun the transition of moving Plasma Mobile CI over to the Neon CI. So between mobile (arm), snap and debian packaging, we will be very busy!
I attended several BoFs that brought great new ideas for the KDE community.
I was able to chat with Kubuntu release manager Valorie Zimmerman and hope to work more closely with the Kubuntu and Debian teams to reduce duplicated work. I feel this is very important for all teams involved.

We had so many great talks, see some here:

Akademy is a perfect venue for KDE contributors to work face to face to tackle issues and create new ideas.
Please consider donating:

As usual, it was wonderful to see my KDE family again! See you all next year in Vienna!

10 August, 2017 07:57PM

Ubuntu Insights: Security Team Weekly Summary: August 10, 2017

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at:

During the last week, the Ubuntu Security team:

  • Triaged 242 public security vulnerability reports, retaining the 57 that applied to Ubuntu.
  • Published 13 Ubuntu Security Notices which fixed 29 security issues (CVEs) across 15 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Updates to Community Supported Packages

  • James Lu (tacocat) provided debdiffs for xenial-zesty for gnome-exe-thumbnailer (LP: #651610)

  • Simon Quigley (tsimonq2) provided debdiffs for trusty-xenial for lxterminal (LP: #1690416)

  • Simon Quigley (tsimonq2) provided debdiffs for trusty-zesty for pcmanfm (LP: #1708542)

  • Otto Kekäläinen (otto) provided debdiffs for trusty for mariadb-5.5 (LP: #1705944)

  • Otto Kekäläinen (otto) provided debdiffs for xenial for mariadb-10.0 (LP: #1698689)

  • Otto Kekäläinen (otto) provided debdiffs for zesty for mariadb-10.1 (LP: #1698689)

  • Roger Light (ral) provided debdiffs for trusty-zesty for mosquitto (LP: #1700490)


What the Security Team is Reading This Week

Weekly Meeting

More Info

10 August, 2017 07:49PM

Cumulus Linux

Announcing the ultimate BGP how-to guide

The Border Gateway Protocol has become the most popular routing protocol in the data center. But for all its popularity, some consider BGP to be too complicated. Despite its maturity and sophistication, many network operators and data center administrators won’t go anywhere near it. If only there were an equally sophisticated BGP guide to solve this problem…

Fortunately, your prayers to the data center gods have been answered. We at Cumulus are incredibly proud of our Chief Scientist, Dinesh Dutt, who has added the title of “published author” to his impressive repertoire with the publication of the ebook BGP in the Data Center. This handy BGP how-to guide, published by O’Reilly Media, is the ideal companion for anyone looking to better understand the Border Gateway Protocol and its place in the data center. It’s perfect for any network operators and engineers that want to become masters of this protocol, regardless of their base level of familiarity with BGP.

In this ebook, Dinesh covers BGP operations and enhancements to simplify its use in order to help readers truly appreciate the elegance and ultimate simplicity of BGP.

This guide covers topics such as:

  • How the BGP theory of operations works for the modern data center
  • Enhancements to BGP configuration provided by open networking
  • BGP best practices
  • How to manage changes
  • Troubleshooting routing problems
  • And much more!

While some network operators and engineers may disregard these benefits and argue that BGP is excessively complicated and difficult to understand, BGP in the Data Center makes learning this protocol accessible and intuitive. Dinesh Dutt breaks it down into easily digestible sections focusing on the theory, design and operationalization of BGP in data center networks — including automation.

But please, don’t just take our word for it! Download your own copy of BGP in the Data Center here and become a master of the most popular routing protocol.

The post Announcing the ultimate BGP how-to guide appeared first on Cumulus Networks Blog.

10 August, 2017 05:59PM by Madison Emery

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Canonical Distribution of Kubernetes: Dev Summary 2017 (Week 32)

August 4th concluded our most recent development sprint on the Canonical Distribution of Kubernetes (CDK). Here are some highlights:

Testing & Planning

  • CDK offline testing plan. We wrote up a plan for testing CDK in an environment where there is no (or severely limited) egress internet access. The end goal is to ensure that CDK can be deployed in this scenario, and create docs describing how to do it. Initial testing begins in the current sprint.
  • etcd2-to-etcd3 migration plan. We wrote up a plan for upgrading existing CDK clusters from etcd2 to etcd3 if desired, and making etcd3 the new default. While the plan is in place, we don’t have any implementation work planned in the current sprint.
  • Canal. We wrote up a design doc for implementing Canal (Calico-on-Flannel) for CDK. Implementation of the Canal charm was scheduled for the current sprint and is currently in code review.
  • We added a Jenkins job to test our stable charms against the latest upstream patch release. A passing build here tells us that we can release the latest binaries for CDK without breaking currently-deployed clusters.


  • Completed RBAC proof-of-concept work. At this point we know how to turn RBAC on/off via charm config, and what changes are needed in CDK to make this work. In the coming weeks we’ll be working on moving from proof-of-concept to production-ready.
  • s390x support. We started by snapping the major cluster components. There are some docker images that don’t have s390x builds, namely nginx-ingress-controller, heapster-grafana, and addon-resizer. We’ll be following up on these in the current sprint.
  • Calico. We updated the Calico CNI charm to use the latest Calico binaries, and added the Calico charm and bundles to CI.

If you’d like to follow along more closely with CDK development, you can do so in the following places:

Until next time!

This was originally featured on Tim Van Steenburgh’s blog

10 August, 2017 05:33PM

Ubuntu Insights: Top snaps in July: GIMP, Brackets, Gogland, Openstack and more

If you’re thinking July = holidays… think again! This month’s pick of the top snaps is all about productivity. Graphic design tools, web code editor, Go IDE, JSON configuration, blockchain wallet and Openstack… Ready for an active summer?

If the term “snaps” doesn’t ring a bell, they are a new way for developers to package their apps, bringing many advantages over traditional package formats such as .deb, .rpm, and others. They are secure, isolated and allow apps to be rolled back should an issue occur. They also aim to work on any distribution or platform, from IoT devices to servers, desktops and mobile devices. Snaps really are the future of Linux application packaging and we’re excited to showcase some great examples of these each month.
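As a small, hedged illustration of the rollback capability mentioned above, here is a minimal snap session; the package name is just an example, and channel availability varies by snap:

```shell
# Install a snap, refresh it, then roll back to the previous revision
sudo snap install gimp     # install from the default (stable) channel
snap list gimp             # note the installed revision
sudo snap refresh gimp     # update to the latest revision, if one exists
sudo snap revert gimp      # roll back to the previously installed revision
```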

Our June selection


1. GIMP

Daniel Llewellyn

GIMP is … is there really a need to introduce GIMP?  THE free, open source cross-platform image editor, for anything from advanced graphics design to quick image fixes!

2. Brackets


Brackets is an open-source editor for web design and development built on top of web technologies such as HTML, CSS and JavaScript. The project was created and is maintained by Adobe, and is released under an MIT License.

3. Radiomanager-cli

Cas Adriani

Command line interface to the RadioManager API v2

4. Quadrapassel

Ken VanDine

Quadrapassel is a derivative of a classic Russian falling-block game. Reposition and rotate the blocks as they fall, and attempt to fit them together. When you form a complete horizontal row of blocks, the row will disappear and you score points. The game is over when the blocks get stacked too high. As your score gets higher, you level up and the blocks fall faster.

5. Huggle


Diff browser for MediaWiki based websites intended to deal with vandalism.

6. Usb-reset

Roger Light

This tool allows you to perform a bus reset on a USB device connected to your system. If the device has got confused, this may sort it out.

7. Nanowallet

Nikhil Jah

NanoWallet is a secure interface to the NEM Blockchain platform. Send and receive transactions, purchase XEM, create your own tokens, notarize files with Apostille, and much much more!

8. Gogland


Gogland is the codename for a new commercial IDE by JetBrains aimed at providing an ergonomic environment for Go development.

9. Jsonnet

Juanjo Ciarlante

Jsonnet is a domain specific configuration language that helps you define JSON data. Jsonnet lets you compute fragments of JSON within the structure, bringing the same benefit to structured data that templating languages bring to plain text.

10. Keystone (Edge channel)

James Page

Keystone provides authentication, authorization and service discovery mechanisms via HTTP primarily for use by projects in the OpenStack family. It is most commonly deployed as an HTTP interface to existing identity systems, such as LDAP. You can find out more about OpenStack as snaps.

  • From the command-line:
    sudo snap install --edge keystone
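To check what was installed and which channels are available (a minimal sketch; the output depends on the store at the time you run it):

```shell
snap info keystone     # shows the available channels, including edge
snap list keystone     # confirms the installed revision and channel
```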

10 August, 2017 02:33PM

Ubuntu Podcast from the UK LoCo: S10E23 – Important Fluffy Turn - Ubuntu Podcast

This week we’re joined by a love bug and add more pixels to our computer. Red Hat abandons btrfs, Marcus Hutchins is arrested, Google did evil and the podcast patent is overturned! We also have a large dose of Ubuntu community news and some events.

It’s Season Ten Episode Twenty-Three of the Ubuntu Podcast! Alan Pope, Mark Johnson and Dave Lee are connected and speaking to your brain.

In this week’s show:

Entroware Apollo laptop contest reminder

  • We kicked off a contest in Episode 22 to win an Entroware Apollo laptop, the very one Alan reviewed last week.
  • The contest is open until 3rd September 2017, so plenty of time to get your entries in!
  • Listen to Episode 22 for all the details.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

10 August, 2017 02:00PM

hackergotchi for Deepin


deepin Security Updates (DSA 3904-2 & DSA 3909-1 & DSA 3911-1 & … & DSA 3919-1)

Security updates for bind9, samba, evince, heimdal, apache2, catdoc and openjdk-8.

Vulnerability Information: DSA-3904-1 bind9 security update. Security database details: Clément Berthaux from Synacktiv discovered two vulnerabilities in BIND, a DNS server implementation. They allow an attacker to bypass TSIG authentication by sending crafted DNS packets to a server. CVE-2017-3142: An attacker who is able to send and receive messages to an authoritative DNS server and who has knowledge of a valid TSIG key name may be able to circumvent TSIG authentication of AXFR requests via a carefully constructed request packet. A server that relies solely on TSIG keys for protection ...Read more

10 August, 2017 10:01AM by melodyzou

hackergotchi for Ubuntu developers

Ubuntu developers

Duncan McGreggor: NASA/EOSDIS Earthdata


It's been a few years since I posted on this blog -- most of the technical content I've been contributing to in the past couple years has been in the following:
But since the publication of the Mastering matplotlib book, I've gotten more and more into satellite data. The book, it goes without saying, focused on Python for the analysis and interpretation of satellite data (in one of the many topics covered). After that I spent some time working with satellite and GIS data in general using Erlang and LFE. Ultimately though, I found that more and more projects were using the JVM for this sort of work, and in particular, I noted that Clojure had begun to show up in a surprising number of Github projects.


Enter NASA's Earth Observing System Data and Information System (see also and EOSDIS on Wikipedia), a key part of the agency's Earth Science Data Systems Program. It's essentially a concerted effort to bring together the mind-blowing amounts of earth-related data being collected throughout, around, and above the world so that scientists may easily access and correlate earth science data for their research.

Related NASA projects include the following:
The acronym menagerie can be bewildering, but digging into the various NASA projects is ultimately quite rewarding (greater insights, previously unknown resources, amazing research, etc.).


Back to the Clojure reference I made above:  I've been contributing to the nasa/Common-Metadata-Repository open source project (hosted on Github) for a few months now, and it's been amazing to see how all this data from so many different sources gets added, indexed, updated, and generally made so much more available to anyone who wants to work with it. The private sector always seems to be so far ahead of large projects in terms of tech and continuously improving updates to existing software, so it's been pretty cool to see a large open source project in the NASA Github org make so many changes that find ways to keep helping their users do better research. Even more so, users are regularly delivered new features in a large, complex collection of libraries and services thanks in part to the benefits that come from using a functional programming language.

It may seem like nothing to you, but the fact that there are now directory pages for various data providers (e.g., GES_DISC, i.e., Goddard Earth Sciences Data and Information Services Center) makes a big difference for users of this data. The data provider pages now also offer easy access to collection links such as UARS Solar Ultraviolet Spectral Irradiance Monitor. Admittedly, the directory pages still take a while to load, but there are improvements on the way for page load times and other related tasks. If you're reading this a month after this post was written, there's a good chance it's already been fixed by now.


In summary, it's been a fun personal journey from looking at Landsat data for writing a book to working with open source projects that really help scientists to do their jobs better :-) And while I have enjoyed using the other programming languages to explore this problem space, Clojure in particular has been a delightfully powerful tool for delivering new features to the science community.

10 August, 2017 04:14AM by Duncan McGreggor

Duncan McGreggor: Mastering matplotlib: Acknowledgments

The Book

Well, after nine months of hard work, the book is finally out! It's available both on Packt's site and Getting up early every morning to write takes a lot of discipline, it takes even more to say "no" to enticing rabbit holes or herds of Yak with luxurious coats ripe for shaving ... (truth be told, I still did a bit of that).

The team I worked with at Packt was just amazing. Highly professional and deeply supportive, they were a complete pleasure with which to collaborate. It was the best experience I could have hoped for. Thanks, guys!

The technical reviewers for the book were just fantastic. I've stated elsewhere that my one regret was that the process with the reviewers did not have a tighter feedback loop. I would have really enjoyed collaborating with them from the beginning so that some of their really good ideas could have been integrated into the book. Regardless, their feedback as I got it later in the process helped make this book more approachable by readers, more consistent, and more accurate. The reviewers have bios at the beginning of the book -- read them, and look them up! These folks are all amazing!

The one thing that slipped in the final crunch was the acknowledgements, and I hope to make up for that here, as well as through various emails to everyone who provided their support, either directly or indirectly.


The first two folks I reached out to when starting the book were both physics professors who had published very nice matplotlib problems -- one set for undergraduate students and another from work at the National Radio Astronomy Observatory. I asked for their permission to adapt these problems to the API chapter, and they graciously granted it. What followed were some very nice conversations about matplotlib, programming, physics, education, and publishing. Thanks to Professor Alan DeWeerd, University of Redlands and Professor Jonathan W. Keohane, Hampden Sydney College. Note that Dr. Keohane has a book coming out in the fall from Yale University Press entitled Classical Electrodynamics -- it will contain examples in matplotlib.

Other examples adapted for use in the API chapter included one by Professor David Bailey, University of Toronto. Though his example didn't make it into the book, it gets full coverage in the Chapter 3 IPython notebook.

For one of the EM examples I needed to derive a particular equation for an electromagnetic field in two wires traveling in opposite directions. It's been nearly 20 years since my post-Army college physics, so I was very grateful for the existence and excellence of SymPy which enabled me to check my work with its symbolic computations. A special thanks to the SymPy creators and maintainers.

Please note that if there are errors in the equations, they are my fault! Not that of the esteemed professors or of SymPy :-)

Many of the examples throughout the book were derived from work done by the matplotlib and Seaborn contributors. The work they have done on the documentation in the past 10 years has been amazing -- the community is truly lucky to have such resources at their fingertips.

In particular, Benjamin Root is an astounding community supporter on the matplotlib mail list, helping users of every level with all of their needs. Benjamin and I had several very nice email exchanges during the writing of this book, and he provided some excellent pointers, as he was finishing his own title for Packt: Interactive Applications Using Matplotlib. It was geophysicist and matplotlib savant Joe Kington who originally put us in touch, and I'd like to thank Joe -- on everyone's behalf -- for his amazing answers to matplotlib and related questions on StackOverflow. Joe inspired many changes and adjustments in the sample code for this book. In fact, I had originally intended to feature his work in the chapter on advanced customization (but ran out of space), since Joe has one of the best examples out there for matplotlib transforms. If you don't believe me, check out his work on stereonets. There are many of us who hope that Joe will be authoring his own matplotlib book in the future ...

Olga Botvinnik, a contributor to Seaborn and PhD candidate at UC San Diego (and BioEng/Math double major at MIT), provided fantastic support for my Seaborn questions. Her knowledge, skills, and spirit of open source will help build the community around Seaborn in the years to come. Thanks, Olga!

While on the topic of matplotlib contributors, I'd like to give a special thanks to John Hunter for his inspiration, hard work, and passionate contributions which made matplotlib a reality. My deepest condolences to his family and friends for their tremendous loss.

Quite possibly the tool that had the single-greatest impact on the authoring of this book was IPython and its notebook feature. This brought back all the best memories from using Mathematica in school. Combined with the Python programming language, I can't imagine a better platform for collaborating on math-related problems or producing teaching materials for the same. These compliments are not limited to the user experience, either: the new architecture using ZeroMQ is a work of art. Nicely done, IPython community! The IPython notebook index for the book is available in the book's Github org here.

In Chapters 7 and 8 I encountered a bit of a crisis when trying to work with Python 3 in cloud environments. What was almost a disaster ended up being rescued by the work that Barry Warsaw and the rest of the Ubuntu team did in Ubuntu 15.04, getting Python 3.4.2 into the release and available on Amazon EC2. You guys saved my bacon!

Chapter 7's fictional case study examining the Landsat 8 data for part of Greenland was based on one of Milos Miljkovic's tutorials from PyData 2014, "Analyzing Satellite Images With Python Scientific Stack". I hope readers have just as much fun working with satellite data as I did. Huge thanks to NASA, USGS, the Landsat 8 teams, and the EROS facility in Sioux Falls, SD.

My favourite section in Chapter 8 was the one on HDF5. This was greatly inspired by Yves Hilpisch's presentation "Out-of-Memory Data Analytics with Python". Many thanks to Yves for putting that together and sharing with the world. We should all be doing more with HDF5.

Finally, and this almost goes without saying, the work that the Python community has done to create Python 3 has been just phenomenal. Guido's vision for the evolution of the language, combined with the efforts of the community, have made something great. I had more fun working on Python 3 than I have had in many years.

10 August, 2017 04:12AM by Duncan McGreggor

August 09, 2017

Ubuntu Insights: Weekly Kernel Development Summary – Aug 9, 2017

This is the Ubuntu Kernel Team highlights and status for the week.

If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at:


  • Added the virtualbox guest driver to the artful 4.13 kernel
  • Started work on a 4.12-based raspi2 kernel for artful
  • Added the aufs driver to the 4.13 artful kernel
  • Updated the 4.12 artful kernel to v4.12.5
  • Rebased the 4.13 artful kernel to v4.13-rc4
  • Uploaded 4.11.0-12.18 to artful-proposed
  • Stress-ng 0.08.10 released; improvements to the dirdeep stressor and fixes to the job script parsing
  • intel-cmt-cat 1.1.0 released (Intel Platform Quality of Service and Cache Allocation Technology tools)
  • The following SRU kernels have been promoted to -updates and -security:

    Trusty   3.13.0-126.175
    Xenial   4.4.0-89.112
    Zesty    4.10.0-30.34
    trusty/lts-xenial  4.4.0-89.112~14.04.1
    xenial/hwe         4.10.0-30.34~16.04.1
    xenial/raspi2      4.4.0-1067.75
    xenial/snapdragon  4.4.0-1069.74
    xenial/aws         4.4.0-1028.37
    xenial/gke         4.4.0-1024.24
    zesty/raspi2       4.10.0-1013.16
    • Embargoed CVE-2017-7533 has been made public and the fix released for all the affected kernels.

    • CVE’s fixed by the kernels published on -updates and -security:


    • CVE-2017-1000364
    • CVE-2017-7482
    • CVE-2017-1000365
    • CVE-2016-8405
    • CVE-2017-2618


    • CVE-2017-7533
    • CVE-2017-10810


    • CVE-2017-7533
    • CVE-2017-1000364
    • CVE-2017-7482
    • CVE-2017-1000365
    • CVE-2017-10810

    • Two new kernel snaps are now being distributed: aws-kernel and gke-kernel.

    • The following kernel snaps have been uploaded to the snapcraft store:

    • aws-kernel

    • gke-kernel
    • pc-kernel
    • pi2-kernel
    • dragonboard-kernel

Development Kernel Announcements

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The artful kernel is now based on Linux 4.11. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable Kernel Announcements

  • Current cycle: 04-Aug through 26-Aug

             04-Aug  Last day for kernel commits for this cycle.
    07-Aug - 12-Aug  Kernel prep week.
    13-Aug - 25-Aug  Bug verification & Regression testing.
             28-Aug  Release to -updates.
  • Next cycle: 25-Aug through 16-Sep

             25-Aug   Last day for kernel commits for this cycle.
    28-Aug - 02-Sep   Kernel prep week.
    03-Sep - 15-Sep   Bug verification & Regression testing.
             18-Sep   Release to -updates.
  • The current CVE status

09 August, 2017 08:10PM

Ubuntu Insights: git ubuntu clone

This is the second post in a collaborative series between Robie Basak and myself to introduce (more formally) git ubuntu to a broader audience. There is an index of all our planned posts in the first post. As mentioned there, it is important to keep in mind that the tooling and implementation are still highly experimental.

In this post, we will introduce the git ubuntu clone subcommand and take a brief tour of what an imported repository looks like. git ubuntu clone will be the entry point for most users to interact with Ubuntu source packages, as it answers a common request on IRC: “Where is the source for package X?”. As Robie alluded to in his introductory post, one of the consequences of the git ubuntu importer is that there is now a standard way to obtain the source of any given source package: git ubuntu clone*

Getting git ubuntu clone

git-ubuntu is distributed as a “classic” snap. To install it on Ubuntu 16.04 or later:
sudo snap install --classic git-ubuntu. Help is available via git-ubuntu --help and man-pages are currently in development**
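Putting those two steps together (this assumes snapd is available on your system):

```shell
sudo snap install --classic git-ubuntu   # install the classic snap
git-ubuntu --help                        # list the available subcommands
```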

Using git ubuntu clone

Let’s say we are interested in looking at the state of PHP 7.0 in Ubuntu. First, we obtain a local copy of the repository***: git ubuntu clone php7.0

With that one command, we now have the entire publishing history for php7.0 in ./php7.0. Anyone who has tried to find the source for an Ubuntu package before will recognize this as a significant simplification and improvement.

With git, we would expect to be on a ‘master’ branch after cloning. git ubuntu clone defaults to a local branch ‘ubuntu/devel’, which represents the current tip of development in Ubuntu. ‘ubuntu/devel’ is branched from the remote-tracking branch ‘pkg/ubuntu/devel’.
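A minimal first session might look like this (php7.0 is the example package from above; the clone can take a while):

```shell
git ubuntu clone php7.0
cd php7.0
git status               # reports 'On branch ubuntu/devel'
git log --oneline -5     # the most recent publishing history
```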

You might now be wondering, “What is ‘pkg/’?”

The default remotes

Running git remote, we see two remotes are already defined: ‘pkg’ and ‘nacc’.

‘pkg’ will be the same for all users and is similar to ‘origin’ that git users will be familiar with. The second is a derived remote name based upon a Launchpad ID. As shown above, the first time git ubuntu runs, it will prompt for a Launchpad ID that will be cached for future use in ~/.gitconfig. Much like ‘origin’, the ‘pkg’ branches will keep moving forward via the importer, and running git fetch pkg will keep your local remote-tracking branches up to date. While not strictly enforced by git or git ubuntu, we should treat the ‘pkg/’ namespace as reserved and read-only to avoid any issues.
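In practice, keeping a clone current is ordinary git (remote names as described above; the second remote will be named after your own Launchpad ID):

```shell
git remote        # shows 'pkg' plus the remote named after your Launchpad ID
git fetch pkg     # update the read-only importer branches
```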

The importer branches

The tip of ‘pkg/ubuntu/devel’ reflects the latest version of this package in Ubuntu. This will typically correspond to the development release and often will be the version in the ‘-proposed’ pocket for that release. As mentioned earlier, a local branch ‘ubuntu/devel’ is created by default, which starts at ‘pkg/ubuntu/devel’, much like ‘master’ typically starts at ‘origin/master’ by default when using git. Just like the tip of ‘ubuntu/devel’ is the latest version in Ubuntu for a given source package, there are series-‘devel’ branches for the latest in a given series, e.g., the tip of ‘pkg/ubuntu/xenial-devel’ is the latest version uploaded to 16.04. There are also branches tracking each ‘pocket’ of every series, e.g. ‘pkg/ubuntu/xenial-security’ is the latest version uploaded to the security pocket of 16.04.

Finally, there is a distinct set of branches which correspond to the exact same histories, but with quilt patches applied. Going into the reasoning behind this is beyond the scope of this post, but will be covered in a future post. It is sufficient for now to be aware that is what ‘pkg/applied/*’ are for.

What else can we do?

All of these branches have history, like one would expect, reflecting the exact publishing history of php7.0 within the context of that branch’s semantics, e.g., the history of ‘pkg/ubuntu/xenial-security’ shows all uploads to the security pocket of 16.04 and what those uploads, in turn, are based off of, etc. As another example, git log ubuntu/devel shows you the long history of the latest upload to Ubuntu.

With this complete imported history, we can not only see the history of the current version and any given series, but also what is different between releases, for example between 16.04 and 17.04, for php7.0!

For other source packages that have existed much longer, you would be able to compare LTS to LTS, and do all the other normal git-ish things you might like, such as git blame to see what introduced a specific change to a file.
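As a sketch of those comparisons, using the series-devel branch naming described above (the exact branches present depend on the package’s publishing history):

```shell
# Compare the 16.04 and 17.04 development tips for this package
git diff pkg/ubuntu/xenial-devel pkg/ubuntu/zesty-devel -- debian/changelog
# See which upload last touched each line of a file
git blame debian/rules
```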

We can also see all remote-tracking branches with the normal git branch -r

This shows us a few of the namespaces in use currently:

  • pkg/ubuntu/* — patches-unapplied Ubuntu series branches
  • pkg/debian/* — patches-unapplied Debian series branches
  • pkg/applied/ubuntu/* — patches-applied Ubuntu series branches
  • pkg/applied/debian/* — patches-applied Debian series branches
  • pkg/importer/* — importer-internal branches
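The namespaces above can be explored with ordinary git branch patterns, for example:

```shell
git branch -r                            # everything the importer publishes
git branch -r --list 'pkg/applied/*'     # just the patches-applied branches
```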

As Robie mentioned in the first post, we are currently using a whitelist to constrain the importer to a small subset of source packages. What happens if you request to clone a source package that has not yet been imported?

While many details (particularly why the repository looks the way it does) have been glossed in this post, we now have a starting point for cloning any source package (if it has been imported) and a way to request an import of any source package.

Using git directly (for advanced users)

Technically, git ubuntu clone is equivalent in functionality to git clone and git clone could be used directly. In fact, one of our goals is to not impede a “pure” git usage in any way. But again, as Robie mentioned in his introductory post, there are some caveats to both using git and the structure of our repositories that git ubuntu is aware of. The “well-defined URLs” just mentioned are still being worked on, but for instance for PHP 7.0, one could follow the instructions at the top of the Launchpad code page for the php7.0 source package. The primary differences we would notice in this usage is “origin” instead of “pkg” and there will not be a remote for your personal Launchpad space for this source package.


In this post, we have seen a new way to get the source for any given package, git ubuntu clone.

Robie’s next post will discuss where the imported repositories are and what they look like. My next post will continue discussing the git ubuntu tooling, by looking at another relatively simple subcommand “tag”.

*Throughout this post, we are assuming an automatically updated repository. This is true for the whitelisted set of packages currently auto-imported, but not true generally (yet).

**All commands are available as both git-ubuntu … and git ubuntu …. However, for --help to work in the latter form, the changes mentioned in LP: #1699526 (a few simple tweaks to ~/.gitconfig) are necessary until some additional snap functionality is available generally.

***Currently, git ubuntu clone is rather quiet while it works, and can take a long time (the history of a source package can be long!); we have received feedback and opened a bug to make it a bit more like git clone from a UX perspective.

This blogpost originated from Nish’s Blog

09 August, 2017 01:31PM

Ubuntu Insights: 68% of businesses are struggling to hire talent for IoT

Research from Canonical shows businesses are battling with drought in the “Internet of Talent” pool

London, 9th August 2017 – Businesses are struggling to recruit employees with the skills needed to make the internet of things a success according to a new IoT Business Models report from Canonical – the makers of the IoT operating system, Ubuntu Core.

The report, which includes research from over 360 IoT professionals, developers and vendors found that 68% are struggling to find and recruit employees with relevant IoT expertise.

According to Canonical’s research, the most difficult IoT employees to hire are those with knowledge of big data and analytics, with 35% of IoT professionals saying they struggle to recruit this skillset. Knowledge of big data and analytics was also identified as the most important skillset for IoT professionals, with 75% deeming it a necessity for anyone claiming to be an IoT expert. The next hardest-to-find skillsets for IoT professionals are knowledge of embedded software development (33%), embedded electronics (32%), expertise in IT security (31%) and an understanding of AI (30%).

Commenting on these findings, Mike Bell, EVP of IoT and Devices at Canonical said, “When it comes to the internet of things, the business community is still overcoming a significant skills gap. Many businesses are concerned by their own lack of knowledge and skills within the IoT market and many business leaders are finding themselves running head first into a set of technology and business challenges that they do not yet fully understand.

“Businesses need to realise that working in IoT should not require such an extensive variety of skills. What is needed, instead, is a simplification of the technologies behind IoT. Within the next five years we expect to see IoT technologies built into all aspects of the business environment. As edge computing brings connected intelligence directly to the shop floor, cloud computing will continue to drive back-end processes across the entire supply chain, for example. With all business processes growing increasingly connected, their supporting IoT technologies must be easy enough for anyone to manage, monitor and use – regardless of their background knowledge or personal skillset.

“Above all, businesses must be agile when it comes to deciding on the ‘right’ people, skills and team to take them forward. What is decided upon today, is unlikely to remain the same in even one or two years, so constantly evaluating what change is needed and being able to execute this quickly is a must.” To find out more about the IoT skills gap, download Canonical’s Defining IoT Business Models report.



The Defining IoT Business Models report incorporates original research, commissioned by Canonical and conducted by independent industry publication IoTNow. The research surveyed 361 people from IoT Now’s database of registered IoT professionals.

About Canonical

Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu.

Established in 2004, Canonical is a privately held company. For further information please click here.

09 August, 2017 01:00PM

The Fridge: Ubuntu Weekly Newsletter Issue 515

Welcome to the Ubuntu Weekly Newsletter. This is issue #515 for the week of August 1 – 7, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

09 August, 2017 04:44AM

August 08, 2017

Alan Pope: Ubuntu Community Hub Proposal

Status Quo

For over four years now, the Ubuntu Community Portal has been the 'welcome mat' for new people seeking to get involved in Ubuntu. In that time the site has seen some valuable but minor incremental changes; no major updates have occurred recently. I'd like us to fix this. We can also use this as an opportunity to improve our whole onboarding process.

I've spent a chunk of time recently chatting with active members of the Ubuntu Community about the community itself. A few themes came up in these conversations which can be summarised as:-

  • Our onboarding process for new contributors is not straightforward or easy to find
  • Contributors find it hard to see what's going on in the project
  • There is valuable documentation out there, but no launch pad to find it

To try to address these concerns, we have looked at each area to improve the situation.


A prospective contributor has a limited amount of spare time to get involved, and with a poorly documented or hard-to-find onboarding process, they will likely give up and walk away. They won't know where to go for the 'latest news' of what's happening in this development cycle, or how they can contribute their limited time to the project most effectively. It is important that they get access to the community straight away.


Ubuntu has been around a long time, with teams using a range of different communication tools. Despite happening in the open, the quick-moving and scattered conversations lose transparency, so finding out what's 'current' is hard for new (and existing) contributors. Surfacing the gems of what's needed and the current strategic direction more obviously would help here, as would having a place where all contributors can discuss topics.


The wiki has served Ubuntu well but it suffers from many of the problems wikis have over time, namely out-of-date information, stale references, and bloat. We could undertake an effort to tidy up the wiki but today we have other tools that could serve better. Sites such as and which are much richer and easier to navigate and form the basis of our other official documentation and ways of working. Using these in conjunction with any new proposal makes much more sense.

So, what could we do to improve things?

Community Hub Proposal

I propose we replace the Community Portal with a dynamic and collaboratively maintained site. The site would raise the profile of conversations and content, to improve our onboarding and communication issues.

We could migrate existing high-value content over from the existing site to the new one, and encourage all contributors to Ubuntu, both within and outside Canonical to post to the site. We will work with teams to bring announcements and conversations to the site, to ensure content is fresh and useful.

In common with many other projects, we could use Discourse for this. I don't expect this site to replace all existing tools used by the teams, but it could help to improve the visibility of some.

The new Hub would contain pointers to the most relevant information for on-boarding, calls for participation (in translation, documentation, testing), event announcements, feature polls and other dynamic conversations. New & existing contributors alike should feel they can get up to date with what's going on in Ubuntu by visiting the site regularly. We should expect respectful and inclusive conversation as with any Ubuntu resource.

The Community Hub isn’t intended to replace all our other sites, but add to them. So this wouldn’t replace our existing well established Ask Ubuntu and Ubuntu Forums support sites, but would supplement them. The Community Hub could indeed link to interesting or trending content on the forums or unanswered Ask Ubuntu questions.

So ultimately the Community Hub would become a modern, welcoming environment for the community to learn about and join in with the Ubuntu project, talk directly with the people working on Ubuntu, and hopefully become contributors themselves.

Next steps

We’ll initially need to take a snapshot of the pages on the current site, and stand up an instance of Discourse for the new site. We will need to migrate over the content which is most appropriate to keep, and archive anything which is no longer accurate or useful. I’d like us to have some well-defined but flexible structure for the site categories. We can take inspiration from other community Discourse sites, but I’d be interested in hearing feedback from the community on this.

While the site is being set-up, we’ll start planning a schedule of content, pulling from all teams in the Ubuntu project. We will reach out to Ubuntu project teams to get content lined up for the coming months. If you’re active in any team within the project, please contact me so we can talk about getting your teams work highlighted.

If you have any suggestions or want to get involved, feel free to leave a comment on this post or get in touch with me.

08 August, 2017 05:45PM

Ubuntu Insights: Ubuntu Foundations Development Summary: August 8, 2017

This newsletter is to provide a status update from the Ubuntu Foundations Team.  There will also be highlights provided for any interesting subjects the team may be working on.

If you would like to reach the Foundations team, you can find us at the #ubuntu-devel channel on freenode.


The State of the Archive

  • The ongoing libevent transition is 88% complete
  • The GCC 7 transition has begun in artful-proposed; GCC 7 will be the default in 17.10, please help ensure your packages in the archive are up to date and ready to build with this new toolchain.
  • The transition to Perl 5.26 is in progress in artful-proposed, with some delays due to issues with the autopkgtest infrastructure and a sync of a reupload from Debian.  This is expected to reach artful early next week.
  • The python3 transition continues, with python3.5 being dropped from the list of supported versions in artful-proposed.  Packages uploaded today will build without python3.5 support, and python3.5 will be dropped from artful before release.

Upcoming Ubuntu Dates

Weekly Meeting

IRC Log:

08 August, 2017 02:54PM


Univention Corporate Server

OpenProject 7 with new Gantt chart is now available in the Univention App Center

The well-known project management solution OpenProject is now available in a new version. The application offers a powerful feature set for both traditional as well as agile project management and empowers project teams to efficiently plan, steer and communicate within projects.

The development of OpenProject is coordinated by the OpenProject Foundation, an active open source developer community.

Aside from regular software and security updates, the manufacturer offers maintenance and support contracts as well as trainings for employees.

The most important changes in OpenProject 7.1 are:

  • Simplified integration into the Univention App Center
  • A new Gantt chart which allows for easy project planning and scheduling
  • Show and hide sub-tasks in the list view
  • The Zen mode, which hides all non-essential information and lets you focus on the tasks at hand
  • Improved main navigation in the application header
  • Multi-select custom fields
  • Custom logo and color scheme

Simplified integration into Univention App Center

The integration with Univention Corporate Server (UCS) has been updated and simplified. System administrators can enable users for OpenProject directly in the web-based UCS management system.

In detail, the integration is split into two parts: user provisioning and authentication. Provisioning uses a combination of App Center attributes and a listener plugin. The attributes are a shortcut, so that the App Center automatically creates an LDAP schema extension and extended attributes for the UCS management system from the attribute definitions and applies them upon app installation. They are responsible for adding the checkboxes “Activate user” and “Give admin rights” to the user dialogue in UCS and storing the values in the directory service.

Furthermore, the listener plugin adds newly created users to the OpenProject database via the OpenProject API. This step allows existing users to assign items to new users in OpenProject before they log in for the first time. In general, listener plugins are a very powerful method to distribute data from the directory service, for example to other data stores like a database.

User authentication, the second part, is done by OpenProject’s default directory connection. The app configures the connection appropriately. After a user is enabled for OpenProject in the UCS user management, they are automatically provisioned to OpenProject via the listener plugin. However, authentication is still done directly against the LDAP directory, so no passwords need to be stored in the OpenProject database.
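To make the provisioning flow concrete, here is a minimal sketch of what such a listener plugin might look like. This is an illustrative assumption, not the app's actual code: the module-level `name`/`description`/`filter` variables and the `handler(dn, new, old)` signature follow the general Univention listener convention, the attribute name `openprojectActivated` is hypothetical, and the real plugin would call the OpenProject API instead of appending to a list.

```python
# Hypothetical sketch of a Univention-style listener module reacting to
# newly created LDAP users. All names here are illustrative assumptions.
name = "openproject-provision"
description = "Provision enabled users to OpenProject"
filter = "(objectClass=posixAccount)"  # which LDAP objects we care about

PROVISIONED = []

def create_openproject_user(username):
    # Stand-in for the real call to the OpenProject API.
    PROVISIONED.append(username)

def handler(dn, new, old):
    """Called by the listener with the new and old attribute dicts.

    An empty `old` means the object was just added; an empty `new`
    would mean it was removed.
    """
    if new and not old:  # user added
        if new.get("openprojectActivated") == [b"1"]:
            create_openproject_user(new["uid"][0].decode())

# Simulated listener invocation for a newly created, enabled user:
handler(
    "uid=alice,cn=users,dc=example,dc=com",
    {"uid": [b"alice"], "openprojectActivated": [b"1"]},
    {},
)
print(PROVISIONED)  # ['alice']
```

Because provisioning and authentication are separate, the sketch never touches passwords; authentication stays with the LDAP directory, exactly as described above.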

The most important new functions

New Gantt chart

The new Gantt chart is a complete rewrite of the existing timeline module. It is now tightly integrated into the list view. Start and end dates can be easily set by dragging and dropping phases and milestones in the Gantt chart. It also shows relations between those elements.

Show and hide sub-tasks in list view

Work packages are often structured in hierarchies. With OpenProject 7.0 sub-tasks can now be collapsed in the list view.


Zen mode

The new Zen-mode hides all non-essential information such as the navigation header and the sidebar so the user can focus on the project plan or task they are currently working on. In addition the browser is set into the fullscreen mode. This is very helpful in presentation or meeting situations.

Multi-select custom fields

Work packages custom fields of type List or User can now be set to multi-select. Once created, project members can select multiple values for these custom fields.

Logo upload and custom color scheme

In OpenProject 7.0 the OpenProject logo can be replaced with a custom logo. Additionally, the color scheme of the application can be adapted to the corporate identity.

Improved main navigation in application header

The top navigation in OpenProject has been simplified with OpenProject 7. Most notably, the project selection has moved to the left and now shows the selected project.


More information can be found in the official OpenProject 7 release notes and this video summary:

More about OpenProject 7

The post OpenProject 7 with new Gantt chart is now available in the Univention App Center appeared first on Univention.

08 August, 2017 01:31PM by Irina Feller

Tails


Tails 3.1 is out

This release fixes many security issues and users should upgrade as soon as possible.


Upgrades and changes

  • Update Tor Browser to 7.0.4.
  • Update Linux to 4.9.30-2+deb9u3.

Fixed problems

  • Make sure that Thunderbird erases its temporary directory, containing for example attachments opened in the past. #13340

  • Fix translations of the time synchronization and "Tor is ready" notifications. #13437

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 3.1

  • To install, follow our installation instructions.

  • To upgrade, automatic upgrades are available from 3.0 to 3.1. Due to the #13426 bug, automatic upgrades from 3.0.1 are disabled. If you cannot do an automatic upgrade or if you fail to start after an automatic upgrade, please try to do a manual upgrade.

  • Download Tails 3.1.

What's coming up?

Tails 3.2 is scheduled for October 3.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

08 August, 2017 10:34AM


Ubuntu developers

Ubuntu Insights: Ubuntu in NYC: Kubernetes in minutes and enterprise support on AWS

On August 14th, at the Javits Convention Center in midtown Manhattan, Canonical will be participating in the AWS Summit. Ubuntu has long been popular with users of AWS due to its stability, regular cadence of releases, and scale-out-friendly usage model. Canonical optimizes, builds, and regularly publishes the latest Ubuntu images to the EC2 Quickstart and AWS Marketplace, which ensures the best Ubuntu experience for developers using AWS’s cloud services. And in April, we even launched an AWS-tuned kernel, which provides up to 30% faster boot speeds, on a 15% smaller kernel package, as well as many other features.

To complement great performance with enhanced uptime and security, last year Canonical launched its Ubuntu Advantage support package as two listings (Standard and Advanced tiers) on the AWS Marketplace SaaS program. On top of access to enterprise SLAs as well as key features and tools, buying Ubuntu Advantage through AWS Marketplace ensures hourly pricing rates based on the quantity of your actual Ubuntu usage on AWS, and centralized billing through your existing AWS Marketplace account.

And this month, we are delighted to announce that we are adding our Ubuntu Advantage – Essential tier, which is focused on access to features and tooling, as a listing to AWS Marketplace. At $0.009 per instance hour, this tier will include:

  • Access to the self-service customer care portal and Canonical’s knowledge base
  • Landscape Management & Monitoring (SaaS Edition)
  • Canonical Livepatch Service: apply critical kernel patches without rebooting (on Ubuntu 14.04 LTS and Ubuntu 16.04 LTS images using the generic Linux 4.4 kernel)
  • Ubuntu Legal and IP Assurance program
  • Ubuntu 12.04 Extended Security Maintenance (ESM)

It gets better. At the booth, we will be doing live demos of getting Canonical’s open-source, pure upstream distribution of Kubernetes deployed in minutes on AWS, as well as complex architectures around services like Hadoop and Tensorflow.


Be sure to visit booth 231 at the AWS New York Summit on August 14th!

08 August, 2017 10:25AM

Ubuntu Insights: Ubuntu Artful Desktop Fit and Finish Sprint

The Artful development cycle is full speed ahead to the Ubuntu 17.10 release in October. As you may have heard, we’re switching the default desktop from Unity to GNOME Shell in this cycle. With such a significant change, we need all the eyeballs we can get on every part of the desktop experience. As usual we will have our regular testing cycles and automated checks that the QA team runs through.

We are also organising a set of mini-events which we’d love to get our community’s help with. First up is the Desktop Fit & Finish Sprint on August 24th and 25th. Some members of the Ubuntu desktop team will be camped out in the Canonical London office for that Thursday and Friday, and we need your help.

Over the two days we’ll be scrutinising the new GNOME Shell desktop experience, looking for anything jarring/glitchy or out of place. We’ll be working on the GTK, GDM and desktop theme alike, to fix inconsistencies, performance, behavioural or visual issues. We’ll also be looking at the default key bindings, panel colour schemes and anything else we discover along the way.

We’re inviting a small number of community contributors to join us in the London office on Thursday evening to help out with this effort. Ideally we’re looking for people who are experienced in identifying (and fixing) theme issues, CSS experts and GNOME Shell / GTK themers.

If you’re in the area on August 24th from around 4pm to 9pm and would like to help us get GNOME Shell on Ubuntu ready for prime time, then fill in this form to let us know. Space in the London office is limited, and we need to inform the building security team about visitors well in advance of the event happening. The deadline for filling in the form is Friday August 18th 2017.

Unfortunately we don’t have room for everyone. If you fill the form in and subsequently can’t make it, please let either willcooke or popey know so we can free up your space for someone else. We’ll provide pizza and beverages for those people who come along and help out.

If you’re interested in joining in the fit and finish sprint remotely, we’ll be available through the day and evening on irc in #ubuntu-desktop. We’ll also have an Ubuntu On Air hangout during the event.


08 August, 2017 10:00AM

Deepin


Deepin has Added Mirror Site Service like Portland State University and so on

Today, deepin has added some new mirror sites; among them are Portland State University and AARNet. As deepin is now widely used all over the world, we will keep adding mirror sites so that deepin users around the world get a high quality user experience, and especially a high quality experience of the Linux desktop.

  • The United States: Portland State University (rsync:// rsync://)
  • Australia: AARNet

We also welcome more mirror sites and open-source communities to provide mirror services for deepin; please contact us:

08 August, 2017 06:06AM by jingle

August 07, 2017


Ubuntu developers

Valorie Zimmerman: Keysigning!

There are a couple of reasons to create a network of trust, using gpg keys. If you are a software developer and want to sign your commits, and on a larger stage, sign software releases, you need a key pair. On a distribution level, ISOs are signed as well. In Ubuntu, a GPG key is required to sign the Code of Conduct.

On a personal level, emails and other communications and files can be signed and/or encrypted. In this era of wide-spread spoofed emails and more and more efforts to snoop into our every move, gpg is a tool we can use to prove our identity and be able to rely on gpg-signed emails.

I attended a keysigning at Akademy, which involved a few steps. First, generating a key pair. This is amazingly easy: gpg --gen-key . Various options are discussed here, among other places. This site and many others describe how to immediately create a revocation certificate, just in case. This is not difficult either. Finally, send your public key to a keyserver, and your fingerprint to the person running the keysigning event, or print out the fingerprint yourself.

At the keysigning, you will check to see that your own fingerprint is correct as provided by the host, and that each person at the event has valid identification proving they are who they say they are.

The final step to creating your web of trust is signing those keys. Some people have an additional step before signing and uploading: sending an encrypted email to each person to establish that both keys work. Since I created my key pair using my Gmail address, I was having some difficulty decrypting some of those emails using Mailvelope, a Gmail addon. Bhushan Shah told me that I can download the raw encrypted email and then decrypt that file with gpg --decrypt filename.txt . Excellent!

gpg --encrypt filename.txt recipientkeyID works as well.

Now I've found and am trying out GooPG which is interesting, and seems to work. Nothing seems to be able to read the email I got from Launchpad to verify my uploaded key, however. :(  The actual code block throws a CRC error.

To sum up: be a geek, do some key signing, and sign your emails! And when needed, encrypt them.

PS: Martin Bednar asked where to find the Google extension. None of my browsers let me answer comments (or even make comments) directly, so here is the link:

07 August, 2017 10:41PM by Valorie Zimmerman (

Paul White: Forums. Why do I bother to post?

Ever since I have had an internet presence I have posted on a number of forums that have been related to my interests at the time of posting. Those interests have mainly included Windows software, Ubuntu and UK related transport matters.

Today I called "time" on my postings to any forum other than the Ubuntu Forums. Quite simply I have had enough of those users that hide behind anonymous user-names who seem to only post in a manner that belittles anyone that has an opinion which differs from themselves. Such users take postings far too literally in order to provoke an argument. I think troll is the word that I am looking for here. A recent reply to one of my posts caused me to lose several hours sleep as I was finding it very hard not to think about how to reply to something that had upset me so much. In other words: "Why do I bother to post?"

At least with the Ubuntu Forums I know that I have built up a reputation with the forum staff and other long time users that appreciate what I post. Anything that I do post is posted in good faith and without any bad intentions. The Ubuntu Forums Code of Conduct not only protects me but gives me guidelines on what I can and cannot say. Or should that be "should not" say?

I'm all for "free speech" but abiding by a few simple rules is the way to go.

07 August, 2017 03:33PM by Paul White (

Raphaël Hertzog: My Free Software Activities in July 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12 hours but I only managed to work for 7 hours (due to vacation and unanticipated customer work). I gave back the remaining hours to the pool as I didn’t want to carry them over for August which will be also short due to vacation (BTW I’m not attending Debconf). I spent my 7 hours doing CVE triaging during the week where I was in charge of the LTS frontdesk (I committed 22 updates to the security tracker). I did publish DLA-1010-1 on vorbis-tools but the package update had been prepared by Petter Reinholdtsen.

Misc Debian work

zim. I published an updated package in experimental (0.67~rc2-2) with the upstream bug fixes on the current release candidate. The final version has been released during my vacation and I will soon upload it to unstable.

Debian Handbook. I worked with Petter Reinholdtsen to finalize the paperback version of the Norwegian translation of the Debian Administrator’s Handbook (still covering Debian 8 Jessie). It’s now available.

Bug reports. I filed a few bugs related to my Kali work. #868678: autopkgtest’s setup-testbed script is not friendly to derivatives. #868749: aideinit fails with syntax errors when /etc/debian_version contains spaces.

debian-installer. I submitted a few d-i patches that I prepared for a customer who had some specific needs (using the hd-media image to boot the installer from an ISO stored in an LVM logical volume). I made changes to debian-installer-utils (#868848), debian-installer (#868852), and iso-scan (#868859, #868900).


See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

07 August, 2017 02:49PM

VyOS


VyOS 2.0 development digest #1

I keep talking about the future VyOS 2.0 and how we all should be doing it, but I guess my biggest mistake is not being public enough, and not being structured enough.

In the early days of VyOS, I used to post development updates, which no one would read or comment upon, so I gave up on it. Now that I think of it, I shouldn't have expected much: the community was very small at the time, and there were hardly any people to read it in the first place, even though it was a critical time for the project and input from readers would have been very valuable.

Well, this is a critical time for the project too, and we need your input and your contributions more than ever, so I need to get to fixing my mistakes and try to make it easy for everyone to see what's going on and what we need help with.

Getting a steady stream of contributions is a very important goal. While the commercial support we are doing may let the maintainers focus on VyOS and ensure that things like security fixes and release builds get guaranteed attention in time, without occasional contributors who add things they personally need (which maintainers may not; I think I myself regularly use maybe 30% of all VyOS features) the project will never realize its full potential, and may go stale.

But to make the project easy to manage and easy to contribute to, we need to solve multiple hard problems. It can be hard to get oneself to do things that promise no immediate returns, but if you look at it the other way, we have a chance to build a system of our dreams together. As for 1.1.x and 1.2.x (the jessie branch), we'll figure out how to maintain them until we solve those problems, but that's for another post. Right now we are talking about VyOS 2.0, which gets to be a cleanroom rewrite.

Why VyOS isn't as good as it could be, and can't be improved

I considered using "Why VyOS sucks" to catch the reader's attention. It's a harsh phrase, and it may not be all that true, given that VyOS in its current state is way ahead of many other systems that don't even have system-wide config consistency checks, or revisions, or safe upgrades, but there are multiple problems so fundamental that they are impossible to fix without rewriting at least a very large part of the code.

I'll state the design problems that cannot be fixed in the current system. They affect both end users and contributors, sometimes indirectly, but very seriously.

Design problem #1: partial commits

You've seen it. You commit, there's an error somewhere, and one part of the config is applied while the other isn't. Most of the time it's just a nuisance: you fix the issue and commit again. But if you, say, change an interface address and the firewall rule that is supposed to allow SSH to it, you can get locked out of your system.

The worst case, however, is when commit fails at boot. While it's good to have SSH at least, debugging it can be very frustrating, when something doesn't work, and you have no idea why, until you inspect the running config and see that something is simply missing (if you run into it in VyOS 1.x, do "load /config/config.boot" and commit, this will either work or show you why it failed). It's made worse by lack of notifications about config load failure for remote users, you can only see that error on the console.

The feature that can't be implemented because of this is what goes by "commit check" in JunOS: you can't test whether your configuration will apply cleanly without actually committing it.

It's because in the scripts, the logic for consistency checking and generating real configs (and sometimes applying them too) is mixed together. Regardless of the backend issues, every script needs to be taken apart and rewritten to separate that logic. We'll talk more about it later.
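The separation being argued for can be sketched in a few lines. This is purely illustrative pseudocode in Python, not actual VyOS code: once validation, config generation, and application are distinct steps, a "commit check" is simply running the first step alone.

```python
# Illustrative sketch (not VyOS code) of separating consistency checking
# from config generation and application, so validation can run on its own.
def validate(config):
    """Check consistency only; never touch the running system."""
    errors = []
    if "address" not in config.get("interface", {}):
        errors.append("interface has no address")
    return errors

def generate(config):
    """Render application configs from a validated config."""
    return "address %s" % config["interface"]["address"]

def apply_config(rendered):
    """Apply the generated config to the running system (stubbed here)."""
    return True

def commit(config, check_only=False):
    errors = validate(config)          # step 1: all-or-nothing check
    if errors or check_only:
        return errors
    apply_config(generate(config))     # steps 2-3 run only on a clean check
    return []

# "commit check": validation runs, nothing is applied or partially applied.
print(commit({"interface": {}}, check_only=True))  # ['interface has no address']
print(commit({"interface": {"address": "192.0.2.1/24"}}))  # []
```

With the logic mixed together, as the scripts do today, there is no point at which you can stop after step 1, which is exactly why partial commits happen.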

Design problem #2: read and write operations disparity

Config reads and writes are implemented in completely different ways. There is no easy programmatic API for modifying the config, and it's very hard to implement one because the binaries that do it rely on a specific environment setup. Not impossible, but very hard to do right, and to maintain afterwards.

This blocks many things: a network API (and thus an easy-to-implement GUI), and modifying the config from scripts in sane ways (we do have the script-template which does the trick, kinda, but it could be a lot better).

Design problem #3: internal representation

Now we are getting to really bad stuff. The running config is represented as a directory tree in tmpfs. If you find it hard to believe, browse /opt/vyatta/config/active, e.g. /opt/vyatta/config/active/system/time-zone/node.val

Config levels are directories, and node values are in node.val files. For every config session, a copy of the active directory is made, and mounted together with the original directory in union mount through UnionFS.

There are lots of reasons why it's bad:

  • It relies on the behaviour of UnionFS; OverlayFS or another filesystem won't do. We are at the mercy of the unionfs-fuse developers now, and if they stop maintaining it (and I can see why they might, as OverlayFS has many advantages over it), things will get interesting for us
  • It requires watching file ownership and permissions. Scripts that modify the config need to run as vyattacfg group, and if you forget to sg, you end up with a system where no one but you (or root) can make any new commits, until you fix it by hand or reboot
  • It keeps us from implementing role-based access control, since config permissions are tied to UNIX permissions, and we'd have to map it to POSIX ACLs or SELinux and re-create those access rules at boot since the running config dir is populated by loading the config
  • For large configs, it creates a fair amount of system calls and context switches, which may make system run slower than it could
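To make this representation concrete, the sketch below (not VyOS code) recreates the system/time-zone/node.val example in a throwaway directory and reads the tree back into a nested dict; the `__value__` key and sample value "UTC" are illustrative choices.

```python
# A small sketch that recreates the node.val directory layout described
# above and reads it back, to show what the tmpfs config tree looks like.
import os
import tempfile

def read_config_tree(path):
    """Directories become nested dicts; node.val files become values."""
    node = {}
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if entry == "node.val":
            with open(full) as f:
                node["__value__"] = f.read().strip()
        elif os.path.isdir(full):
            node[entry] = read_config_tree(full)
    return node

# Recreate .../active/system/time-zone/node.val with a sample value.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "system", "time-zone"))
with open(os.path.join(root, "system", "time-zone", "node.val"), "w") as f:
    f.write("UTC\n")

print(read_config_tree(root))  # {'system': {'time-zone': {'__value__': 'UTC'}}}
```

Every config level is a real directory and every value a real file, which is why permissions, ownership, and per-session union mounts all become the backend's problem.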

Design problem #4: rollback mechanism

Due to certain details (mostly the handling of default values), and the way config scripts work too, rollback cannot be done without a reboot. The same issue once made Vyatta developers revert the activate/deactivate feature.

It makes confirmed commit a lot less useful than it should be, especially in telecom where routers cannot be rebooted at random even in maintenance windows.

Implementation problem #1: untestable logic

We already discussed it a bit. The logic for reading the config, validating it, and generating application configs is mixed in most of the scripts. It may not look like a big deal, but for the maintainers and contributors it is. It's also amplified by the fact that there is no way to create and manipulate configs separately; the only way you can test anything is to build a complete image, boot it, and painstakingly test everything by hand, or have an expect-like tool emulate testing it by hand.

You never know if your changes will actually work until you get them to a live system. This allows syntax errors in command definitions and compilation errors in scripts to make it into builds, and they made it into releases more than once, when the problem wasn't immediately apparent and only appeared with certain combinations of options.

This can be improved a lot by testing components in isolation, but that requires the code to be written in an appropriate way. If you write a calculator and start with add(), sub(), mul() etc. functions, and use them in a GUI form, you can test the logic on its own automatically, e.g. does add(2,3) equal 5, does mul(9, 0) equal 0, does sqrt(-3) raise an exception, and so on. But if you embed that logic in button event handlers, you are out of luck. That's how VyOS is for the most part: even if you mock the config subsystem so that config read functions return test data, you still need to redo each script so that every function does exactly one thing, testable in isolation.
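The calculator analogy spelled out as code, using exactly the checks mentioned in the text: pure functions can be verified in isolation, with no GUI and no live system involved.

```python
# Pure, testable functions: nothing here depends on a GUI or a booted image.
import math

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

def sqrt(x):
    if x < 0:
        raise ValueError("square root of a negative number")
    return math.sqrt(x)

# The checks from the text, runnable automatically:
assert add(2, 3) == 5
assert mul(9, 0) == 0
try:
    sqrt(-3)
except ValueError:
    pass  # raised, as expected
else:
    raise AssertionError("sqrt(-3) should raise")
```

If the same arithmetic lived inside button event handlers, none of these checks could run without driving the whole interface, which is the situation most VyOS scripts are in today.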

This is one of the reasons 1.2.0 is taking so long: without tests, or even the ability to add them, we don't know what's not working until we stumble upon it in manual testing.

Implementation problem #2: command definitions

This is a design problem too, but it's not so fundamental. Right now we use a custom syntax for command definitions (aka "templates"), which have tags such as help: or type: and embedded shell scripts. There are multiple problems with it. For example, it's not easy to automatically generate even a command reference from them, and you need a complete live system for that, since part of the templates is autogenerated. The other issue is that some components make very extensive use of embedded shell, and some things are implemented entirely in embedded shell scripts inside templates, which makes testing even harder than it already is.

We could talk about the upgrade mechanism too, but I guess I'll leave it for another post. Right now I'd like to talk about proposed solutions, what's being done already, and what kind of work you can join.

07 August, 2017 11:55AM by Daniil Baturin


Ubuntu developers

Ubuntu Insights: JAAS & Juju update: Juju GUI 2.8.0

JAAS has seen 2 minor and 6 patch updates in the last nine weeks. Let’s talk about what’s new as of the 2.8.0 release.

Direct Deploy

Direct Deploy gets your solutions deployed easier and faster. The feature allows you to create Juju cards which will add the specified bundle or charm to a new model and then open directly into the deployment flow. At that point users simply need to complete the deployment flow, and they will have a deployed solution without having to manually add or modify the model pre-deploy. To see Direct Deploy in action, follow this link.


To create a card simply visit and enter the charm or bundle id that you’d like to embed. A card will be generated for you which you can embed in your own website or blog.


Check out the release video of Direct Deploy in action and check back for a follow-up post with details about how you can create your own Direct Deploy card.

Import & Add SSH Keys

Users often need the ability to SSH into one or more of the Juju provisioned machines to access generated content, logs, or results of Juju actions. In the deployment flow we now allow you to import public SSH keys directly from GitHub or add them manually. These keys will then be propagated to all of the machines in your model making sure you can ssh directly to them.
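The same operations are also available from the Juju CLI; assuming a bootstrapped controller and a selected model, something like the following should work (the GitHub username is a placeholder):

```shell
# Import the public SSH keys a GitHub user has published (gh: prefix),
# or add a key manually; keys are propagated to all machines in the model.
juju import-ssh-key gh:example-user
juju add-ssh-key "$(cat ~/.ssh/id_rsa.pub)"

# List the keys currently configured for the model.
juju ssh-keys
```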



Define and Modify Machine Constraints

One of the benefits of cloud computing is your ability to tailor the machine hardware to your needs. With the updates to the Machine View, you can now specify a number of constraints for the machines that you’d like Juju to request from your cloud provider, such as 16GB of RAM or 4 CPU cores. These constraints are editable pre-deploy in the machine view and shown in the Deployment Flow.
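The CLI equivalent is to pass constraints explicitly when deploying; for instance (the application name is just an example):

```shell
# Request machines with at least 16 GiB of RAM and 4 cores for this application.
juju deploy mysql --constraints "mem=16G cores=4"

# Or set defaults for every new machine in the model.
juju set-model-constraints mem=16G cores=4
```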

Screen Shot 2017-08-01 at 1.28.26 PM

Model Changes List

After modeling a complex environment you’ll often want to review the actions that Juju will perform before hitting the deploy button. In the Deployment Flow we’ve now merged the two change log lists into a single list which groups the changes by application and splits the machine information out into its own list, making it much easier for you to review.

Screen Shot 2017-08-01 at 1.29.26 PM

Users on JAAS are already running the latest GUI and have these features available today!

If you’ve bootstrapped your own controller you can upgrade to the latest Juju GUI with:

    juju upgrade-gui

We welcome any feedback you may have. You can chat with us in #juju on or you can file issues at:


07 August, 2017 10:00AM

David Tomaschik: Hacker Summer Camp 2017: Lessons Learned

In addition to taking stock of how things went at Hacker Summer Camp, I think it’s important to examine the lessons learned from the event. Some of these lessons will be introspective and reflect on myself and my career, but I think it’s important to share these to encourage others to also reflect on what they want and where they’re going.


It’s still incredibly important to me to be doing hands-on technical work. I do a lot of other things, and they may have significant impact, but I can’t imagine taking a purely leadership/organizational role. I wouldn’t be happy, and unhappy people are not productive people. Finding vulnerabilities, doing technical research, building tools, are all areas that make me excited to be in this field and to continue to be in this field.

I saw so many highly-technical projects presented and demoed, and these were all the ones that made me excited to still be in this field. The IoT village, in particular, showed a rapidly-evolving highly technical area of security with many challenges left to be solved:

  • How do you configure devices that lack a user interface?
  • How do you update devices that users expect to run 24/7?
  • How do you build security into a device that users expect to be dirt cheap?
  • What are the tradeoffs between Bluetooth, WiFi, 802.15.4, and other radio techs?

Between these questions and my love of playing with hardware (my CS concentration was in embedded systems), it’s obvious why I’ve at least slightly gravitated towards IoT/embedded security.

This brings me to my next insight: I’m still very much a generalist. I’ve always felt that being a generalist has hamstrung me from working on cool things, but I’m beginning to think the only thing hamstringing me is me. Now I just need to get over the notion that 0x20 is too old of an age for cool security/vulnerability research. I’m focusing on IoT and I’ve managed to exclude certain areas of security in the interests of time management: for as fascinating as DFIR is, I’m not actively pursuing anything in that space because it turns out time is a finite quantity and spreading it too thin means getting nowhere with anything.


Outwardly, I’m happy that BSidesLV and DEF CON both appear to have had an increasingly diverse attendance, though I have no idea how accurate the numbers are given their methodology. (To be fair, I’m super happy someone is trying to even to figure this out in the chaos that is hacker summer camp.) The industry, and the conferences, may never hit a 50/50 gender split, but I think that’s okay if we can get to a point where we build an inclusive meritocracy of an environment. Ensuring that women, LGBTQ, and minorities who want to get into this industry can do so and feel included when they do is critical to our success. I’m a firm believer that the best security professionals draw from their life background when designing solutions, and having a diverse set of life backgrounds ensures a diverse set of solutions. Different experiences and different viewpoints avoids groupthink, so I’m very hopeful to see those numbers continue to rise each year.

I have zero data to back this up, but observationally, it seemed that more attendees brought their kids with them to hacker summer camp. I love this: inspiring the next generation of hackers, showing them that technology can be used to do cool things, and that it’s never too early to start learning about it will benefit both them (excel in the workforce, even if they take the hacker mindset to another industry) and society (more creative/critical thinkers, better understanding of future tech, and hopefully keeping them on the white hat side). I don’t know how much of this is a sign of the maturing industry (more hackers have kids now), more parents feel that it’s important to expose their kids to this community, or maybe just a result of the different layout of Caesar’s, leading to bad observations.


There were a few things from my packing list this year that turned out to be really useful. I’m going to try to do an updated planning post pair (e.g., one far out and one shortly before con) for next year, but there’s a few things I really thought were useful and so I’ll highlight them here.

  • An evaporative cooling towel really helps with the Vegas heat. It’s super lightweight and takes virtually no space. Dry, it’s useful as a normal towel, but if you wet it slightly, the evaporating water actually cools off the towel (and you). Awesome for 108-degree weather.
  • An aluminum water bottle would’ve been nice. Again, fight the dehydration. In the con space, there’s lots of water dispensers with at least filtered water (Vegas tap water is terrible) plus the SIGG bottles are nice because you can use a carabiner to strap it to your bag. I like the aluminum better than a polycarbonate (aka Nalgene) because it won’t crack no matter how you abuse it. (Ok, maybe it’s possible to crack aluminum, but this isn’t the Hydraulic Press Channel.)
  • RFID sleeves. I mentioned these before. Yes, my room key was based on some RFID/proximity technology. Yes, a proxmark can clone it. Yes, I wanted to avoid that happening without my knowing.

For some reason, I didn’t get a chance to break out a lot of the hacking gear I brought with me, but I’ll probably continue to bring it to cons “just in case”. I’m usually checking a bag anyway, so a few pounds of gear is a better option than regretting it if I want to do something.


That concludes my Hacker Summer Camp blog series for this year. I hope it’s been useful, entertaining, or both. Agree with something I said? Disagree? Hit me up on Twitter or find me via other means of communications. :)

07 August, 2017 07:00AM

Jono Bacon: Joining the Advisory Board

I have previously posted pieces about an Austin-based startup focused on providing a powerful platform for data preparation, analysis, and collaboration. They were previously a client, where I helped to shape their community strategy, and I have maintained a close relationship with them ever since.

I am delighted to share that I have accepted an offer to join their Advisory Board. As with most advisory boards, this will be a part-time role where I will provide guidance and support to the organization as they grow.

Why I Joined

Without wishing to sound terribly egotistical, I often get offers to participate in an advisory capacity with various organizations. I am typically loath to commit too much as I am already rather busy, but I wanted to make an exception in this case.

Why? There are a few reasons.

Firstly, the team are focusing on a really important problem. As our world becomes increasingly connected, we are generating more and more data. Sadly, much of this data lives in different places, is difficult to consume, and is disconnected from other data sets. The platform provides a place where data can be stored, sanitized/prepped, queried, and collaborated around. In fact, I believe that collaboration is the secret sauce: when we combine a huge variety of data sets, a consistent platform for querying, and a community with the ingenuity and creative flair to query that data, we have a powerful enabler for data discovery.

There is a particularly pertinent opportunity here. Buried inside individual data sets there are opportunities to make new discoveries, find new patterns/correlations, and use data as a means to make better decisions. When you are able to combine data sets, the potential for discovery exponentially grows, whether you are a professional researcher or an armchair enthusiast.

This is why the community is so important. In the same way GitHub provided a consistent platform for millions of developers, both professionals and hobbyists, to create, fork, share, and collaborate around code, this platform has the same potential for data.

…and this is why I am excited to be a part of the Advisory Board. Stay tuned for more!

The post Joining the Advisory Board appeared first on Jono Bacon.

07 August, 2017 04:28AM

hackergotchi for Wazo


Sprint Review 17.11

Hello Wazo community! Here comes the release of Wazo 17.11!

New features in this sprint

REST API: We have added a new REST API to add outgoing webhooks. The REST API is not yet available in the web interface, but it will come in time. Outgoing webhooks allow Wazo to notify other applications about events that happen on the telephony server, e.g. when a call arrives, when it is answered, hung up, when a new contact is added, etc.

Plugins: Wazo plugins can now depend on each other. This allows us to create metaplugins that install multiple plugins at once. It also allows different plugins to rely on the same base plugin without having to handle its installation manually.

Technical features

Asterisk: Asterisk was updated from 14.5.0 to 14.6.0

Chat: We have integrated a new piece of software into Wazo, namely MongooseIM. We will progressively insert MongooseIM into the heart of the Wazo chat, so that we can benefit from its features: chat history, chat rooms, mobile push notifications, and maybe XMPP connectivity...

Ongoing features

Plugin management: There is still a lot to be done on the plugin management service, e.g. upgrade, HA, ...

Webhooks: We are adding a new way of interconnecting Wazo with other software: webhooks. Outgoing webhooks allow Wazo to notify other applications about events that happen on the telephony server, e.g. when a call arrives, when it is answered, hung up, when a new contact is added, etc. Incoming webhooks also allow Wazo to be notified of events happening on other applications, e.g. a new customer was added in your CRM, a new employee was added in your directory, etc. Unfortunately, there is no magic and the application in question must be able to send or receive webhooks so that Wazo can talk with it. See also this blog post (sorry, it's in French) about Wazo and webhooks.
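As a generic illustration of the receiving side, here is a minimal sketch of dispatching incoming webhook payloads to handlers. The event names and JSON payload shape below are hypothetical, invented for the example, and are not Wazo's actual schema:

```python
import json

# Hypothetical handlers keyed by event name; both the names and the
# payload shape are illustrative, not Wazo's real event format.
def on_call_created(data):
    return "ringing: {}".format(data.get("caller"))

def on_call_answered(data):
    return "answered: {}".format(data.get("caller"))

HANDLERS = {
    "call_created": on_call_created,
    "call_answered": on_call_answered,
}

def dispatch(raw_body):
    """Parse a webhook POST body and route it to the matching handler."""
    event = json.loads(raw_body)
    handler = HANDLERS.get(event.get("name"))
    if handler is None:
        return "ignored: {}".format(event.get("name"))
    return handler(event.get("data", {}))
```

For example, `dispatch('{"name": "call_created", "data": {"caller": "1001"}}')` returns `"ringing: 1001"`, while an unknown event name is simply ignored.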

The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!


07 August, 2017 04:00AM by The Wazo Authors

August 05, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Paul White: A quick look at the decline of Ubuntu Membership

An Ubuntu Membership is best described as recognition of significant and sustained contribution to Ubuntu or the Ubuntu community. Back in January 2015, when I was successful in being granted an Ubuntu Membership, there were, according to Launchpad, around 750 Ubuntu Members. As I write this, just over two and a half years later, the number has unfortunately fallen to 706.

With a little time to spare on a rainy Saturday afternoon here in the UK, I thought I would take a quick look at that Launchpad group by copying the membership information into a spreadsheet. I sorted the entries by joining date and then grouped them by year. To keep things simple I only included those members who had secured their membership directly through the Ubuntu Membership Boards, and ignored those who had applied through other means such as the Ubuntu Forums, the Kubuntu Council or the IRC Council. I was left with the 452 members I was most interested in looking at: the "general users" of Ubuntu.

The results were pretty much as I expected and the breakdown for each year that an Ubuntu Membership has been available is as follows:

2005       26
2006       40
2007       54
2008       38
2009       70
2010       48
2011       44
2012       36
2013       30
2014       23
2015       22
2016       16
2017        5

Obviously there have been many members who made successful applications but for various reasons have let their membership expire. A quick look at the table above shows that over half of today's Ubuntu Members secured their membership between 2007 and 2011, but since then there has been a steady decline in memberships being granted and retained. I'll leave it to others to offer their explanations as to why an Ubuntu Membership may not be something that many of our users wish to work towards, but membership applications are definitely no longer a regular occurrence.

Most applications are successful, as applicants tend to apply only when they are sure that they have met the necessary criteria for membership. Interestingly, there have been only five successful applications so far this year.
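The arithmetic behind the "over half" claim is easy to verify; here is the table above as a quick Python sanity check:

```python
# Year-by-year counts copied directly from the table in this post.
counts = {
    2005: 26, 2006: 40, 2007: 54, 2008: 38, 2009: 70,
    2010: 48, 2011: 44, 2012: 36, 2013: 30, 2014: 23,
    2015: 22, 2016: 16, 2017: 5,
}

total = sum(counts.values())                      # 452 members in all
peak = sum(counts[y] for y in range(2007, 2012))  # 254 joined 2007-2011

print(total, peak, round(peak / total, 2))  # 452 254 0.56
```

So 254 of the 452 members, about 56%, joined in the 2007-2011 window.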

05 August, 2017 04:12PM by Paul White

David Tomaschik: Hacker Summer Camp 2017: DEF CON

DEF CON, of course, is the main event of Hacker Summer Camp for me. It’s the largest gathering of hackers in the world, and it’s the only opportunity I get to see some of the people I know in the industry. It’s also the most hands-on of all of the conferences I’ve ever attended, and the people running the villages clearly know their stuff and are super passionate about their area. Nowhere do I see so much raw talent and excitement for the hacker spirit as at DEF CON.

This year was the first year at Caesar’s Palace and quite frankly, it showed. Traffic control reminded me of the first year at Bally’s/Paris: as best as they could do without any data, but still far from optimal. Additionally, Dark Tangent pointed out that they were expecting 6% growth, but ended up closer to 20%. That’s thousands extra. The rule that they do not sell out and everyone gets through the door is not without its downsides.

Overall, this year was incredible for me personally. Though I attended no main track talks, I made it to a couple of Sky Talks and some village talks, as well as a bunch of village activities. I met a bunch of interesting people who are working on interesting technical things, which is great because it reminds me why I got into this industry in the first place and what I want to be doing in the future.

The IoT village was excellent, but I wish I had gotten to it earlier to participate in the IoT CTF – it looked like a lot of fun, and their physical target range wasn’t something you see everyday. They had everything from cheap bluetooth devices to the Google Home and Amazon Alexa, and I believe this is a reflection of where we’ll see the future growth in security – the IoT isn’t a passing fad, and we’ll have millions of low-cost devices deployed and not properly managed. There’s no time like the present to get security to the front and center of the IoT device design process.

In previous years, I’d always played in the Capture the Packet contest. This year I opted out, despite having a bye in the first round, because there was so much going on and because it had consumed too much of my time at DEF CON 24. I don’t regret this decision, but it is something I missed slightly. In fact, it ended up that I never even set foot in the packet capture village! (I guess that’s what happens to villages at the end of halls?)

The “linecon” joke was never more accurate than this year – there was a line for everything! Not only did every talk have lines, but there were lines to get into the Biohacking Village, the Swag line was long (where was Hacker Stickers with our official unofficial swag?), even the line for Mohawkcon was ridiculous! (Maybe next year I just need to get a mohawk before I go there – it’s not like I don’t donate to the EFF anyway.) I’m sure this is a combination of many factors, including the growth of the community, the new venue, and the fact that it wouldn’t be DEF CON without linecon.

The DEF CON artwork is not something I normally write about, largely because I’m no artist and I barely have an eye for, well, anything, but I really thought the art was excellent this year. I so desperately wanted to rip one of the posters off the wall next to the escalators! (I have hopes one of them might appear in a charity auction at some point, but I didn’t see it at con.)

Caesar’s as a venue was okay – there was noticeably more space, but figuring out how to get between some of the areas was not crystal clear. A lot of that was on me – I should’ve done more recon of the con area. (Look for a “lessons learned” post coming soon.) My hotel room was awesome though, and in the tower right above the con space, so I had that going for me. Fingers crossed to get in the same tower next year.

Dual Core

Dual Core had an outstanding show on the Friday Night lineup. I don’t care what DEF CON calls the headliner, Dual Core is always the headliner for my music tastes. I’ve seen him perform live at least once at every DEF CON and at dozens of other events (Southeast Linux Fest, DerbyCon, etc.), and I just don’t think it would be a full con without seeing him.

Mad props to DT and all the DEF CON Goons and organizers who work so hard to put the event together. No matter how much chaos there may be, I’ve had a great time every year, and I wouldn’t miss it for the world. That’s just a part of the World’s Biggest Hacker Convention.

05 August, 2017 07:00AM

hackergotchi for Ubuntu


Ubuntu 16.04.3 LTS released

The Ubuntu team is pleased to announce the release of Ubuntu 16.04.3 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

Like previous LTS series’, 16.04.3 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures except for 32-bit powerpc, and is installed by default when using one of the desktop images. Ubuntu Server defaults to installing the GA kernel; however, you may select the HWE kernel from the installer bootloader.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.3 LTS, Xubuntu 16.04.3 LTS, Mythbuntu 16.04.3 LTS, Ubuntu GNOME 16.04.3 LTS, Lubuntu 16.04.3 LTS, Ubuntu Kylin 16.04.3 LTS, Ubuntu MATE 16.04.3 LTS and Ubuntu Studio 16.04.3 LTS are also now available. More details can be found in their individual release notes:

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Base, and Ubuntu Kylin. All the remaining flavours will be supported for 3 years.

To get Ubuntu 16.04.3

In order to download Ubuntu 16.04.3, visit:

Users of Ubuntu 14.04 will be offered an automatic upgrade to 16.04.3 via Update Manager. For further information about upgrading, see:

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 16.04.3 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

More Information

You can learn more about Ubuntu and about this release on our website listed below:

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Thu Aug 3 16:07:03 UTC 2017 by Adam Conrad, on behalf of the Ubuntu Release Team

05 August, 2017 03:46AM by lyz

August 04, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Kügler: Interview on

How KDE’s Open Source community has built reliable, monopoly-free computing for 20+ years
A few days ago, the lovely Alexandra Leslie interviewed me about my work in KDE. This interview has just been published on their site. The resulting article gives an excellent overview of what KDE is and does, and why, along with some insights and personal stories from my own history in the Free Software world.

At the time, Sebastian was only a student and was shocked that his work could have such a huge impact on so many people. That’s when he became dedicated to helping further KDE’s mission to foster a community of experts committed to experimentation and the development of software applications that optimize the way we work, communicate, and interact in the digital space.

“With enough determination, you can really make a difference in the world,” Sebastian said. “The more I realized this, the more I knew KDE was the right place to do it.”

04 August, 2017 08:42PM

Ubuntu Insights: Ubuntu Server Development Summary – 4 Aug 2017

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: git ubuntu clone

The second post in the series about ‘git ubuntu’ was published this week.
Nish Aravamudan discusses git ubuntu clone, explaining what imported repositories are and what they look like.

cloud-init and curtin


  • Scott Moser made two demos of using Ubuntu and Debian cloud images, KVM, and cloud-init together to customize an instance.
  • Fall back to timesyncd configuration if ntp is not installable (LP: #1686485)
  • Fix /etc/resolv.conf comment added on each reboot (LP: #1701420)
  • Fix integration test building local tree


  • Removed CentOS curthooks flags
  • Disabled yum plugins during install

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

apache2, 2.4.27-2ubuntu2, mdeslaur
apache2, 2.4.27-2ubuntu1, nacc
cloud-init, 0.7.9-231-g80bf98b9-0ubuntu1, smoser
curtin, 0.1.0~bzr519-0ubuntu1, smoser
excalibur-logkit, 2.0-11ubuntu3, vorlon
libvirt-python, 3.5.0-1build1, mwhudson
maas, 2.3.0~alpha1-6165-geae082b-0ubuntu1, andreserl
markupsafe, 1.0-1build1, mwhudson
memcached, 1.4.33-1ubuntu3, vorlon
mod-wsgi, 4.5.11-1ubuntu2, jbicha
mysql-5.7, 5.7.19-0ubuntu1, mdeslaur
mysql-5.7, 5.7.18-0ubuntu2, vorlon
openbsd-inetd, 0.20160825-2build1, vorlon
openldap, 2.4.45+dfsg-1ubuntu1, costamagnagianfranco
openssh, 1:7.5p1-5ubuntu1, xnox
pyjunitxml, 0.6-1.2, None
pylibmc, 1.5.2-1build1, mwhudson
python-bcrypt, 3.1.3-1, None
python-sysv-ipc, 0.6.8-2build4, mwhudson
requests, 2.18.1-0ubuntu1, corey.bryant
spice, 0.12.8-2.2, None
tmux, 2.5-3build1, vorlon
ubuntu-advantage-tools, 2, vorlon
unbound, 1.6.4-1build2, vorlon
Total: 24

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

libapache2-mod-auth-pgsql, zesty, 2.0.3-6.1ubuntu0.17.04.1, racb
libapache2-mod-auth-pgsql, trusty, 2.0.3-6ubuntu0.1, racb
libapache2-mod-auth-pgsql, xenial, 2.0.3-6.1ubuntu0.16.04.1, racb
libvirt, xenial, 1.3.1-1ubuntu10.12, paelzer
lxc, trusty, 1.0.10-0ubuntu1.1, mdeslaur
lxc, trusty, 1.0.10-0ubuntu1.1, mdeslaur
ntp, trusty, 1:4.2.6.p5+dfsg-3ubuntu2.14.04.12, racb
ntp, zesty, 1:4.2.8p9+dfsg-2ubuntu1.2, paelzer
ntp, xenial, 1:4.2.8p4+dfsg-3ubuntu5.6, paelzer
numactl, xenial, 2.0.11-1ubuntu1, serge-hallyn
numactl, zesty, 2.0.11-1ubuntu2, xnox
pollinate, trusty, 4.23-0ubuntu1~14.04.2, vorlon
rabbitmq-server, xenial, 3.5.7-1ubuntu0.16.04.2, mdeslaur
rabbitmq-server, trusty, 3.2.4-1ubuntu0.1, mdeslaur
rabbitmq-server, trusty, 3.2.4-1ubuntu0.1, mdeslaur
rabbitmq-server, xenial, 3.5.7-1ubuntu0.16.04.2, mdeslaur
squid3, xenial, 3.5.12-1ubuntu7.4, racb
ubuntu-advantage-tools, xenial, 2, vorlon
ubuntu-advantage-tools, zesty, 2, vorlon
ubuntu-advantage-tools, trusty, 2, vorlon
ubuntu-advantage-tools, trusty, 2, vorlon
Total: 21

Contact the Ubuntu Server team

04 August, 2017 06:40PM