June 23, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S10E16 – Enthusiastic Woozy Route - Ubuntu Podcast

This week Mark goes camping, we interview Michael Hall from Endless Computers, bring you another command line love and go over all your feedback.

It’s Season Ten Episode Sixteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We interview Michael Hall about Endless Computers.

  • We share a Command Line Lurve:

    • nmon – nmon is short for Nigel’s performance Monitor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

23 June, 2017 02:00PM

hackergotchi for Grml developers

Grml developers

grml development blog: New Grml developer: Darshaka Pathirana

We're proud to announce that Darshaka "dpat" Pathirana has just joined the team as an official Grml developer. Welcome to the team, Darshaka!

23 June, 2017 11:15AM by Michael Prokop (nospam@example.com)

June 22, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: ISO Image Writer

ISO Image Writer is a tool I’m working on which writes .iso files onto a USB disk, ready for installing your lovely new operating system.  Surprisingly, many distros don’t have very slick recommendations for how to do this, but they’re all welcome to try this tool.

It’s based on ROSA Image Writer which has served KDE neon and other projects well for some time.  This adds ISO verification to automatically check the digital signatures or checksums; currently supported are KDE neon, Kubuntu and Netrunner.  It also uses KAuth so it doesn’t run the UI as root, only a simple helper binary to do the writing.  And it uses KDE Frameworks goodness so the UI feels nice.

First alpha 0.1 is out now.

Download from https://download.kde.org/unstable/isoimagewriter/

Signed by release manager Jonathan Riddell with 0xEC94D18F7F05997E. Git tags are also signed by the same key.

It’s in KDE Git at kde:isoimagewriter and in bugs.kde.org, please do try it out and report any issues.  If you’d like a distro added to the verification please let me know and/or submit a patch. (The code to do this is a bit verbose currently; it needs tidying up.)

I’d like to work out how to make AppImages, Windows and Mac installs for this but for now it’s in KDE neon developer editions and available as source.

 


22 June, 2017 07:14PM

Colin King: Cyclic latency measurements in stress-ng V0.08.06

The stress-ng logo
The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that cycles around performing high precision sleeps and measures the (extra overhead) latency taken to perform the sleep compared to the expected time.  This loop runs with either the Round-Robin (rr) or First-In-First-Out (fifo) real time scheduling policies.

The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

The first 10,000 latency measurements are used to compute various latency statistics:
  • mean latency (aka the 'average')
  • modal latency (the most 'popular' latency)
  • minimum latency
  • maximum latency
  • standard deviation
  • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
  • latency distribution (enabled with the --cyclic-dist option)
The latency percentiles indicate the latency below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency at or below which 9,900 of the samples lie.

The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.

For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 -t 5
stress-ng: info: [27594] dispatching hogs: 1 cyclic
stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 us
stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 us
stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 us
stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 us
stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 us
stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 us
stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 us
stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 us
stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 us
stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 us intervals):
stress-ng: info: [27595] stress-ng-cyclic: latency (us) frequency
stress-ng: info: [27595] stress-ng-cyclic: 0 0
stress-ng: info: [27595] stress-ng-cyclic: 1000 0
stress-ng: info: [27595] stress-ng-cyclic: 2000 0
stress-ng: info: [27595] stress-ng-cyclic: 3000 82
stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
stress-ng: info: [27595] stress-ng-cyclic: 6000 197
stress-ng: info: [27595] stress-ng-cyclic: 7000 209
stress-ng: info: [27595] stress-ng-cyclic: 8000 100
stress-ng: info: [27595] stress-ng-cyclic: 9000 50
stress-ng: info: [27595] stress-ng-cyclic: 10000 10
stress-ng: info: [27595] stress-ng-cyclic: 11000 9
stress-ng: info: [27595] stress-ng-cyclic: 12000 2
stress-ng: info: [27595] stress-ng-cyclic: 13000 2
stress-ng: info: [27595] stress-ng-cyclic: 14000 1
stress-ng: info: [27595] stress-ng-cyclic: 15000 9
stress-ng: info: [27595] stress-ng-cyclic: 16000 1
stress-ng: info: [27595] stress-ng-cyclic: 17000 1
stress-ng: info: [27595] stress-ng-cyclic: 18000 0
stress-ng: info: [27595] stress-ng-cyclic: 19000 0
stress-ng: info: [27595] stress-ng-cyclic: 20000 0
stress-ng: info: [27595] stress-ng-cyclic: 21000 1
stress-ng: info: [27595] stress-ng-cyclic: 22000 1
stress-ng: info: [27595] stress-ng-cyclic: 23000 0
stress-ng: info: [27595] stress-ng-cyclic: 24000 1
stress-ng: info: [27595] stress-ng-cyclic: 25000 2
stress-ng: info: [27595] stress-ng-cyclic: 26000 0
stress-ng: info: [27595] stress-ng-cyclic: 27000 1
stress-ng: info: [27595] stress-ng-cyclic: 28000 1
stress-ng: info: [27595] stress-ng-cyclic: 29000 2
stress-ng: info: [27595] stress-ng-cyclic: 30000 0
stress-ng: info: [27595] stress-ng-cyclic: 31000 0
stress-ng: info: [27595] stress-ng-cyclic: 32000 0
stress-ng: info: [27595] stress-ng-cyclic: 33000 0
stress-ng: info: [27595] stress-ng-cyclic: 34000 0
stress-ng: info: [27595] stress-ng-cyclic: 35000 0
stress-ng: info: [27595] stress-ng-cyclic: 36000 1
stress-ng: info: [27595] stress-ng-cyclic: 37000 0
stress-ng: info: [27595] stress-ng-cyclic: 38000 0
stress-ng: info: [27595] stress-ng-cyclic: 39000 0
stress-ng: info: [27595] stress-ng-cyclic: 40000 0
stress-ng: info: [27595] stress-ng-cyclic: 41000 0
stress-ng: info: [27595] stress-ng-cyclic: 42000 0
stress-ng: info: [27595] stress-ng-cyclic: 43000 0
stress-ng: info: [27595] stress-ng-cyclic: 44000 1
stress-ng: info: [27594] successful run completed in 5.00s


Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

The above example uses the following options:

  • --cyclic 1
    • starts one instance of the cyclic measurements (1 is always recommended)
  • --cyclic-policy fifo 
    • use the real time First-In-First-Out scheduling for the cyclic measurements
  • --cyclic-prio 100 
    • use the maximum scheduling priority  
  • --cyclic-method clock_ns
    • use the clock_nanosleep(2) system call to perform the high precision duration sleep
  • --cyclic-sleep 20000 
    • sleep for 20000 nanoseconds per cyclic iteration
  • --cyclic-dist 1000 
    • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
  • -t 5
    • run for just 5 seconds
From the run above, we can see that 99.5% of latencies were less than 9821 nanoseconds and most clustered around the 4880 nanosecond modal point. The distribution data shows that there is some clustering around the 5000 nanosecond point and that the samples tail off with a bit of a long tail.

Now for the interesting part. Since stress-ng is packed with many different stressors we can run these while performing the cyclic measurements, for example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 \
--class vm --all 1 -t 60s

The above invokes all the stressors in the vm class at the same time (with just one instance of each stressor) for 60 seconds.

The --cyclic-method option specifies the sleep method used on each of the 10,000 cyclic iterations.  The default (and recommended) method is clock_ns, using the high precision delay.  The available cyclic delay methods are:
  • clock_ns (use the clock_nanosleep() sleep)
  • posix_ns (use the POSIX nanosleep() sleep)
  • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
  • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
All the delay mechanisms use the CLOCK_REALTIME system clock for timing.
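
To make the mechanism concrete, here is a minimal hypothetical sketch of a clock_ns style cyclic loop (my own illustration, not the actual stress-ng source): it sleeps with clock_nanosleep() on CLOCK_REALTIME under SCHED_FIFO and accumulates the extra latency over the requested delay.

/* cyclic.c: minimal sketch of a cyclic latency loop, in the spirit of
 * the stress-ng clock_ns method (not the actual stress-ng code).
 * Build: gcc -O2 cyclic.c -o cyclic    Run: sudo ./cyclic */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <sched.h>

#define SAMPLES  10000
#define SLEEP_NS 20000

static int64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_REALTIME, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    struct sched_param param = { .sched_priority = 99 };
    int64_t total = 0;
    int i;

    /* real time FIFO scheduling, hence the need for sudo */
    if (sched_setscheduler(0, SCHED_FIFO, &param) < 0)
        perror("sched_setscheduler");

    for (i = 0; i < SAMPLES; i++) {
        struct timespec delay = { 0, SLEEP_NS };
        int64_t t1 = now_ns(), t2;

        clock_nanosleep(CLOCK_REALTIME, 0, &delay, NULL);
        t2 = now_ns();
        /* latency is the extra time taken over the requested sleep */
        total += (t2 - t1) - SLEEP_NS;
    }
    printf("mean latency: %.2f ns\n", (double)total / SAMPLES);
    return 0;
}

Like stress-ng itself, this sketch needs to run as root for sched_setscheduler() to succeed with a real time policy.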

I hope this is plenty of cyclic measurement functionality to get some useful latency benchmarks against various kernel components when using some or a mix of the stress-ng stressors.  Let me know if I am missing some other cyclic measurement options and I can see if I can add them in.

Keep stressing and measuring those systems!

22 June, 2017 06:45PM by Colin Ian King (noreply@blogger.com)

Dustin Kirkland: My Meetup Slides: Deploy and Manage Kubernetes Clusters on Ubuntu in the Oracle Cloud

Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:


If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!

I'm pleased to share my slide deck here.

Cheers,
Dustin

22 June, 2017 03:20PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: Certified Ubuntu Cloud Guest – The best of Ubuntu on the best clouds

eBook Certified Ubuntu Cloud Guest

Ubuntu has a long history in the cloud. It is the number one guest operating system on AWS, Azure and Google Cloud Platform. In fact, there are more Ubuntu images running in the public cloud than all other operating systems combined.

Ubuntu is a free operating system, which means anyone can download an image whenever they want. So why should cloud providers offer certified Ubuntu images to their customers?

This eBook explains why certified Ubuntu images are essential for organisations and individuals that require the highest level of security and reliability.

Download this eBook to learn:

  • How cloud providers differentiate themselves from their competitors by offering customers certified Ubuntu images
  • How to make sure your cloud provider is using certified Ubuntu images

Submit your details to download the eBook:

 

22 June, 2017 02:03PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky 4.6 STB

There are new live/install iso images of SparkyLinux 4.6-STB “Tyche” available to download.

This is the first Sparky edition based on Debian stable line 9 codename “Stretch”.

Sparky “Home” edition provides a fully featured operating system with two lightweight desktops: LXDE and Xfce.

Sparky MinimalGUI and MinimalCLI let you install the base system with a minimal set of applications and a desktop of your choice, via the Sparky Advanced Installer.

Changes between version 4.5 and 4.6-STB:
– full system upgrade from Debian 9 stable repos as of June 19, 2017
– Linux kernel 4.9.30 as default (4.10.x and 4.11.x available in Sparky ‘unstable’ repo)
– added new repo (not active): wine-staging.com
– deep cleaning from old packages and files of older releases
– email client Icedove replaced by Thunderbird
– changed http to https protocol of all Sparky services, including repository; updating the ‘sparky-apt’ package fixes it automatically
– new theme “Sparky5” which fixes look of gtk+ based applications
– added two new live system boot options:
1. toram – lets you load the whole live system into RAM (if you have enough);
2. text mode – if there is any problem with the normal or failsafe boot, this option runs Sparky in text mode and lets you install it using the advanced installer
– new tool for checking and displaying notification on your desktop about available updates
– Calamares 3.1 as default installer

The Sparky edition based on the Openbox window manager (MinimalGUI) has gained 3 key shortcuts (see the configuration sketch after this list):
– Super+t (terminal) -> terminal emulator
– Super+r (run) -> gexec
– Super+q (quit) -> logout window
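
For reference, Openbox shortcuts like these live in the <keyboard> section of ~/.config/openbox/rc.xml. A minimal sketch of the Super+t binding follows; the command shown is an assumption for illustration, not necessarily what Sparky ships:

<keybind key="W-t">  <!-- W- is the Super/Windows modifier in Openbox -->
  <action name="Execute">
    <!-- hypothetical command; Sparky's actual choice may differ -->
    <command>x-terminal-emulator</command>
  </action>
</keybind>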

You can transform your existing installation of Sparky 4.x based on Debian testing “Stretch” to one based on Debian stable “Stretch”; see how to: switch-sparky-testing-to-stable

To make a fresh installation of Sparky 4.6 based on Debian stable “Stretch”, use the new ISO images with “STB” in the name: download/stable

If you have and prefer Sparky based on the Debian “testing” line – simply keep it up to date.

Donate to Sparky or buy an entry in our Web Dir (for only 5 Euros) to help keep it alive.

 

22 June, 2017 10:10AM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

James Page: Ubuntu OpenStack Pike Milestone 2

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Pike b2 milestone in Ubuntu 17.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

sudo add-apt-repository cloud-archive:pike
sudo apt update

The Ubuntu Cloud Archive for Pike includes updates for Barbican, Ceilometer, Cinder, Congress, Designate, Glance, Heat, Horizon, Ironic, Keystone, Manila, Murano, Neutron, Neutron FWaaS, Neutron LBaaS, Neutron VPNaaS, Neutron Dynamic Routing, Networking OVN, Networking ODL, Networking BGPVPN, Networking Bagpipe, Networking SFC, Nova, Sahara, Senlin, Trove, Swift, Mistral, Zaqar, Watcher, Rally and Tempest.

We’ve also now included GlusterFS 3.10.3 in the Ubuntu Cloud Archive in order to provide new stable releases back to Ubuntu 16.04 LTS users in the context of OpenStack.

You can see the full list of packages and versions here.

Ubuntu 17.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are maintaining continuously integrated packages in the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Bear in mind these are built per commit (we check for new commits every 30 minutes at the moment), so YMMV from time to time.

Reporting bugs

Any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Still to come…

In terms of general expectation for the OpenStack Pike release in August we’ll be aiming to include Ceph Luminous (the next stable Ceph release) and Open vSwitch 2.8.0 so long as the release schedule timing between projects works out OK.

And finally – if you’re interested in the general stats – Pike b2 involved 77 package uploads, including 4 new packages for new Python module dependencies!

Thanks and have fun!

James


22 June, 2017 10:00AM

LiMux

GovJam 2017 – Design von Verwaltungs-Services

The Global GovJam 2017 took place on May 17 and 18. In 25 cities around the world, interested people spent 48 hours exploring new approaches to citizen-centred innovation. The goal was to improve public services. GovJam … Read more

The post GovJam 2017 – Design von Verwaltungs-Services first appeared on the Münchner IT-Blog.

22 June, 2017 07:47AM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Meerkat: The state of IMEs under Linux

Input Method Editors, or IMEs for short, are ways for a user to input text in another, more complex character set using a standard keyboard, commonly used for Chinese, Japanese, and Korean languages (CJK for short). So in order to type anything in Chinese, Japanese, or Korean, you must have a working IME for that language.

Quite obviously, especially considering the massive userbase of these languages, it’s crucial for IMEs to be quick and easy to set up, and to work in any program you decide to use.

The reality is quite far from this. While many problems exist with IMEs under Linux, the largest one, I believe, is that there is no (good) standard for communicating with programs.

IMEs all have to implement a number of different interfaces, the 3 most common being XIM, GTK (2 and 3), and Qt (3, 4, and 5).

XIM is the closest we have to a standard interface, but it’s not very powerful: the pre-editing string doesn’t always work properly, it isn’t extensible to more advanced features, it doesn’t work well under many window systems (in those I’ve tested, the pre-edit text will always appear at the bottom of the window instead of beside the text), and it has a number of other shortcomings that I have heard exist but am not personally aware of (due to not being one who uses IMEs very often).

GTK and Qt interfaces are much more powerful, and work properly, but, as might be obvious, they only work with GTK and Qt. Any program using another widget toolkit (such as FLTK, or custom widget toolkits, which are especially prevalent in games) needs to fall back to the lesser XIM interface. Going around this is theoretically possible, but very difficult in practice, and requires GTK or Qt installed anyways.

IMEs also need to provide libraries for every version of GTK and Qt as well. If an IME is not updated to support the latest version, you won’t be able to use the IME in applications using the latest version of GTK or Qt.

This, of course, adds quite a large amount of work to IME developers, and causes quite a problem with IME users, where a user will no longer be able to use an IME they prefer, simply because it has not been updated to support programs using a newer version of the toolkit.

I believe these issues make it very difficult for the Linux ecosystem to advance as a truly internationalized environment. First, they limit application developers that truly wish to honor international users to two GUI toolkits, GTK and Qt. Secondly, they force IME developers to constantly update their IMEs to support newer versions of GTK and Qt, which requires a large amount of effort and duplicated code, and as a result can lead to many bugs (and abandonment).

 

I believe fixing this issue would require a unified API that is toolkit agnostic. There are two obvious approaches that come to mind.

  1. A library that an IME would provide that every GUI application would include
  2. A client/server model, where the IME is a server, and the clients are the applications

Option #1 would be the easiest and least painful to implement for IME developers, and I believe it is actually the way GTK and Qt IMEs work. But there are also problems with this approach: if the IME crashes, the entire host application will crash as well, and only one IME could be installed at a time (since every IME would need to provide the same library). The latter is not necessarily a big issue for most users, but on multi-user desktops it can be a big issue.

Option #2 would require more work from IME developers, juggling client connections and the like (although this could be abstracted with a library, similar to Wayland’s architecture). However, it would also mean a separate address space (so if the IME crashes, nothing else would crash as a direct result), the possibility of more than one IME being installed and used at once, and even the possibility of hot-swapping IMEs at runtime.

The problem with both of these options is the lack of standardization. While they can adhere to a standard for communicating with programs, configuration, dealing with certain common problems, etc. are all left to the IME developers. This is the exact problem we see with Wayland compositors.

However, there’s also a third option: combining the best of both worlds from the options provided above. This would mean having a standard server that loads a library providing the IME-related functions. If there are ever any major protocol changes, common issues, or anything of the like, the server can be updated while the IMEs are left intact. The library it loads would be, of course, entirely configurable by the user, and the server could also host a number of common options for IMEs (and perhaps a format for configuring IME-specific options), so if a user decides to switch IMEs, they wouldn’t need to completely redo their configuration.

Of course, the server would also be able to provide clients for XIM and GTK/Qt-based frontends, for programs that don’t use the protocol directly.
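
As a rough illustration of that combined design, here is a hypothetical C sketch of such a server loading a user-configured IME library via dlopen(); every name in it (ime_ops, ime_entry, the library path) is invented for illustration and not part of any existing IME project:

/* ime-server.c: sketch only.  Build: gcc ime-server.c -o ime-server -ldl */
#include <dlfcn.h>
#include <stdio.h>

struct ime_ops {
    int (*init)(void);
    /* feed one key event; returns committed UTF-8 text, or NULL */
    const char *(*filter_key)(unsigned keysym, unsigned modifiers);
    void (*shutdown)(void);
};

int main(void)
{
    /* the library path would come from per-user configuration */
    void *handle = dlopen("/usr/lib/ime/libexample-ime.so", RTLD_NOW);
    struct ime_ops *(*entry)(void);
    struct ime_ops *ops;

    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    entry = (struct ime_ops *(*)(void))dlsym(handle, "ime_entry");
    if (!entry) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return 1;
    }
    ops = entry();
    ops->init();
    /* ... event loop: accept client connections, pass key events to
     * ops->filter_key(), send committed text back to the clients ... */
    ops->shutdown();
    dlclose(handle);
    return 0;
}

The point of the indirection is that the server owns the protocol and the event loop, while the loaded library implements only the language-specific conversion logic.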

Since I’m not very familiar with IMEs, I haven’t yet started a project implementing this idea, since there may be challenges about a method like this that might have already been discussed, but that I’m not aware of.

This is why I’m writing this post, to hopefully bring up a discussion about how we can improve the state of IMEs under Linux :) I would be very willing to work with people to properly design and implement a better solution for the problem at hand.


22 June, 2017 07:08AM

June 21, 2017

hackergotchi for VyOS

VyOS

VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on dev.packages.vyos.net

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. Additionally, it should tell us whether we are prepared to restore VyOS from its source should anything happen to the packages.vyos.net server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

21 June, 2017 07:53PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Kernel Team Summary: June 22, 2017

This newsletter is to provide a status update from the Ubuntu Kernel Team. There will also be highlights provided for any interesting subjects the team may be working on.

If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: kernel-team@lists.ubuntu.com

Highlights

  • FWTS 17.06.00 released: https://wiki.ubuntu.com/FirmwareTestSuite/ReleaseNotes/17.06.00
  • Released stress-ng 0.08.05, with a new Real Time cyclic stressor and a Real Time scheduling softlockup stressor.
  • Prepare 4.4.73 (Xenial)
  • Update artful/4.11 to v4.11.6
  • The embargo for CVE-2017-1000364 [1] has expired and the fix was
    released for the following packages in the updates and security pockets:
    • Trusty
      – linux 3.13.0-121.170
      – linux-lts-xenial 4.4.0-81.104~14.04.1
    • Xenial
      – linux 4.4.0-81.104
      – linux-aws 4.4.0-1020.29
      – linux-gke 4.4.0-1016.16
      – linux-raspi2 4.4.0-1059.67
      – linux-snapdragon 4.4.0-1061.66
      – linux-hwe 4.8.0-56.61~16.04.1
      – linux-hwe-edge 4.10.0-24.28~16.04.1
      – linux-joule 4.4.0-1003.8
    • Yakkety
      – linux 4.8.0-56.61
      – linux-raspi2 4.8.0-1040.44
    • Zesty
      – linux 4.10.0-24.28
      – linux-raspi2 4.10.0-1008.11

    Due to that, the proposed updates for the above packages being prepared
    on the current SRU cycle are being re-spun to include the fix.

    [1] CVE description: It was discovered that the stack guard page for
    processes in the Linux kernel was not sufficiently large enough to
    prevent overlapping with the heap. An attacker could leverage this with
    another vulnerability to execute arbitrary code and gain administrative
    privileges.
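
    To check whether a given machine already has one of the fixed kernels,
    the running kernel's package version can be compared against the list
    above (a standard dpkg/uname check, not part of the original summary):

    dpkg -s linux-image-$(uname -r) | grep ^Version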

Devel Kernel Announcements

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable Kernel Announcements

Current cycle: 02-Jun through 24-Jun

  • 02-Jun Last day for kernel commits for this cycle
  • 05-Jun – 10-Jun Kernel prep week.
  • 11-Jun – 23-Jun Bug verification & Regression testing.
  • 26-Jun Release to -updates.

Next cycle: 23-Jun through 15-Jul

  • 23-Jun Last day for kernel commits for this cycle
  • 26-Jun – 01-Jul Kernel prep week.
  • 02-Jul – 14-Jul Bug verification & Regression testing.
  • 17-Jul Release to -updates.

Status: CVE’s

The current CVE status can be reviewed at the following:
http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html

21 June, 2017 06:37PM

Brian Murray: Getting information about LP bugs with lots of duplicates

The other day some of my fellow Ubuntu developers and I were looking at bug 1692981 and trying to figure out what was going on. While we don’t have an answer yet, we did use some helpful tools (at least one of which somebody hadn’t heard of) to gather more information about the bug.

One such tool is lp-bug-dupe-properties from the lptools package in Ubuntu. With this it is possible to quickly find out information about all the duplicates, 36 in this case, of a bug report. For example, if we wanted to know which releases are affected we can use:

lp-bug-dupe-properties -D DistroRelease -b 1692981

LP: #1692981 has 36 duplicates
Ubuntu 16.04: 1583463 1657243 1696799 1696827 1696863 1696930 1696940
1697011 1697016 1697068 1697099 1697121 1697280 1697290 1697313 1697335
1697356 1697597 1697801 1697838 1697911 1698097 1698100 1698104 1698113
1698150 1698171 1698244 1698292 1698303 1698324 1698670 1699329
Ubuntu 16.10: 1697072 1698098 1699356

While lp-bug-dupe-properties is useful, in this case it’d be helpful to search the bug’s attachments for more information. Luckily there is a tool, lp-grab-attachments (also part of lptools), which will download all the attachments of a bug report and, if you want, those of its duplicates too. Having done that you can then use grep to search those files.

lp-grab-attachments -dD 1692981

The ‘-d’ switch indicates that I want to get the attachments from duplicate bug reports and the ‘-D’ switch indicates that I want to have the bug description saved as Description.txt. While saving the description provides some of the same capability as lp-bug-dupe-properties, it ends up being quicker. Now with the attachments saved I can do something like:

for desc in $(find . -name Description.txt); do grep -E "dpkg 1.18\.(4|10)" "$desc";
done

...
dpkg 1.18.4ubuntu1.2
dpkg 1.18.10ubuntu2
dpkg 1.18.10ubuntu1.1
dpkg 1.18.4ubuntu1.2
...

and find out that a variety of dpkg versions are in use when this is encountered.
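
To tally how often each dpkg version appears across all the duplicates, the same saved descriptions can be piped through standard tools (this pipeline is my own illustration, not part of lptools):

find . -name Description.txt -exec grep -hoE "dpkg 1\.18\.[0-9a-z.~]+" {} + \
| sort | uniq -c | sort -rn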

I hope you find these tools useful and I’d be interested to hear how you use them!

21 June, 2017 05:40PM

BunsenLabs Linux

[Security Advisory] The Stack Clash (CVE-2017-1000364 & others)

The Stack Clash is a vulnerability in the memory management of several operating systems. It affects Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64.  It can be exploited by attackers to corrupt memory and execute arbitrary code.

21 June, 2017 05:38PM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

SamSam Ransomware Targeted Attacks Continue

Normally, new variants of ransomware families aren't particularly interesting.

SamSam, however, is different. Whereas most ransomware is automatically propagated, SamSam is deployed manually.

In addition, the group behind SamSam charges very high ransoms because of the amount of effort invested in their operations, which made them the subject of two FBI Alerts last year.

The attacks seem to peak in waves as campaigns distributing SamSam are executed. A notable recent example was a large hospital in New York that was hit with SamSam in April. The hospital declined to pay the attackers the $44,000 ransom demanded. It took a month for the hospital’s IT systems to be fully restored.

Defending against SamSam is more akin to a targeted attack than typical opportunistic ransomware. SamSam attackers are known to:

  • Gain remote access through traditional attacks, such as JBoss exploits
  • Deploy web-shells
  • Connect to RDP over HTTP tunnels such as ReGeorg
  • Run batch scripts to deploy the ransomware over machines

Earlier this week, ID Ransomware spotted new variants of the SamSam ransomware. A review of the code (which decompiles cleanly with the tool ILSpy) indicates that little has changed, apart from some updates to the ransom note.

The ransom the victims must pay to recover their files is hardcoded in the malware. In this attack, it was:

  • 1.7 Bitcoin ($4,600) for a single machine
  • 6 Bitcoins ($16,400) for half the machines (allowing the victim to confirm they can recover their files)
  • 12 Bitcoins ($32,800) for all of the machines

The most recent attacks appear to have been successful, at least from the attackers’ point of view. The Bitcoin address associated with this week’s attacks has received $33,000.

These new variants remind us that we must remain vigilant and utilize the latest threat indicators to detect new strains of existing malware. You can view the associated indicators in OTX.

Update: Vallejo has published an analysis on this sample of SamSam.

       

21 June, 2017 04:07PM

hackergotchi for Ubuntu developers

Ubuntu developers

Paul White: Some random notes on losing my broadband connection

I first started using Ubuntu just a few weeks after Lucid Lynx was released and have used Ubuntu, Kubuntu, Xubuntu, Lubuntu and Ubuntu GNOME since then. Towards the end of 2016 I took early retirement and decided to curtail some of my Ubuntu related activities in favour of some long abandoned interests which went back to the 1960s. Although I had no intention of spending every day sat in front of a computer screen I still wished to contribute to Ubuntu but at a reduced level. However, recent problems relating to my broadband connection, which I am hoping are now over, prompted me to look closely at how I could continue to contribute to Ubuntu if I lost my "always on" internet.

Problems

Thanks to my broadband provider, whose high profile front man sports a beard and woolly jumpers, my connection changed from being one that was "always on" to one that was "usually off". There's a limit to how many times I'm prepared to reboot my cable modem on the advice of the support desk, be sent unnecessary replacement modems because the one I'm using "must be faulty", allow engineers into my home to measure signal levels, and be told the next course of action will definitely get my connection working, only to find that I'm still off-line the next day and the day after. I kept asking myself: "Just how many engineers will they need to send before someone successfully diagnoses the problem and fixes it?"

Mobile broadband

Much of my recent web browsing, on-line banking, and updating of my Xubuntu installations has been done with the aid of two iPhones acting as access points while connected to the 3 and EE mobile networks. It was far from being an ideal situation, connection speeds were often very low by today's standards but "it worked" and the connections were far more reliable than I thought that they would be. A recent test during the night showed a download speed on a 4G connection to be comparable to that offered by many other broadband providers. But downloading large Ubuntu updates took a long time especially during the evening. As updating the pre-installed apps on a smart phone can quickly use up one's monthly data allowance I made myself aware of where I could find local Wi-Fi hotspots to make some of the important or large phone updates and save some valuable bandwidth for Ubuntu. Interestingly with the right monthly plan and using more appropriate hardware than a mobile phone, I could actually save some money by switching from cable to mobile broadband although I would definitely miss my 100Mb/s download speed that is most welcome when downloading ISO images or large Ubuntu updates.

ISO testing

Unfortunately these problems, which lasted for over three weeks, meant that I had to cease ISO testing due to the amount of data I would need to download several times each week. I had originally intended to get a little more involved with testing of the development release of Xubuntu during the Artful cycle but those plans were put on hold while I waited for my broadband connection to be restored and deemed to have been fixed permanently. During this outage I still managed to submit a couple of bug reports and comment on a few others but my "always on" high speed connection was very much missed.

Connection restored!

How I continue with Ubuntu long-term will now depend on the reliability of my broadband connection which does seem to have now been restored to full working order. I'm finalising this post a week after receiving yet another visit from an engineer who restored my connection in just a matter of minutes. Cables had been replaced and signal levels had been measured and brought to within the required limits. Apparently the blame for the failure of the most recent "fix" was put solely on one of his colleagues who I am told failed to correctly join two cables together. In other words, I wasn't actually connected to their network at all. It must have been so very obvious to my modem/router which sat quietly in the corner of the room forever looking to connect to something that it just could not find and yet was unable to actually tell me so. If only such devices could actually speak....

21 June, 2017 09:58AM by Paul White (noreply@blogger.com)

hackergotchi for Univention Corporate Server

Univention Corporate Server

First point release of UCS 4.2 published

With UCS 4.2-1 the first point release for Univention Corporate Server 4.2 is now available.

It includes various detail improvements and error corrections. Some of the most important changes are:

  • Email forwarding can now be saved per mail user in the UCS management system.
  • Improvements to password changing in the Univention Management Console: from now on, users from a Microsoft Active Directory domain can also change their expired passwords. In addition, more hints are now displayed if the password change fails.
  • The possibilities for IPv6 (Internet Protocol Version 6) configuration have been improved in various services, for example in the Nagios or proxy server configuration and in the UCS management system.

We also placed great emphasis on improving user-friendliness. From now on you can configure, for example, the font color in the online portal. This is especially useful when a dark background image has been configured.

Screenshot of the portal page of the UCS online demo

Enhancements to the UCS Setup wizard will now help you to set up app appliances as well as to join UCS into an existing Microsoft Active Directory.

In addition, the integration of Docker apps has been further improved in the Univention App Center, so the system now responds better to error situations.

Technical detail improvements of release 4.2-1:

  • Improved opportunity to send feedback to Univention via the UMC
  • Improvement of the SAML logins in various places
  • When logging in as the root user, a note now appears, because unlike for the administrator, the domain modules (among other things) are not available for root.
  • The proxy configuration of the UCS system is now transmitted to the Docker app.
  • IPv6 addresses can be saved directly on the computer objects in the management system. IPv6 addresses can also be used in the Nagios configuration as well as in the proxy server configuration.
  • The fonts in the online portal can now be configured. This is especially useful when a dark background image has been configured.
  • The join process of a UCS system in a Microsoft Active Directory domain has been improved in several places.
  • Changing the password via UMC has been improved in several places, for example when the password has expired. Improved error messages will also be displayed if the change fails.
  • Various misconfigurations of the Cyrus IMAP daemon have been fixed.
  • The package dependencies of the mailstack have been adapted so that Dovecot Pro can now be installed as an alternative to Dovecot.
  • Users from the Microsoft Active Directory domain can now change their expired password via UMC.
  • The setup wizard has been improved in several places, both for setting up app appliances and for joining an existing Microsoft Active Directory.
  • Email forwarding can now be stored for each user via the management system.
  • The App Center Docker integration has been improved, so it now better reacts to errors.
  • The RADIUS configuration has been extended in several places.
  • The DDNS handling in UCS domains with Samba 4 has been improved.
  • The synchronization of the Sysvol share can now deal better with error situations.
  • The French translation has been updated.

This and other information on new features, security updates and detail improvements can be found in our

Release Notes (comprehensive changelog)

The post First point release of UCS 4.2 published first appeared on Univention.

21 June, 2017 07:44AM by Maren Abatielos

hackergotchi for Ubuntu developers

Ubuntu developers

Mathieu Trudel: Netplan by default in 17.10

Friday, I uploaded an updated nplan package (version 0.24) to change its Priority: field to important, as well as an update of ubuntu-meta (following a seeds update), to replace ifupdown with nplan in the minimal seed.

What this means concretely is that nplan should now be installed by default on all images as part of ubuntu-minimal, with ifupdown dropped at the same time.

For the time being, ifupdown is still installed by default due to the way debootstrap generates the very minimal images used as a base for other images: its base set of packages depends only on the Priority: field of packages. Thus, nplan was added, but ifupdown's priority still needs to be changed (which I will do shortly) for it to disappear from all images.

The intent is that nplan would now be the standard way of configuring networks. I've also sent an email about this to ubuntu-devel-announce@.

I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).
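
As a small taste of that declarative syntax, here is a hypothetical netplan YAML example (the device names and addresses are made up) that brings up one DHCP interface and one statically addressed interface:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true
    eno2:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]

You state what the interfaces should look like, and netplan generates the appropriate backend (networkd or NetworkManager) configuration.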

MaaS, cloud-init and others have already started to support writing netplan configuration.

The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available here.

While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest to use documentation. It should always be up to date with the current version of netplan available on your release (since we backported the last version to Xenial, Yakkety, and Zesty), and accessible via:

man 5 netplan

To make things "easy", however, you can also check out the netplan documentation directly from the source tree here:

https://git.launchpad.net/netplan/tree/doc/netplan.md

There's also a wiki page I started to get ready that links to the most useful things, such as an overview of the design of netplan, some discussion on the renderers we support and some of the commands that can be used.

We even have an IRC channel on Freenode: #netplan

I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:

21 June, 2017 02:10AM by Mathieu Trudel-Lapierre (noreply@blogger.com)

June 20, 2017

hackergotchi for Maemo developers

Maemo developers

Software design and architecture

The placing of your state is the only really important thing in architecture.


20 June, 2017 10:12PM by Philip Van Hoof (pvanhoof@gnome.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: OpenStack and Containers live Q&A session

Join us for a 1 hour online session with a cloud expert

OpenStack and Containers Office Hours are online Q&A sessions held on an ongoing basis. Their aim is to help community members and customers deploy, manage and scale their Ubuntu-based cloud infrastructure.

What’s covered?

These interactive online sessions are hosted by an expert from our Cloud Team who will:

  • Outline how to leverage the latest features of Ubuntu OpenStack, LXD, MAAS, Kubernetes and Juju
  • Answer questions on OpenStack and containers technology

Who should attend?

These sessions are ideal for IT Pros, DevOps and SysAdmins wanting a relaxed, informal environment to discuss their experiences using Ubuntu Cloud technology.

Such sessions are normally attended by a small group, making them ideal for networking with other OpenStack and scale-out cloud enthusiasts.

Why join?

Get the chance to ask any questions about our software and support services.

Upcoming sessions

Book your place

20 June, 2017 03:20PM

Simon Raffeiner: My Ubuntu for mobile devices post mortem analysis

Now that Ubuntu phones and tablets are gone, I would like to offer my thoughts on why I personally think the project failed and what one may learn from it.

20 June, 2017 03:00PM

hackergotchi for Qubes

Qubes

QSB #31: Xen hypervisor vulnerabilities with unresearched impact (XSA 216-224)

Dear Qubes community,

We have just published Qubes Security Bulletin (QSB) #31: Xen hypervisor vulnerabilities with unresearched impact (XSA 216-224). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #31 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-031-2017.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View XSA-216 through XSA-224 in the XSA Tracker:

https://www.qubes-os.org/security/xsa/

             ---===[ Qubes Security Bulletin #31 ]===---

                            June 20, 2017


Xen hypervisor vulnerabilities with unresearched impact (XSA 216-224)

Summary
========

Today the Xen Security Team has disclosed several Xen Security
Advisories (XSA 216-224). Impact ranges from leaks to system crashes
and potential privilege escalations. See also our commentary below.

Technical details
==================

Xen Security Advisories 216 [1]:

|  blkif responses leak backend stack data
|
| The block interface response structure has some discontiguous fields.
| Certain backends populate the structure fields of an otherwise
| uninitialized instance of this structure on their stacks, leaking
| data through the (internal or trailing) padding field.
|
| A malicious unprivileged guest may be able to obtain sensitive
| information from the host or other guests.

Xen Security Advisories 217 [2]:

|  page transfer may allow PV guest to elevate privilege
| 
| Domains controlling other domains are permitted to map pages owned by
| the domain being controlled.  If the controlling domain unmaps such a
| page without flushing the TLB, and if soon after the domain being
| controlled transfers this page to another PV domain (via
| GNTTABOP_transfer or, indirectly, XENMEM_exchange), and that third
| domain uses the page as a page table, the controlling domain will have
| write access to a live page table until the applicable TLB entry is
| flushed or evicted.  Note that the domain being controlled is
| necessarily HVM, while the controlling domain is PV.
| 
| A malicious pair of guests may be able to access all of system memory,
| allowing for all of privilege escalation, host crashes, and
| information leaks.

Xen Security Advisories 218 [3]:

|  Races in the grant table unmap code
|
| * When a grant had been mapped twice by a backend domain, and then
| unmapped by two concurrent unmap calls, the frontend may be informed
| that the page had no further mappings when the first call completed rather
| than when the second call completed.
| 
| * A race triggerable by an unprivileged guest could cause a grant
| maptrack entry for grants to be "freed" twice.  The ultimate effect of
| this would be for maptrack entries for a single domain to be re-used.
|
| For the first issue, for a short window of time, a malicious backend
| could still read and write memory that the frontend thought was its
| own again.  Depending on the usage, this could be either an
| information leak, or a backend-to-frontend privilege escalation.
| 
| The second issue is more difficult to analyze. It can probably cause
| reference counts to leak, preventing memory from being freed on domain
| destruction (denial-of-service), but information leakage or host
| privilege escalation cannot be ruled out.

Xen Security Advisories 219 [4]:

|  x86: insufficient reference counts during shadow emulation
| 
| When using shadow paging, writes to guest pagetables must be trapped and
| emulated, so the shadows can be suitably adjusted as well.
| 
| When emulating the write, Xen maps the guests pagetable(s) to make the final
| adjustment and leave the guest's view of its state consistent.
| 
| However, when mapping the frame, Xen drops the page reference before
| performing the write.  This is a race window where the underlying frame can
| change ownership.
| 
| One possible attack scenario is for the frame to change ownership and to be
| inserted into a PV guest's pagetables.  At that point, the emulated write will
| be an unaudited modification to the PV pagetables whose value is under guest
| control.
| 
| A malicious pair of guests may be able to elevate their privilege to that of
| Xen.

Xen Security Advisories 220 [5]:

| x86: PKRU and BND* leakage between vCPU-s
|
| There is an information leak, of control information mentioning
| pointers into guest address space; this may weaken address space
| randomisation and make other attacks easier.
| 
| When an innocent guest acquires leaked state, it will run with
| incorrect protection state.  This could weaken the protection intended
| by the MPX or PKU features, making other attacks easier which would
| otherwise be excluded; and the incorrect state could also cause a
| denial of service by preventing legitimate accesses.

Xen Security Advisories 221 [6]:

|  NULL pointer deref in event channel poll
| 
| When polling event channels, in general arbitrary port numbers can be
| specified.  Specifically, there is no requirement that a polled event
| channel ports has ever been created.  When the code was generalised
| from an earlier implementation, introducing some intermediate
| pointers, a check should have been made that these intermediate
| pointers are non-NULL.  However, that check was omitted.
| 
| A malicious or buggy guest may cause the hypervisor to access
| addresses it doesn't control, usually leading to a host crash (Denial
| of Service).  Information leaks cannot be excluded.

Xen Security Advisories 222 [7]:

|  stale P2M mappings due to insufficient error checking
| 
| Certain actions require removing pages from a guest's P2M
| (Physical-to-Machine) mapping.  When large pages are in use to map
| guest pages in the 2nd-stage page tables, such a removal operation may
| incur a memory allocation (to replace a large mapping with individual
| smaller ones).  If this allocation fails, these errors are ignored by
| the callers, which would then continue and (for example) free the
| referenced page for reuse.  This leaves the guest with a mapping to a
| page it shouldn't have access to.
| 
| The allocation involved comes from a separate pool of memory created
| when the domain is created; under normal operating conditions it never
| fails, but a malicious guest may be able to engineer situations where
| this pool is exhausted.
| 
| A malicious guest may be able to access memory it doesn't own,
| potentially allowing privilege escalation, host crashes, or
| information leakage.

Xen Security Advisories 224 [8]:

|  grant table operations mishandle reference counts
| 
| * If a grant is mapped with both the GNTMAP_device_map and
| GNTMAP_host_map flags, but unmapped only with host_map, the device_map
| portion remains but the page reference counts are lowered as though it
| had been removed. This bug can be leveraged cause a page's reference
| counts and type counts to fall to zero while retaining writeable
| mappings to the page.
| 
| * Under some specific conditions, if a grant is mapped with both the
| GNTMAP_device_map and GNTMAP_host_map flags, the operation may not
| grab sufficient type counts.  When the grant is then unmapped, the
| type count will be erroneously reduced.  This bug can be leveraged
| cause a page's reference counts and type counts to fall to zero while
| retaining writeable mappings to the page.
| 
| * When a grant reference is given to an MMIO region (as opposed to a
| normal guest page), if the grant is mapped with only the
| GNTMAP_device_map flag set, a mapping is created at host_addr anyway.
| This does *not* cause reference counts to change, but there will be no
| record of this mapping, so it will not be considered when reporting
| whether the grant is still in use.
| 
| For the worst issue, a PV guest could gain a writeable mapping of its
| own pagetable, allowing it to escalate its privileges to that of the
| host.

Commentary from the Qubes Security Team
========================================

The bugs discussed today seem difficult to exploit in practice.

Each require either some race condition to win (XSA 217, 218, 219),
control over more than one VM (XSA 218, 219), some memory allocation,
which is normally beyond attacker's control, to fail or happen in some
specific way (XSA 216, 217, 218, 219, 222, 224?), or a combination of
these.

Additionally some bugs are believed to be limited to being leaks or
DoS only (XSA 216, 221), or affecting only intra-VM-security (XSA
220).

Also, it's worth pointing out that 7 out of 8 of the bugs discussed
here (with XSA 222 being the exception) do not affect when running
only fully-virtualized PVH guests (which is where we have been going
to with Qubes 4.x, see [9]).

Compromise Recovery
====================

Starting with Qubes 3.2 we offer Paranoid Backup Restore Mode, which
has been designed specifically to aid with recovery of a (potentially)
compromised Qubes OS system. Thus, if you believe your system might
have got compromised (perhaps because of the bugs discussed in this
bulletin), then you should read and follow the procedure described
here:

https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/

Patching
=========

The specific packages that resolve the problem discussed in this
bulletin are as follows:

  For Qubes 3.2:
  - Xen packages, version 4.6.5-28
  - Kernel packages, version 4.4.67-13 (security-testing)
  - Kernel packages, version 4.9.33-18 (current-testing)

The packages are to be installed in dom0 via the qubes-dom0-update
command or via the Qubes VM Manager.

A system restart will be required afterwards.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen and kernel binaries, and because of the regenerated initramfs.

These packages will migrate to the current (stable) repository over the
coming days after being tested by the community.

Credits
========

See original Xen Security Advisories.

References
===========

[1]  https://xenbits.xen.org/xsa/advisory-216.html
[2]  https://xenbits.xen.org/xsa/advisory-217.html
[3]  https://xenbits.xen.org/xsa/advisory-218.html
[4]  https://xenbits.xen.org/xsa/advisory-219.html
[5]  https://xenbits.xen.org/xsa/advisory-220.html
[6]  https://xenbits.xen.org/xsa/advisory-221.html
[7]  https://xenbits.xen.org/xsa/advisory-222.html
[8]  https://xenbits.xen.org/xsa/advisory-224.html
[9]  https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-030-2017.txt#L111-L167

--
The Qubes Security Team
https://www.qubes-os.org/security/

20 June, 2017 12:00AM

June 19, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Jeremy Bicha: GNOME Tweak Tool 3.25.3

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file.

GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

19 June, 2017 11:15PM

Marco Trevisan (Treviño): GNOME Fractional (and multi-monitor) Scaling Hackfest, the report

As previously announced, a few days ago I attended the GNOME Fractional Scaling Hackfest that Red Hat’s Jonas Ådahl and I organized at the Canonical office in Taipei 101.
Although the location was chosen mostly because it was the one closest to Jonas and near enough to my temporary place, it turned out to be the best we could have used, given the huge amount of hardware available there, including some 4k monitors and HiDPI laptops.
Being there also allowed another local Caonical employee (Shih-Yuan Lee) to join our efforts!

That said, I have to thank my employer for allowing me to do this and for sponsoring the event, helping to make GNOME a better desktop for Ubuntu (and beyond).

Going deeper into the event (for which we tracked the more technical items in a WIP journal), it was a very tough week: we worked hard until late, hunting for edge cases and discovering bugs that the new "logically sized" framebuffer and actors were causing.

In fact, as I've already quickly explained, the whole idea is to paint all the screen actors at the maximum scale value across the displays they intersect, and then use scaled framebuffers when painting, so that we can redefine the screen coordinates in logical pixels rather than pixel units. However, since we want to be able to scale any sized element by (potentially any) fractional value, we may run into problems when we eventually go back to the pixel level, where everything is integer-indexed.

We started by defining the work items for the week and setting up some other HiDPI laptops (Dell XPS 15 and XPS 13 mostly) we got from the office with jhbuild, then as you can see we defined some list of things to care about:

  • Supporting multiple scaling values: allowing the interface to scale up and down (< 1.0), not only to well-known values, but across a wider range of supported floats
  • Non-perfect scaling: covering the cases in which an actor (or a whole monitor), when scaled up/down by a fractional value, no longer has a pixel-friendly size, leaving input and output rounding issues to handle
  • GNOME Shell UI: the shell StWidgets need to be drawn at the proper resource scaling value, so that they won't look blurred when painted
  • Toolkit support: there are some Gtk issues when scaling more than 2x, while Qt has support for fractional scaling
  • Wayland protocol improvements: related to the point above, we might define a way to tell toolkits the actual fractional scaling value, so that they could be scaled at the real value instead of being asked to scale up to the next integer scaling level. Also, games and video players should not be scaled up/down at all.
  • X11 clients: supporting XWayland clients

What we did

As you can see, the list of things we meant to work on or plan was quite juicy, more than enough for one week. Even though we didn't finish all the tasks (despite the Super-Jonas powers :-)), we were able to start or address the work for most of them, so we know what to work on over the next weeks.

Scaling at 1.25x

As a start, we had to ensure mutter supported various scaling values (including ones < 1.0). We decided (this might change, but the Unity experience proved it works well) to support 8 intermediate values per integer, from 0.5 to 4.0. As said, this would lead to trouble with many resolutions (as you can see in the board picture, 1280×720 is an example that doesn't work well when scaled at 1.5), so we decided to make mutter expose a list of supported scaling values per mode, and we defined an algorithm to compute the closest "good" scaling level that yields a properly integer-sized logical screen.
This caused a configuration API change, and we updated accordingly gnome-settings-daemon and gnome-control-center adding also some UI changes to reflect and control this new feature.
Moreover, the availability of such fractional values caused various glitches in mutter, mostly related to the damage algorithm, which Jonas refactored. Other issues in screenshots and gnome-shell fullscreen animations were also found and fixed.
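
To make the "good" scaling level constraint mentioned above concrete, here is a minimal shell sketch of the check (my own illustration of the idea, not mutter's actual code): a fractional scale is usable for a mode only when both dimensions divide evenly into logical pixels.

# Scales are passed in hundredths (150 = 1.5) to stay in integer math.
is_good_scale() {  # usage: is_good_scale WIDTH HEIGHT SCALE_X100
    [ $(( ($1 * 100) % $3 )) -eq 0 ] && [ $(( ($2 * 100) % $3 )) -eq 0 ]
}
is_good_scale 1920 1080 150 && echo "1920x1080 @ 1.50 -> 1280x720 logical"
is_good_scale 1280 720 150 || echo "1280x720 @ 1.50 has no integer logical size"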

Speaking of the GNOME Shell toolkit, we started some work to fix the drawing of cairo-based areas, while I already had something done for labels that still needs to be tuned. Shih-Yuan fixed a scaling problem in the workspace thumbnails.

On toolkit support, we didn't do much (apart from GNOME Shell), as the Gtk problem is not something that affects us much in normal scenarios yet, but we still debugged the issue; supporting fractional-friendly toolkits through an improved Wayland protocol is probably a future optimization. It is quite important, instead, to define such a protocol for apps that should not be scaled at all, such as games, but in order to do that we need feedback from game developers too, so that we can define it in the best way.

Not much was done in the XWayland world either (right now everything is scaled to the required value by mutter, but the toolkit will also use scale 1, which leads to somewhat blurred results), but we agreed that we'd probably need to define an X11 protocol for this.

We finally spent some time defining an algorithm for picking the preferred scaling per mode. This is a quite controversial aspect, and anyone might have their own ideas on this (especially OEMs). So far we have defined some DPI limits that we use to evaluate whether a fractional scaling level should be applied or not: outside these limits (which change depending on whether we're handling a laptop panel or an external monitor [potentially in a docked setup]) we use integer scaling; in between them we instead use proportional (fractional) values.
One idea I had was to see the problem the other way around and define instead the physical size (in tenth of mm) we want for a pixel at least to be, and then scale to ensure we reach those thresholds instead of defining DPIs (again, that physical size should be weighted for external monitors differently, though). Also, hardware vendors might want to be able to tune these defaults, so one idea was also to provide a way for them to define defaults by panel serial.
One idea I had was to see the problem the other way around and instead define the physical size (in tenths of a mm) we want a pixel to be at least, and then scale to reach those thresholds instead of defining DPIs (again, that physical size should be weighted differently for external monitors, though). Also, hardware vendors might want to be able to tune these defaults, so one idea was to provide a way for them to define defaults by panel serial.
In any case, the final and most important goal, to me, is to provide defaults that guarantee a usable and readable HiDPI environment, so that people can then use gnome-control-center to adjust these values if needed. I also think it could be quite useful to add to the gnome-shell intro wizard an option to choose the scaling level when a high-DPI monitor is detected.
For this reason, we also filled in this wiki page with technical display information for all the hardware we had around, and we encourage you to add your own (if you don't have write access to the wiki, just send it to us).

What to do

As you can see in our technical journal TODO, we have plenty of things to do, but the main one currently is fixing the Shell toolkit widgets, while going through various bugs and improving the XWayland clients situation. Then there are multiple optimizations to do at the mutter level too.

When we ship

Our target is to get this landed by GNOME 3.26, even if this might be under an experimental gsettings key, as right now the main blocker is X11 client support.

How to help

The easiest thing you can do is help test the code (using jhbuild to build gnome-shell with a config based on this should be enough); filling in the scale factor tests wiki page might also help. If you want to get involved with the code, these are the git branches to look at.

Read More

You can read a more schematic report that Jonas wrote for this event on the gnome-shell mailing list.

Conclusions

It has been a great event; we did and discussed many things, but first of all I was able to get more closely familiar with the GNOME code alongside the people who wrote most of it, which indeed helped.
We still have lots of things to do, but we're approaching a state that will allow everyone to run differently scaled monitors at various fractional values with no issues.

Our final board

Check some other pictures in my flickr gallery

Finally, I have to say thanks a lot to Jonas, who initially proposed the event and, apart from being a terrific engineer, has been a great mate to work and hang out with, making me discover (and survive in) Taipei and its food!

19 June, 2017 09:03PM

Cumulus Linux

OpenStack and Cumulus Linux: A match made in networking heaven

A few weeks ago, we attended the OpenStack Summit where we had a wonderful time connecting with customers, partners and several new faces. With the excitement of the event still lingering, we thought this was a great time to highlight how OpenStack and Cumulus Linux offer a unique, seamless solution for building a private cloud. But first, here are a few highlights from the conference.

OpenStack Summit 2017, Boston

  • Jonathan Bryce, Executive Director at OpenStack Foundation, opened the show talking about the substantial growth of OpenStack over the past several years and how they are just one part of the vibrant open infrastructure community. A large focus of the conference was how organizations are moving towards private cloud environments as they realize it’s a better long-term solution.
  • Throughout the conference, containers and Kubernetes were the hottest topics. Many sessions throughout the four days focused on these technologies and how organizations are looking to use them as an abstraction layer to make infrastructure less visible or locked-in.
  • Edward Snowden was one of the most popular speakers. Presenting from Russia, Snowden focused on how IT professionals are in a position to influence how cloud infrastructure is built, and with it the future of the internet (and therefore the future of, ya know, the entire world).
  • “Women of OpenStack” was a big focus this year with several discussions within sessions and a “Women of OpenStack” luncheon.

OpenStack and Cumulus Linux work seamlessly together to offer a fully web-scale cloud environment that is efficient, agile and scalable. In the next few paragraphs, we’ll highlight some key aspects of our solution with OpenStack and link you to some in-depth resources on how you can use OpenStack and Cumulus Linux to build a better network.

Reduction in complexity drives OpenStack adoption with Cumulus

Organizations of all sizes are increasingly dependent upon their applications and are evaluating ways to optimize application performance. Most are looking at adopting web-scale IT principles by moving to a cloud environment that would allow data center operators to better support these applications and address business needs.

Why OpenStack?

The OpenStack Infrastructure-as-a-Service (IaaS) platform has been steadily gaining traction within the enterprise IT environments as it offers a rich variety of components that can be combined to build a tailored cloud solution. In fact, 29% of OpenStack users are interested in using OpenStack for SDN or bare metal technologies and 58% of customers are in information technology (source).

Given that OpenStack is IaaS software, the platform controls pools of compute, networking and storage resources from a variety of vendors.

Why Cumulus Linux?

Cumulus Networks offers network switching based on the open Linux network operating model to OpenStack operators. Because Cumulus Linux is a networking-focused Linux distribution, it runs on network switching hardware from a multitude of IT providers such as Dell, Quanta, Supermicro, HP, Mellanox, and Penguin Computing.

Cumulus Linux offers a completely open architecture that supports easy portability of most third party applications natively. By adopting Linux principles for networking, customers have achieved operational efficiency and reduced their time to production by as much as 95%, while reducing their TCO by up to 60%. Due to these benefits we have doubled the number of customers adopting OpenStack with Cumulus Linux.

Automation and consistency are critical characteristics of web scale IT, which is why Cumulus Linux was designed with automation top of mind. To make the process even easier, Cumulus’ Network Command Line Utility (NCLU) allows network operators and system administrators to harness the power of Linux networking and quickly automate using the NCLU module native in Ansible 2.3.
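
As a flavour of what that looks like in practice, here is a hedged NCLU example at the switch prompt (commands assumed from standard NCLU usage rather than taken from this post):

net add bridge bridge ports swp1-2     # stage: add ports to the bridge
net add bridge bridge vids 10,20       # stage: allow VLANs 10 and 20
net pending                            # review the staged changes
net commit                             # apply them atomically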

OpenStack and Cumulus Linux

Customers deploying OpenStack and Cumulus Linux have multiple setup options depending on the needs:

  • Cumulus ML2 Plugin – Makes it easy to manage the network inside OpenStack without having to create VLANs/VXLANs manually before VM provisioning.
  • Pre-provisioned L2 VLAN networks
  • MLAG leaf-spine for pure L2 architectures
  • IP underlay fabrics for VXLAN/SDN deployments
  • VXLAN gateways (VTEPs) for bare metal endpoints
  • External SDN gateways (replacing more expensive solutions like Juniper MX)

Integration between overlay and underlay is critical when customers are looking to scale their OpenStack environment with previously used overlays from traditional networking vendors (Cisco: ACI, Juniper: Contrail, etc).

Taking the next step

One of the biggest benefits of building a private cloud environment with OpenStack and Cumulus Linux is that both technologies offer complete flexibility. You can build a network that meets your business’ unique needs and quirks. To see how the paired solution can work for you, we’ve put together several documents that can help you get up and running.

Head to our OpenStack solution page where you will find deployment guides, demos, tech videos and more — everything you need to get up and running with OpenStack and Cumulus Linux.


The post OpenStack and Cumulus Linux: A match made in networking heaven appeared first on Cumulus Networks Blog.

19 June, 2017 07:32PM by Kelsey Havens

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: MAAS Development Summary: June 12th – 16th

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bugs fixes in release MAAS versions.

MAAS Sprint

The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

MAAS 2.3 (current development release)

The team has been working on the following features and improvements:

  • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature. (A hedged CLI sketch follows this list.)
  • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes to the upcoming transition to Git. The progress so far is:
    • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
    • Started working on creating new processes for PR’s auto-testing and landing.
  • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
  • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
  • Started the removal of ‘tgt’ as a dependency – We have started the removal of ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
  • UI Improvements
    • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
    • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
    • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.
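
As referenced above, a hedged sketch of driving the new upstream proxy support through the MAAS CLI (the set-config mechanism is real; the exact option names here are my assumption, not taken from this summary):

maas $PROFILE maas set-config name=http_proxy value=http://upstream.example.com:3128
maas $PROFILE maas set-config name=use_peer_proxy value=true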

Bug Fixes

The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
  • LP: #1652298 – Improve loading of elements in the device discovery page

19 June, 2017 05:30PM

Ubuntu Insights: Distributing KeePassXC as a snap

This is a guest post by Jonathan White (find him on Github) one of the developers behind keepassxc. If you would like to contribute a guest post, please contact ubuntu-iot@canonical.com .

Can you tell us about KeePassXC?

KeePassXC, for KeePass Cross-Platform Community Edition, is an extension of the KeePassX password manager project that incorporates major feature requests and bug fixes. We are an active open source project that is available on all Linux distributions, Windows XP to 10, and Macintosh OSX. Our main goal is to incorporate the features that the community wants while balancing portability, speed, and ease of use. Some of the major features that we have already shipped are browser integration, YubiKey authentication, and a redesigned interface.

How did you find out about snaps?

I learned about snaps through an article on Ars Technica about a year ago. Since then I dove into the world of building and deploying snaps through the KeePassXC application. We deployed our first snap version of the app in January 2017.

What was the appeal of snaps that made you decide to invest in them?

The novelty of bundling an application and deploying it to the Ubuntu Store, for free, was really attractive. It also meant we could bypass the lengthy review and approval process of the official apt repository.

How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?

The initial build of the snapcraft.yaml file was a bit rough. At the time, the documentation did not provide many full-text examples of different build patterns. It only took a couple of iterations before a successful snap was built and tested locally. The easiest part was publishing the snap for public consumption, which took a matter of minutes.
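
For anyone starting from the same place, a generic snapcraft.yaml skeleton from that era looked roughly like this (a minimal sketch, not KeePassXC's actual file; part and app names are placeholders):

name: myapp
version: '0.1'
summary: One-line summary of the app
description: Longer description of the app.
confinement: strict
grade: stable

parts:
  myapp:
    plugin: cmake
    source: .

apps:
  myapp:
    command: myapp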

With the introduction of build.snapcraft.io, the integration with our workflow has improved greatly. Now we can publish snaps immediately upon completion of a milestone, or even intermediate builds from our develop branch.

Do you currently use the snap store as a way of distributing your software? How do you see the store changing the way users find and install your software?

Yes, we use the snap store exclusively for our deployment. It is a critical tool for our distribution with over 18,000 downloads in less than 4 months! The store also ensures users have the latest version and it is always guaranteed to work on their system.

What release channels (edge/beta/candidate/stable) in the store are you using or plan to use?

We use the stable channel for milestone releases and the edge channel for intermediate builds (nightlies).
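
In user terms (standard snap commands, shown here as a hedged illustration):

sudo snap install keepassxc            # tracks the stable channel by default
sudo snap refresh keepassxc --edge     # switch to the nightly/edge builds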

Is there any other software you develop that might also become available as a Snap in the future?

Not at this time, but if I ever publish another cross-platform tool, I will certainly use the ubuntu store and snap builds.

How do you think packaging KeePassXC as a snap helps your users? Did you get any feedback from them?

Our users are able to discover, download, and use our app in a matter of seconds through the Ubuntu store. Packaging as a snap also removes the dependency nightmare of different Linux distributions. Snap users easily find us on Github and provide feedback on their experience. Most of the issues we have run into involve theming, plugs, and keyboard shortcuts.

How would you improve the snap system?

First I would make it easier to navigate around the developer section of the ubuntu store. It is currently a little confusing on how to get to where your current snaps are. [Note: this is work in progress, stay tuned!]

As far as snaps themselves, I wish they were built more like docker containers where different layers could be combined dynamically to provide the final product. For example, our application uses Qt5 which causes the snap size to bloat up to 70 MB. Instead, the Qt5 binaries should be provided as an independent, shared snap that gets dynamically loaded with our application’s snap. This would greatly cut down on the size and compile time of the deployment; especially if you have multiple Qt apps which all carry their own unique build. [Note: Content interfaces were built for this purpose]

Reduce the number of plugs that require manual connection. It would also be helpful if there was a GUI for the user to enable plugs for specific snaps.

Finally, I had the opportunity to try out the new build.snapcraft.io tool. It seems like the perfect answer to keeping up to date with building and deploying snaps to the store. The only downside I found was that it was impossible to limit the building to just the master and develop branch. This caused over 20 builds to be performed due to how active our project was (PR’s, feature branches, etc). [Note: Great feedback! build.snapcraft.io is evolving this is definitely something we’ll look into]

19 June, 2017 11:07AM

Sean Davis: Development Release: Xfce Settings 4.13.1

The second release of the GTK+ 3 powered Xfce Settings is now ready for testing (and possibly general use).  Check it out!

What’s New?

This release now requires xfconf 4.13+.

New Features

  • Appearance Settings: New configuration option for default monospace font
  • Display Settings: Improved support for embedded DisplayPort connectors

Bug Fixes

  • Display Settings: Fixed drawing of displays; it was hit and miss before, now it's guaranteed
  • Display Settings: Fixed drag-n-drop functionality, the grab area occupied the space below the drawn displays
  • Display Settings (Minimal): The mini dialog now runs as a single instance, which should help with some display drivers (Xfce #11169)
  • Fixed linking to dbus-glib with xfconf 4.13+ (Xfce #13633)

Deprecations

  • Resolved gtk_menu_popup and gdk_error_trap_pop deprecations
  • Ignoring GdkScreen and GdkCairo deprecations for now. Xfce shares this code with GNOME and Mate, and they have not found a resolution yet.

Code Quality

  • Several indentation fixes
  • Dropped duplicate drawing code, eliminating another deprecation in the process

Translation Updates

Arabic, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, French, Galician, German, Greek, Hebrew, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Swedish, Thai, Ukrainian

Downloads

The latest version of Xfce Settings can always be downloaded from the Xfce archives. Grab version 4.13.1 from the below link.

http://archive.xfce.org/src/xfce/xfce4-settings/4.13/xfce4-settings-4.13.1.tar.bz2

  • SHA-256: 01b9e9df6801564b28f3609afee1628228cc24c0939555f60399e9675d183f7e
  • SHA-1: 9ffdf3b7f6fad24f4efd1993781933a2a18a6922
  • MD5: 300d317dd2bcbb0deece1e1943cac368

19 June, 2017 09:40AM


Stephen Michael Kellat: Mission Reports

Well, taking just over 60 days to write again is not generally a good sign. Things have been incredibly busy at the day job. Finding out that a Reduction In Force is expected to happen in late September/early October also sharpens the mind as to the state of the economy. Our CEO at work is somewhat odd, to say the least. Certain acts by the CEO remain incredibly confusing if not utterly baffling.

In UK-slang, I guess I could probably be considered a "God-botherer". I've been doing work as an evangelist lately. The only product though has been the Lord's Kingdom. One of the elders at church wound up with their wife in a local nursing home due to advanced age as well as deteriorating health so I got tasked with conducting full Sunday services at the nursing home. Compared to my day job, the work has been far more worthwhile serving people in an extended care setting. Sadly it cannot displace my job that I am apparently about to lose in about 90 days or so anyhow thanks to pending actions of the board and CEO.

One other thing I have had running in the background has been the external review of Outernet. A short research note was drawn up in LaTeX and was submitted somewhere but bounced. Thanks to the magic of Pandoc, I was able to convert it to HTML to tack on to this blog post.

The Outernet rig in the garage

The Outernet rig is based in my garage to simulate a field deployment. The goal by their project is to get these boards into the wild in places like the African continent. Those aren't "clean room" testing environments. If anything, temperature controls go out the window. My only indulgence is that I added on an uninterruptible power supply due to known failures in the local grid.

The somewhat disconnected Raspberry Pi B+ known as ASTROCONTROL to connect to the Outernet board to retrieve materials

Inside the house a Raspberry Pi running Raspbian is connected via Ethernet to a Wi-Fi extender to reach out to the Outernet board. I have to manually set the time every time that ASTROCONTROL is used. Nothing in the mix is connected to the general Internet. The connection I have through Spectrum is not really all that great here in Ashtabula County.

As seen through ConnectBot, difficulties logging in

The board hit a race condition at one point recently where nothing could log in. A good old-fashioned IT Crowd-style power-cycling resolved the issue.

Pulling files on the Outernet board itself as seen in a screenshot via Cathode on an iPad

Sometimes I have used the Busybox version of tar on the board to gather files to review off the board.

The Outernet UI as seen on a smartphone

The interface gets a little cramped on a smartphone like the one I have.

And now for the text of the paper that didn't make the cut...

Introduction

A current endeavor is to review the Outernet content distribution system. Outernet is a means to provide access to Internet content in impaired areas.1 This is not the only effort to do so, though. At the 33rd Chaos Communications Congress there was a review of the signals being transmitted with a view to reverse engineering it.2 The selection of content, as well as the innards of the mainboard shipped in the do-it-yourself kit, remain ongoing areas of review.

In terms of concern, how is the content selected for distribution over the satellite platform? There is no known content selection policy. Content reception was observed to try to discern any patterns.

As to the software involved, how was the board put together? Although the signals were focused on at the Chaos Communications Congress, it is appropriate to learn what is happening on the board itself. As designed, the system intends for access to be had through a web browser. There is no documented method of bulk access for data. A little sleuthing shows that that is possible, though.

Low-Level Software

The software powering the mainboard, a C.H.I.P. device, was put together in an image using the Buildroot cross-compilation system. Beyond the expected web-based interface, a probe using Nmap found that ports were open for SSH as well as traditional FTP. The default directory for the FTP login is a mount point where all payloads received from the satellite platform are stored. The SSH session is provided by Dropbear and deposits you in a Busybox session.
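
A hedged reconstruction of that probe (10.0.0.1 is the board's access-point address given later in this post; the port choices are illustrative):

nmap -p 21,22,80 10.0.0.1     # check FTP, SSH and the web interface
ftp 10.0.0.1                  # the FTP login lands in the payload mount point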

The mainboard currently in use has been found to have problems with power interruption. After having to re-flash the board due to filesystem corruption caused by a minor power disruption, an uninterruptible power supply was purchased to keep it running. Over thirty days of running, as measured by the Busybox-exposed command uptime, was gained by putting the rig on that supply. The system does not cope well with the summer heat observed in northeast Ohio, either: we have had to power-cycle it during high-temperature periods when remote access became unavailable.

Currently the Outernet mainboard is being operated air-gapped from other available broadband to observe how it would operate in an Internet-impaired environment. The software operates a Wi-Fi access point on the board with the board addressable at 10.0.0.1. Maintaining a constant connection through a dedicated Raspberry Pi and associated monitor plus keyboard has not proved simple so far.

Content Selection

Presently a few categories of data are routinely transmitted. Weather data is sent for viewing in a dedicated applet. News ripped from the RSS feeds of selected news outlets such as the Voice of America, Deutsche Welle, and WTOP is sent routinely but is not checked for consistency. For example, one feed routinely pushes a page daily that the entire feed is just broken. Pages from Wikipedia are sent but there is no pattern discernible yet as to how the pages are picked.

Currently there is a need to review how Wikipedia may make pages available in an automated fashion. It is an open question as to how these pages are being scraped. Is there a feed? Is there manual intervention at the point of uplink? The pages sent are not the exact web-based versions or PDF exports but rather the printer-friendly versions. For now investigation needs to occur relative to how Wikipedia releases articles to see if there is anything that correlates with what is being released.

There are still open questions that require review. The opacity of the content selection policies and procedures limit the platform's utility. That opacity prevents a user having a reasonable expectation of what exactly is coming through on the downlink.

Conclusion

A technical platform is only a means. With the computers involved at each end, older ideas for content distribution are reborn for access-impaired areas. Content remains key, though.


  1. Alyssa Danigelis, "'Outernet' Project Seeks Free Internet Access For Earth: Discovery News," DNews, February 25, 2014, http://news.discovery.com/tech/gear-and-gadgets/outernet-project-seeks-free-internet-access-for-earth-140225.htm.

  2. Reverse Engineering Outernet (Hamburg, Germany, 2016), https://media.ccc.de/v/33c3-8399-reverse_engineering_outernet.

19 June, 2017 01:41AM

June 18, 2017

hackergotchi for SparkyLinux

SparkyLinux

Update checker & notifier

There is a new, small tool available for Sparkers: Update Checker & Notifier

The tool checks in the background for packages to be upgraded and displays a notification on the desktop.

Installation:
sudo apt update
sudo apt install sparky-aptus-upgrade-checker

Then reboot to let it start working.

It doesn’t run if:
• running live system
• no active internet connection
• ‘sparky-firstrun’ is installed

It runs once per system boot, with a 30-second delay.
If it runs and finds packages to be upgraded, it lets you start Sparky's default upgrade tool (sparky-aptus-upgrade).

It uses Yad, so it’s desktop independent.


18 June, 2017 06:26PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Sean Davis: Development Release: Exo 0.11.3

Xfce 4.14 development has been picking up steam in the past few months.  With the release of Exo 0.11.3, things are only going to get steamier.  

What is Exo?

Exo is an Xfce library for application development. It was introduced years ago to aid the development of Xfce applications.  It’s not used quite as heavily these days, but you’ll still find Exo components in Thunar (the file manager) and Xfce Settings Manager.

Exo provides custom widgets and APIs that extend the functionality of GLib and GTK+ (both 2 and 3).  It also provides the mechanisms for defining preferred applications in Xfce.

What’s New in Exo 0.11.3?

New Features

  • exo-csource: Added a new --output flag to write the generated output to a file (Xfce #12901)
  • exo-helper: Added a new --query flag to determine the preferred application (Xfce #8579); hedged usage sketches of both new flags follow below
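
A hedged sketch of how the two new flags might be invoked (option spellings assumed from the release notes, not verified against the binaries):

exo-csource --static --name=my_icon --output=my-icon.h my-icon.png
exo-helper-1 --query WebBrowser     # print the preferred web browser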

Build Changes

  • Build requirements were updated.  Exo now requires GTK+ 2.24, GTK 3.20, GLib 2.42, and libxfce4ui 4.12
  • Building GTK+ 3 libraries is no longer optional
  • Default debug setting is now “yes” instead of “full”. This means that builds will not fail if there are deprecated GTK+ symbols (and there are plenty).

Bug Fixes

  • Discard preferred application selection if dialog is canceled (Xfce #8802)
  • Do not ship generic category icons, these are standard (Xfce #9992)
  • Do not abort builds due to deprecated declarations (Xfce #11556)
  • Fix crash in Thunar on selection change after directory change (Xfce #13238)
  • Fix crash in exo-helper-1 from GTK 3 migration (Xfce #13374)
  • Fix ExoIconView being unable to decrease its size (Xfce #13402)

Documentation Updates

Available here

  • Add missing per-release API indices
  • Resolve undocumented symbols (100% symbol coverage)
  • Updated project documentation (HACKING, README, THANKS)

Translation Updates

Amharic, Asturian, Catalan, Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, Galician, Greek, Indonesian, Kazakh,  Korean, Lithuanian, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese (Brazil), Russian, Serbian, Slovenian, Spanish, Thai

Downloads

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.11.3 from the below link.

http://archive.xfce.org/src/xfce/exo/0.11/exo-0.11.3.tar.bz2

  • SHA-256: 448d7f2b88074455d54a4c44aed08d977b482dc6063175f62a1abfcf0204420a
  • SHA-1: 758ced83d97650e0428563b42877aecfc9fc3c81
  • MD5: c1801052163cbd79490113f80431674a

18 June, 2017 05:30PM

Kubuntu General News: Latest round of backports PPA updates include Plasma 5.10.2 for Zesty 17.04

Kubuntu 17.04 – Zesty Zapus

The latest 5.10.2 bugfix update for the Plasma 5.10 desktop is now available in our backports PPA for Zesty Zapus 17.04.

Included with the update is KDE Frameworks 5.35.

Kdevelop has also been updated to the latest version, 5.1.1.

Our backports for Xenial Xerus 16.04 also receive updated Plasma and Frameworks, plus some requested KDE applications.

Kubuntu 16.04 – Xenial Xerus

  • Plasma Desktop 5.8.7 LTS bugfix update
  • KDE Frameworks 5.35
  • Digikam 5.5.0
  • Kdevelop 5.1.1
  • Krita 3.1.4
  • Konversation 1.7.2
  • Krusader 2.6

To update, use the Software Repository Guide to add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


Upgrade notes:

~ The Kubuntu backports PPA already contains significant version upgrades of Plasma, applications, Frameworks (and Qt for 16.04), so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to the versions in this announcement.  The PPA will also continue to receive bugfix and other stable updates when they become available.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], file a bug against our PPA packages [3], or optionally contact us via social media.

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

18 June, 2017 01:08PM

hackergotchi for AIMS Desktop developers

AIMS Desktop developers

AIMS Desktop 2017.1 is available!

Back at DebConf 15 in Germany, I gave a talk on AIMS Desktop (which was then based on Ubuntu), and our intentions and rationale for wanting to move it over to being Debian based.

Today, alongside the Debian 9 release, we release AIMS Desktop 2017.1, the first AIMS Desktop released based on Debian. For Debian 10, we’d like to get the last remaining AIMS Desktop packages into Debian so that it could be a Debian pure blend.

Students trying out a release candidate at AIMS South Africa

It's tailored to the needs of students, lecturers and researchers at the African Institute for Mathematical Sciences. We're releasing it to the public in the hope that it could be useful for other tertiary education users with an interest in maths and science software. If you run a mirror at your university, it would also be great if you could host a copy. We added an rsync location on the downloads page which you could use to keep it up to date.
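
Keeping a mirror current then boils down to a one-liner along these lines (the host and module path are placeholders; take the real rsync URL from the downloads page):

rsync -av --delete rsync://mirror.example.org/aims-desktop/ /srv/mirror/aims-desktop/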

18 June, 2017 04:55AM by jonathan

Debian 9 is available!

Congratulations to everyone who has played a part in the creation of Debian GNU/Linux 9.0! It’s a great release, I’ve installed the pre-release versions for friends, family and colleagues and so far the feedback has been very positive.

This release is dedicated to Ian Murdock, who founded the Debian project in 1993 and sadly passed away on 28 December 2015. On the Debian ISO files, a dedication statement is available at /doc/dedication/dedication-9.0.txt

Here’s a copy of the dedication text:

Dedicated to Ian Murdock
------------------------

Ian Murdock, the founder of the Debian project, passed away
on 28th December 2015 at his home in San Francisco. He was 42.

It is difficult to exaggerate Ian's contribution to Free
Software. He led the Debian Project from its inception in
1993 to 1996, wrote the Debian manifesto in January 1994 and
nurtured the fledgling project throughout his studies at
Purdue University.

Ian went on to be founding director of Linux International,
CTO of the Free Standards Group and later the Linux
Foundation, and leader of Project Indiana at Sun
Microsystems, which he described as "taking the lesson
that Linux has brought to the operating system and providing
that for Solaris".

Debian's success is testament to Ian's vision. He inspired
countless people around the world to contribute their own free
time and skills. More than 350 distributions are known to be
derived from Debian.

We therefore dedicate Debian 9 "stretch" to Ian.

-- The Debian Developers

During this development cycle, the number of source packages in Debian grew from around 21 000 to around 25 000, which means that there’s a whole bunch of new things Debian can make your computer do. If you find something new in this release that you like, post about it on your favourite social networks, using the hashtag #newinstretch – or look it up to see what others have discovered!

18 June, 2017 04:00AM by jonathan

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: The Community Data Science Collective Dataverse

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

For each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we’ve created. Of course, even if we do a wonderful job of keeping these websites maintained over time, our research group will eventually cease to exist. When that happens, the data will eventually disappear as well.

The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page but it’s online on the site. Moving forward, we’ll be populating it with new datasets we create as well as replication datasets for our future empirical papers. We’re currently preparing several more.

The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance so that if, and when, those websites go down, our data will still be accessible.


This post was also published on the Community Data Science Collective blog.

18 June, 2017 02:35AM

June 17, 2017

Joe Barker: Configuring msmtp on Ubuntu 16.04

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (i.e. <PASSWORD>) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I ran into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that php.ini has been saved, run sudo service apache2 restart, then run php -a and execute the following:

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

17 June, 2017 08:32PM

June 16, 2017

Jono Bacon: Don’t Use Bots to Engage With People on Social Media

I am going to be honest with you, I am writing this post out of one part frustration and one part guidance to people who I think may be inadvertently making a mistake. I wanted to write this up as a blog post so I can send it to people when I see this happening.

It goes like this: when I follow someone on Twitter, I often get an automated Direct Message which looks something along these lines:

These messages invariably are either trying to (a) get me to look at a product they have created, (b) trying to get me to go to their website, or (c) trying to get me to follow them somewhere else such as LinkedIn.

Unfortunately, there are two similar approaches which I think are also problematic.

Firstly, some people will have an automated tweet go out (publicly) that “thanks” me for following them (as best an automated bot who doesn’t know me can thank me).

Secondly, some people will even go so far as to record a little video that personally welcomes me to their Twitter account. This is usually less than a minute long and again is published as an integrated video in a public tweet.

Why you shouldn’t do this

There are a few reasons why you might want to reconsider this:

Firstly, automated Direct Messages come across as spammy. Sure, I chose to follow you, but if my first interaction with you is advertising, it doesn’t leave a great taste in my mouth. If you are going to DM me, send me a personal message from you, not a bot (or not at all). Definitely don’t try to make that bot seem like a human: much like someone trying to suppress a yawn, we can all see it, and it looks weird.

Pictured: Not hiding a yawn.

Secondly, don’t send out the automated thank-you tweets to your public Twitter feed. This is just noise that everyone other than the people you tagged won’t care about. If you generate too much noise, people will stop following you.

Thirdly, in terms of the personal video messages (and in a similar way to the automated public thank-you messages), in addition to the noise it all seems a little…well, desperate. People can sniff desperation a mile off: if someone follows you, be confident in your value to them. Wow them with great content and interesting ideas, not fabricated personal thank-you messages delivered by a bot.

What underlies all of this is that most people want authentic human engagement. While it is perfectly fine to pre-schedule content for publication (e.g. lots of people use Buffer to have a regular drip-feed of content), automating human engagement just doesn’t hit the mark with authenticity. There is an uncanny valley that people can almost always sniff out when you try to make an automated message seem like a personal interaction.

Of course, many of the folks who do these things are perfectly well intentioned and are just trying to optimize their social media presence. Instead of doing the above things, see my 10 recommendations for social media as a starting point, and explore some other ways to engage your audience well and build growth.

The post Don’t Use Bots to Engage With People on Social Media appeared first on Jono Bacon.

16 June, 2017 11:46PM

Ubuntu Podcast from the UK LoCo: S10E15 – Numberless Thoughtless Goldfish - Ubuntu Podcast

This week Alan and Martin go flashing. We discuss Firefox multi-process, Minecraft now has cross platform multiplayer, the GPL is being tested in court and binary blobs in hardware are probably a bad thing.

It’s Season Ten Episode Fifteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

16 June, 2017 09:33PM

Ubuntu Insights: Ubuntu Server Development Summary – 16 Jun 2017

The purpose of this weekly update is to make sure our community can follow development with toes dipped in before and between jumping headlong into helping shape Ubuntu Server!

Spotlight: Task Tracking

The Canonical Server Team is using Trello to track our weekly tasks. Feel free to take a peek and follow along on the Ubuntu Server Daily board.

cloud-init and curtin

cloud-init

  • Uploaded package to Artful and supported releases proposed
  • Met with Redhat team to discuss packaging and release processes
  • Change config/cloud.cfg to act as a template to allow downstream distributions to generate it for special needs
  • Added makefile target to install dependencies on various downstream distributions
  • Enable auto-generation of module docs from schema attribute if present
  • Change Redhat spec file based on init system
  • Convert templates from cheetah to jinja to allow building in python3 environments (a minimal template sketch follows this list)
  • Setup testing of daily cloud-init COPR builds
  • Fix LP: #1693361 race between apt-daily and cloud-init
  • Fix LP: #1686754 sysconfig renderer from leaving CIDR notation instead of netmask
  • Fix LP: #1686751 selinux issues while running under Redhat
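
As referenced in the template item above, cloud-init templates declare their rendering engine in a first-line header; a minimal sketch of the jinja form (illustrative only, not a file from this release):

## template:jinja
# cloud.cfg fragment rendered per downstream distribution
{% if variant == "ubuntu" %}
distro: ubuntu
{% else %}
distro: {{ variant }}
{% endif %}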

curtin

  • Created PPA for MAAS passthrough networking test
  • Fix LP: #1645680 adding PPA due to new GPG agent

Bug Work and Triage

  • Extended Ubuntu Server triage tool to assist with expiration of bugs in backlog
  • Review expiring ubuntu-server subscribed bugs in backlog
  • Review server-next tagged bugs for priority and relevance
  • Triage samba bugs from backlog
  • 64 bugs reviewed, 1 accepted, 317 in the backlog
  • Notes on daily bug triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page.

Uploads to the Development Release (Artful)

billiard, 3.5.0.2-0ubuntu1, nacc
celery, 4.0.2-0ubuntu1, nacc
cloud-initramfs-tools, 0.38ubuntu1, smoser
curtin, 0.1.0~bzr505-0ubuntu1, smoser
lxcfs, 2.0.7-0ubuntu3, stgraber
lxd, 2.14-0ubuntu4, stgraber
lxd, 2.14-0ubuntu3, stgraber
nss, 2:3.28.4-0ubuntu2, mdeslaur
python-boto, 2.44.0-1ubuntu2, racb
python-tornado, 4.5.1-0ubuntu1, mwhudson
rrdtool, 1.6.0-1ubuntu1, vorlon
ruby2.3, 2.3.3-1ubuntu1, mdeslaur
samba, 2:4.5.8+dfsg-2ubuntu1, mdeslaur
Total: 13

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

cloud-init, xenial, 0.7.9-153-g16a7302f-0ubuntu1~16.04.1, smoser
cloud-init, yakkety, 0.7.9-153-g16a7302f-0ubuntu1~16.10.1, smoser
cloud-init, zesty, 0.7.9-153-g16a7302f-0ubuntu1~17.04.1, smoser
ebtables, trusty, 2.0.10.4-3ubuntu1.14.04.1, slashd
ebtables, xenial, 2.0.10.4-3.4ubuntu2, slashd
ebtables, yakkety, 2.0.10.4-3.5ubuntu1.16.10.1, slashd
ebtables, zesty, 2.0.10.4-3.5ubuntu1.17.04.1, slashd
lxc, zesty, 2.0.8-0ubuntu1~17.04.2, stgraber
lxc, yakkety, 2.0.8-0ubuntu1~16.10.2, stgraber
lxc, xenial, 2.0.8-0ubuntu1~16.04.2, stgraber
lxd, zesty, 2.14-0ubuntu3~17.04.1, stgraber
lxd, yakkety, 2.14-0ubuntu3~16.10.1, stgraber
lxd, xenial, 2.14-0ubuntu3~16.04.1, stgraber
multipath-tools, yakkety, 0.5.0+git1.656f8865-5ubuntu7.3, cyphermox
vlan, trusty, 1.9-3ubuntu10.4, slashd
vlan, xenial, 1.9-3.2ubuntu1.16.04.3, slashd
vlan, yakkety, 1.9-3.2ubuntu2.16.10.2, slashd
vlan, zesty, 1.9-3.2ubuntu2.17.04.2, slashd
Total: 18

Contact the Ubuntu Server team

16 June, 2017 06:48PM

Ubuntu Insights: Ubuntu Desktop Weekly Update: June 16, 2017

GNOME

  • Further theme fixes have been made in Artful to get GNOME Shell and Ambiance looking just right.
  • Network Manager is updated to 1.8. It is currently awaiting the resolution of some test issues before it migrates to the release, but that should take place in the coming days.
  • GNOME Terminal received a small fix to make it easier to create custom terminals. Andy Whitcroft from the kernel team blogs about it here

LivePatch

Work is continuing on the Live Patch client UI. We can now install, enable and disable the Live Patch Snap from the Software Properties window. Next up will be showing notifications when the Live Patch service is protecting your computer.

Snaps

  • GNOME Software now works with the Snap Store to show promoted Snaps, or “Editors Picks”. This is released into Artful and other supported releases will follow.
  • We debugged and fixed some desktop Snap theming issues. There were some file sharing changes needed in snapd in the “Unity7” interface (which will need renaming) and these are now merged. More fixes to the desktop launcher scripts were done to provide further default theming, and these were added to the GNOME Platform Snap as well.
  • James Henstridge has been working on getting Snaps to work with Portals, and he’s making great progress. You can read more about it, and how to test it, here:
    https://forum.snapcraft.io/t/xdg-desktop-portal-proof-of-concept-demo/1027

QA

We’re reviewing and updating the desktop test plan. Once this is finalised (due next week) we’ll be announcing a call-for-testing programme with small, quick tests you can perform regularly and feed back your findings. This will help us to ensure the overall quality of the desktop images is kept high throughout the development cycle. More on this soon.

We’re also running our automated tests on real hardware with Intel, Nvidia and AMD graphics cards to cover the main bases.

Video Acceleration

We’re working through all the various links in the chain to get to a situation where we can play back video using hardware acceleration by default. At the moment our focus is on getting it to work on Intel graphics hardware. There are a few issues around using Intel’s SDK with the open-source LibVA, but these are being worked on upstream:

https://github.com/Intel-Media-SDK/MediaSDK/issues/10

In the meantime you can read the current state of play here: https://wiki.ubuntu.com/IntelQuickSyncVideo

Updates

  • Chromium 59.0.3071.86 was promoted to stable, but we found a couple of issues. They’re being worked on right now and the test plan has been updated to catch them in the future.
  • Chromium beta is 60.0.3112.24 and dev is 61.0.3124.4.
  • Network Manager 1.8 has been merged from Debian into Artful.
  • BlueZ 5.45 made it out of testing into Artful.
  • Evolution got updated to the 3.24 series.

News

16 June, 2017 04:13PM

Jono Bacon: Interview with Jeff Atwood on Building Communities

I recently did an interview with Jeff Atwood, co-creator of StackExchange and Discourse, about his approach to building platforms, communities, and more.

Read it here.

The post Interview with Jeff Atwood on Building Communities appeared first on Jono Bacon.

16 June, 2017 12:16AM

Jono Bacon: What is IT culture? Today’s leaders need to know

See my new post for opensource.com about how you build culture in an organization/community:

“Culture” is a pretty ambiguous word. Sure, reams of social science research explore what exactly “culture” is, but to the average Joe and Josephine the word really means something different than it does to academics. In most scenarios, “culture” seems to map more closely to something like “the set of social norms and expectations in a group of people.” By extension, then, an “IT culture” is simply “the set of social norms and expectations pertinent to a group of people working in an IT organization.”
I suspect most people see themselves as somewhat passive contributors to this thing called “culture.” Sure, we know we can all contribute to cultural change, but I don’t think most people actually feel particularly empowered to make this kind of meaningful change. On top of that, we can also observe significant changes in cultural norms that depend on variables like time and geography. An IT company in China, for example, might have a very different culture from a company in the San Francisco area. A startup in Birmingham, England, will have a different culture from a similar startup in Berlin, Germany. And so on.
Culture is critical. It’s the lifeblood of an organization, but it’s complicated to understand and shape. The “IT culture” of the 1980s and 1990s differs from “IT culture” today—and it will be different again 10 years from now. Apart from generational changes, cultural norms for IT practitioners have changed, too. Today, digital technology is more social, more accessible to people with fewer technical skills, and more embedded in our consumer-oriented world than ever. We’ve learned to cherish simplicity, elegance, and design, and this is reflected in the kinds of organizations that are forming.

Read it here.

The post What is IT culture? Today’s leaders need to know appeared first on Jono Bacon.

16 June, 2017 12:11AM

June 15, 2017

Ubuntu Insights: Gitter and Mattermost: two desktop apps for your future chat platform

In the hunt for the perfect communication platform or protocol, a lot of companies are experimenting, which can lead to some confusion as not everyone is moving at the same pace: one team on IRC, another one on Slack, one on “anything but Slack, have you tried Mattermost? It’s almost like RocketChat”. Then, if a platform emerges victorious, come the client wars: which version? Does the web version have fewer features than the desktop client? What about the mobile client?

This post doesn’t intend to solve the conundrum, nor advocate for one platform over the others, as its author currently has 6 notifications on Telegram, 17 highlights on IRC, 1 mention on RocketChat and 2 on Slack.

What this post proposes is to have an easy and painless way to experience (and experiment with) some of these platforms. Electron applications are really useful when it comes to that: they integrate neatly into the desktop experience and find their place in most workflows.

Enter snaps

As of today, if you are a Mattermost or Gitter user, you can install their respective desktop client as snaps on all supported Linux distributions (including Fedora, openSUSE, Debian…).

Why snaps when these apps have packages available on their website and/or repository? Snaps mean you don’t have to care about updating them anymore or look for the right binary to unpack. It also means they can be completely isolated from the parts of the filesystem you care about, and that you can switch to the beta version, or even tip of master, in a single command, then roll back to stable if the version is broken.
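
For example, switching a snap to its beta channel and rolling back again is just two commands (a sketch using the gitter-desktop snap introduced below; any snap name works the same way, provided the publisher has populated that channel):

sudo snap refresh gitter-desktop --channel=beta   # switch to the beta channel
sudo snap revert gitter-desktop                   # roll back to the previously installed revision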

Gitter

Website: gitter.im

Gitter is a rapidly growing platform primarily used to add a chat service to GitHub and Gitlab repositories. With over 800,000 users, Gitter has recently been acquired by Gitlab and is on its way to being open sourced.

To install Gitter as a snap, search for “gitter-desktop” in the Ubuntu Software Center, or on the command line:

sudo snap install gitter-desktop

Mattermost

Website: about.mattermost.com

Mattermost is a highly extensible, open source, self-hosted communication platform that connects to hundreds of cloud services and can be integrated with almost anything using webhooks, RESTful and language-specific APIs.

While the server itself can be installed in ten minutes with orchestration solutions such as Juju, you can also install the desktop client in a minute, with a single command.

To install Mattermost as a snap, search for “mattermost-desktop” in the Ubuntu Software Center, or on the command line:

sudo snap install mattermost-desktop

Learning more about snaps

You can expect more desktop clients and more Electron apps in general to land in the Snap store in the next few weeks. If you want to have a go at snapping your own apps, you can find all the documentation on snapcraft.io, including your personal cross-architecture build farm.

To discuss snaps and snapcraft, you can reach out to the snap community and developers on… Discourse and IRC!

15 June, 2017 11:17PM by Ubuntu Insights (david.calle@canonical.com)

Scarlett Clark: I’m going to Akademy! Neon team and more..

Akademy 2017

Yes, I fear I have let my blog go a bit defunct. I have been very busy with a bit of a life re-invented after separation from my 18-year marriage. But all is now well in
the land of Scarlett Gately Clark. I have now settled into my new life in beautiful Payson, AZ. I landed my dream job with Blue Systems, and recently moved to team Neon, where I will be back
at what I am good at: Debian-style packaging! I will also be working on Plasma Mobile! Exciting times. I will be attending Akademy, though out of my own pocket, as I was unable to
procure funding. (I did not ask KDE e.V. due to my failure to assist with KDE CI.) I don’t know what happened with CI; I turned around and it was all done. At least it got done. Thanks, Ben!
I do plan to assist in the future with CI tickets and the like, as soon as the documentation is done!
Harald and I will be hosting a Snappy BoF at Akademy; hope to see you there!

If you find any of my work useful, please consider a donation or become a patron!
I have 500 USD a month in student loans that are killing me. I also need funding for sprints and
Akademy. Thank you for any assistance you can provide!
Patreon for Scarlett Clark (me)

15 June, 2017 08:16PM

Ubuntu Insights: Juju 2.2.0 and conjure-up 2.2.0 are here!

We are excited to announce the release of Juju 2.2.0 and conjure-up 2.2.0! This release greatly enhances memory and CPU utilisation at scale, improves the modelling of networks, and adds support for KVM containers on arm64. Additionally, there is now outline support for Oracle Compute, and vSphere clouds are now easier to deploy. conjure-up now supports Juju as a Service (JAAS), macOS clients, Oracle and vSphere clouds, and repeatable spell deployments.

How can I get it?

The best way to get your hands on this release of Juju and conjure-up is to install them via snap packages (see https://snapcraft.io/ for more info on snaps).

         snap install juju --classic
         snap install conjure-up --classic

Other packages are available for a variety of platforms. Please see the online documentation at https://jujucharms.com/docs/stable/reference-install. Those subscribed to a snap channel should be automatically upgraded. If you’re using the ppa/homebrew, you should see an upgrade available.

Upgrading

Changes introduced in 2.2.0 mean that you should also upgrade any controllers and hosted models after installing the new client software. Please see the documentation at https://jujucharms.com/docs/2.2/models-upgrade#upgrading-the-model-software for more information.
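
As a minimal sketch of that sequence (assuming a snap-installed client and a hosted model named “mymodel”):

sudo snap refresh juju            # update the client first
juju upgrade-juju -m controller   # then upgrade the controller model
juju upgrade-juju -m mymodel      # finally upgrade each hosted model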

New and Improved

  • Users can now deploy workloads to Centos7 machines on Azure.
  • vSphere Juju users with vCenter 5.5 and vCenter 6.0 can now bootstrap successfully and deploy workloads as well as have machines organised into folders.
  • Juju now has initial support for Oracle Cloud, https://jujucharms.com/docs/2.2/help-oracle.
  • Users of Azure can now benefit from better credential management support: we’ve eliminated the need to manually discover the subscription ID in order to add an Azure credential. All you need is to have the Azure CLI installed, and the regular Juju credential management commands will “Just Work”.
  • The ‘juju login’ command now accepts the name or hostname of a public controller as a parameter (see the sketch after this list). Passing a user to log in as has been moved to an option rather than a positional parameter.
  • The behavior of the Juju bootstrap argument ‘--metadata-source’ has changed. In addition to specifying a parent directory that contains “tools” and “images” subdirectories with metadata, this argument can now also point directly to one of these subdirectories if only one type of custom metadata is required. (lp:1696555)
  • Actions that require ‘sudo’ can now be used in conjure-up steps.
  • conjure-up now uses libjuju as its api client.
  • conjure-up can now deploy from release channels, e.g. ‘beta’.
  • There’s a new bootstrap configuration option, max-txn-log-size, that can be used to configure the size of the capped transaction log used internally by Juju. Larger deployments needed to be able to tune this setting; we don’t recommend setting this option without careful consideration.
  • General Juju log pruning policy can now be configured to specify maximum log entry age and log collection size, https://jujucharms.com/docs/2.2/controllers-config. 
  • Juju status history pruning policy can also be configured to specify maximum status entry age and status collection size, https://jujucharms.com/docs/2.2/models-config.
  • The ‘status --format=yaml’ and ‘show-machine’ commands now show more detailed information about individual machines’ network configuration.
  • Added support for AWS ‘ap-northeast-2’ region, and GCE ‘us-west1’, ‘asia-northeast1’ regions.
  • Actions have received some polish and can now be canceled, and showing a previously run action will include the name of the action along with the results.
  • Rotated Juju log files are now also compressed.
  • Updates to MAAS spaces and subnets can be made available to a Juju model using the new ‘reload-spaces’ command.
  • ‘unit-get private-address’ now uses the default binding for an application.
  • Juju models have always been internally identified by their owner and their short name. These full names have not been exposed well to the user but are now part of juju models and show-model command output.
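
To illustrate a few of the items above (the controller hostname and model name here are made up):

juju login controller.example.com   # log in to a public controller by hostname
juju reload-spaces                  # pick up MAAS spaces/subnet updates in the current model
juju models                         # output now includes each model’s full owner/name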

Fixes

  • Juju more reliably determines whether to connect to the MAASv2 or MAASv1 API based on MAAS endpoint URL as well as the response received from MAAS.
  • Juju is now built with Go version 1.8 to take advantage of performance improvements.
  • Juju users will no longer be missing their firewall rules when adding a new machine on Azure.
  • Juju models with storage can now be cleanly destroyed.
  • Juju is now resilient to a MITM attack as SSH Keys of the bootstrap host are now verified before bootstrap (lp:1579593).
  • Root escalation vulnerability in ‘juju-run’ has been fixed (lp:1682411).
  • Juju’s agent presence data is now aggressively pruned, reducing controller disk space usage and avoiding associated performance issues.
  • MAAS 2.x block storage now works with physical disks, when MAAS reports the WWN unique identifier. (lp:1677001).
  • Automatic bridge names are now properly limited to 15 characters in Juju (lp:1672327).
  • Juju subordinate units are now removed as expected when their principal is removed (lp:1686696 and lp:1655486).

You can check the milestones for a detailed breakdown of the Juju and conjure-up bugs we have fixed: https://launchpad.net/juju/+milestone/2.2.0 and https://github.com/conjure-up/conjure-up/milestone/19?closed=1

Known issues

Feedback Appreciated!

We encourage everyone to let us know how you’re using Juju. Join us at regular Juju shows – subscribe to our YouTube channel https://youtube.com/jujucharms

Send us a message on Twitter using #jujucharms, join us at #juju on freenode, and subscribe to the mailing list at juju at lists.ubuntu.com.

https://jujucharms.com/docs/stable/contact-us

More information

To learn more about these great technologies please visit https://jujucharms.com and http://conjure-up.io

15 June, 2017 07:38PM

Ubuntu Insights: Custom user mappings in LXD containers

LXD logo

Introduction

As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with just as much privileges on the host as a nobody user.

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups

Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:65536
root:100000:65536

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:65536
root:100000:65536

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

Now if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:1000000000
root:100000:1000000000

After altering those files, you need to restart LXD to have it detect the new map:

root@vorash:~# systemctl restart lxd
root@vorash:~# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
root@vorash:~#

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.

Per container maps

Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful with using 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

stgraber@castiana:~$ lxc config set test security.idmap.isolated true
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

stgraber@castiana:~$ lxc config set test security.idmap.size 200000
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

stgraber@castiana:~$ lxc exec test -- bash
root@test:~# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply
stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
lxd:201105:1
root:201105:1

stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
lxd:200512:1
root:200512:1

stgraber@castiana:~$ sudo systemctl restart lxd

stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -

stgraber@castiana:~$ lxc restart test

At which point, things should be working in the container:

stgraber@castiana:~$ lxc exec test -- su ubuntu -l
ubuntu@test:~$ ls -lh
total 119K
drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
ubuntu@test:~$

Conclusion

User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool which finally made containers on Linux safe by design. It is however not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level; this would let us decouple the on-disk user/group map from that used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Discussion forum: https://discuss.linuxcontainers.org
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

15 June, 2017 05:27PM

Jeremy Bicha: #newinstretch : Latest WebKitGTK+

GNOME Web (Epiphany) in Debian 9 "Stretch"

Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.
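
If you want to check which WebKitGTK+ you ended up with, querying the runtime library package is enough (a sketch; libwebkit2gtk-4.0-37 is the runtime package name in Debian 9):

apt policy libwebkit2gtk-4.0-37   # shows the installed and candidate versions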

Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

15 June, 2017 04:02PM

Ubuntu Insights: IBM & Canonical: A Virtualization and Cloud Computing (R-)Evolution

As modern IT evolves, there have been many milestones along the way. Starting with bare metal servers, followed by virtualization, then cloud computing, and beyond. Each advancement has created both challenges and opportunities for IT professionals. Today the industry is focused on deploying solutions that will improve overall IT operations while reducing overhead. Orchestration and modeling solutions help these organizations to integrate, manage, and deploy flexible solutions faster and with more consistency. Canonical and IBM have partnered to help our mutual customers with advanced virtualisation solutions on the IBM z and LinuxONE platforms.

How IBM and Canonical bring virtualisation options to their customers

[https://help.ubuntu.com/lts/serverguide/virtualization.html]

The combination of Ubuntu Server with the highly virtualized platforms IBM LinuxONE and IBM z Systems offers a highly competitive set of capabilities and virtualization options that meet many of the scalability, security, and isolation needs of our customers.

The IBM LinuxONE and z Systems machines already come with the Processor Resource/Systems Manager (PR/SM) or the Dynamic Partition Manager (DPM) built into firmware, providing creation and management of up to 85 logical partitions in the high-end models such as the IBM z13 or IBM LinuxONE Emperor. Although there are no real ‘bare metal’ options with IBM LinuxONE and z Systems available, there are several options for running Ubuntu Server on LinuxONE and z Systems, for example:

  • A: In the logical partition(s) which is as close to the hardware as it gets without being on bare metal
  • B: As a guest (aka virtual machine) under IBM z/VM, IBM’s commercially available hypervisor.

The newer options, open source based, are:

  • C: As a virtual machine (VM) under KVM hypervisor
  • D: As a machine container

But who provides the KVM hypervisor and the machine container foundation? Right out of the box, Ubuntu Server itself comes with:

  • A built-in KVM, the well-known Linux full-virtualization technology, with the same functions, look and feel across all architectures, while exploiting the hardware-assisted virtualization of the s390x architecture (the SIE instruction).
  • LXD, “the container hypervisor”, a lightweight operating-system virtualization concept based on Linux Containers technology, enabling organisations to move Linux VMs straight into (machine) containers.

The timeliness of the Ubuntu operating system itself ensures that our KVM is at the latest functional level; the s390x bits and pieces in particular are frequently brought upstream by IBM. This hand-in-hand delivery lets you get the most out of KVM, regardless of how it’s used (a short uvtool sketch follows this list):

  • using virsh only
  • using uvtool and
  • using virtinst
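
As a quick sketch of the uvtool route (the release, architecture filter and guest name here are only examples):

uvt-simplestreams-libvirt sync release=xenial arch=s390x   # fetch a cloud image into the local libvirt pool
uvt-kvm create myguest release=xenial                      # create and boot a KVM guest from it
uvt-kvm wait myguest                                       # block until the guest is up
uvt-kvm ssh myguest --insecure                             # log in to the guest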

Ubuntu Server acting as a KVM host is more than a valid and up-to-date alternative to other KVM-based solutions on s390x – like KVM for IBM z Systems, which was withdrawn from marketing by IBM on March 7, 2017 – see the KVM for IBM z Systems External Frequently Asked Questions.

For technical details and help on migrating from KVM for IBM z to Ubuntu KVM, the wiki page IBM KVM to Ubuntu KVM is recommended. It provides a quick technical summary of what an administrator should consider when trying to move guests from IBM-KVM to Ubuntu KVM.

LXD delivers fast, dense and secure basic container management. Containers in LXD pack workloads up to 10x more densely compared to VMs – hence LXD is a perfect way to utilize hardware more efficiently. And similarly to KVM, with LXD you can run different Ubuntu releases or even other Linux distributions inside LXD machine containers.
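
For instance, running several releases side by side is one command per container (a sketch; the image aliases are those published on the “ubuntu:” remote):

lxc launch ubuntu:16.04 xenial-test   # an Ubuntu 16.04 machine container
lxc launch ubuntu:14.04 trusty-test   # an Ubuntu 14.04 one next to it
lxc list                              # both show up with their own addresses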

The significant advantage of the open-source virtualization and container options, KVM and LXD, is that both are recognized by OpenStack. KVM is the default hypervisor for OpenStack, and LXD can be integrated using the nova-lxd driver.

A key benefit of an optimized deployment is the provisioning, orchestration and modelling provided by Juju and its Charms and Bundles, which are sets of scripts for reliably and repeatedly deploying, scaling and managing services within Juju. Even just the combination of Juju and LXD on Ubuntu Server can be considered a basic cloud, where each new instance runs inside a LXD container. Just install Ubuntu Server in an LPAR to fully benefit from the scale-up architecture of IBM LinuxONE and z Systems and the bare-metal performance of LXD containers.
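
Getting to that basic cloud is essentially a single bootstrap (a sketch; “localhost” is Juju’s built-in name for the local LXD cloud, the other names are arbitrary):

juju bootstrap localhost lxd-controller   # the controller itself runs in a LXD container
juju add-model demo
juju deploy ubuntu                        # each new unit lands in its own LXD container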

To conclude, there are many approaches to virtualisation and each has its own characteristics. For example:

  • LPARs are as close as possible to bare metal on IBM z Systems and LinuxONE
  • Ubuntu Server can act as KVM hypervisor and be integrated with OpenStack
  • Containers can be combined with any of the other virtualization options

There are also significant advantages:

  • Efficiency / Flexibility: LXD > KVM > LPAR
  • Isolation: LPAR > KVM > LXD

With the robustness of IBM LinuxONE and z Systems platforms, the combination of different virtualization and management options, even inside one physical server, orchestrated by Juju offer a broad range of options for modeling within the enterprise according to the customer’s specific needs.

15 June, 2017 03:05PM


Ubuntu Insights: Kernel Team Summary- June 15, 2017

Introduction

This blog is to provide a status update from the Ubuntu Kernel Team. There will also be highlights provided for any interesting subjects the team may be working on. If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: kernel-team@lists.ubuntu.com

Highlights

  • Unstable updated to 4.12-rc5
  • Virtualbox and zfs enabled in unstable/4.12
  • artful/4.11 updated to 4.11.4
  • Stress-ng 0.08.04 uploaded
  • Add new softlockup stressor, use with caution(!)
  • This is going to be the first of a bunch of RT stressors

The following kernels were promoted to -proposed for testing:

  • Zesty 4.10.0-23.25
  • Yakkety 4.8.0-55.58
  • Xenial 4.4.0-80.101
  • Trusty 3.13.0-120.167

The following kernels were promoted to -proposed for testing:

  • trusty/linux-lts-xenial 4.4.0-80.101~14.04.1
  • xenial/linux-hwe-edge 4.10.0-23.25~16.04.1
  • xenial/linux-hwe 4.8.0-55.58~16.04.1
  • xenial/linux-raspi2 4.4.0-1058.65
  • xenial/linux-snapdragon 4.4.0-1060.64
  • xenial/linux-aws 4.4.0-1019.28
  • xenial/linux-gke 4.4.0-1015.15
  • xenial/linux-joule 4.4.0-1002.7
  • yakkety/linux-raspi2 4.8.0-1039.42
  • zesty/linux-raspi2 4.10.0-1007.9

The following kernel snaps were uploaded to the store:

  • pc-kernel 4.4.0-79.100
  • pi2-kernel 4.4.0-1057.64
  • dragonboard-kernel 4.4.0-1059.63

Devel Kernel Announcements

The 4.11 kernel in artful-proposed has been updated to 4.11.4. It is also available for testing in the following PPA: https://launchpad.net/~canonical-kernel-team/+archive/ubuntu/proposed
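
If you want to test from that PPA, enabling it is the usual pair of commands (a sketch, using the PPA name from the URL above):

sudo add-apt-repository ppa:canonical-kernel-team/proposed
sudo apt update
sudo apt install linux-generic   # installs the proposed kernel meta package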

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable Kernel Announcements

Current cycle: 02-Jun through 24-Jun

  • 02-Jun Last day for kernel commits for this cycle
  • 05-Jun – 10-Jun Kernel prep week
  • 11-Jun – 23-Jun Bug verification & Regression testing
  • 26-Jun Release to -updates.

Kernel Versions

  • precise 3.2.0-126.169
  • trusty 3.13.0-119.166
  • vivid 3.19.0-84.92
  • xenial 4.4.0-78.99
  • yakkety 4.8.0-53.56
  • linux-lts-trusty 3.13.0-117.164~precise1
  • linux-lts-vivid 3.19.0-80.88~14.04.1
  • linux-lts-xenial 4.4.0-78.99~14.04.1

Next cycle: 23-Jun through 15-Jul

  • 23-Jun Last day for kernel commits for this cycle
  • 26-Jun – 01-Jul Kernel prep week
  • 02-Jul – 14-Jul Bug verification & Regression testing
  • 17-Jul Release to -updates.

Status: CVE’s

The current CVE status can be reviewed at the following: http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html

15 June, 2017 01:03PM

Rhonda D'Vine: Apollo 440

It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

So, without further ado, here are their songs:

  • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
  • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
  • Krupa: Also a very up-cheering song!

As always, enjoy!

/music | permanent link | Comments: 2 | Flattr this

15 June, 2017 10:27AM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Data Privacy-Compliant Integration of Office 365 in Fulda schools

The municipal authorities of the German city of Fulda in the state of Hesse are responsible for the administration and operation of the IT in 23 schools in Fulda – including 2 vocational schools and 2 grammar schools – for a total of 13,000 pupils and 1,000 members of teaching staff.

Unlike in the rest of Hesse, as an education authority for a small city, we have the city’s own well-developed fiber-optic network at our disposal. The majority of the schools are already connected to the fiber-optic network covering the whole city, which allowed us to do away with local servers in the schools at an early stage and focus on a centralized IT concept instead. As a result, all the school IT in Fulda now runs over centralized servers in our administration center. We operate an Active Directory domain of our own design on these servers with centralized domain controllers, to which a terminal server farm comprising both Windows and Citrix servers as well as the central file servers are connected.

All the computers in the schools – many of which are cost-effective thin clients – log on to these servers to access data and services. All told, the network comprises almost 1,500 Windows PCs and notebooks plus 850 thin clients. Access to the pupil network is regulated via 130 access points, which allow 3,500 users to access the network simultaneously during school hours.

Illustration about introduction of UCS@school in Fulda schools

Automation should reduce administrative efforts

Last year, we decided to take the centralization of our IT infrastructure one step further. The aim was to reduce the necessary administrative efforts even further on the one hand and to integrate new services for our users into the central concept at the same time on the other. The first step in this respect was to identify a reliable system for centralized identity management, to which the rest of the applications could be mounted.

Our list of requirements included:

  • It should be possible to import all the pupil data from the LUSD administration software, in which all teachers and pupils in Hesse are registered, running at the IT center of the Hessian Data Processing Center (HZD) in Wiesbaden. This import should be performed automatically and in encrypted form within our Active Directory.
  • It should be possible for the IT administrators at the respective schools to maintain the teaching staff’s user accounts manually and simply via a web interface.
  • It should be possible to maintain all user groups, directories, and shares from the XML files imported from LUSD by means of an automated process.
  • It should be possible for pupils to reset their passwords themselves via a self-service portal and to enter personal information such as e-mail addresses and cell phone numbers.
  • It should be possible to integrate additional services for the users such as Office 365 or private cloud applications simply and reliably.

UCS@school permits centralized identity management

After searching for a while, we came across UCS and UCS@school in spring 2016. After thorough testing, it soon became clear that the centralized identity management and access management offered by UCS presented an excellent solution for realizing our principle of centralization with the same degree of transparency. Following the kick-off in February 2016, we were able to complete the roll-out of UCS as early as the end of the summer vacation in July and manage all the pupils’ and teaching staff’s identities via UCS’ identity management system. The majority of users didn’t notice anything until the user login in Windows was changed over to “Named Accounts”.

Univention Corporate Server has been taking care of the synchronization of the user data between Active Directory and UCS, the provision of home directories, and the provision of self-service functions in our centralized IT center ever since. Another important function adopted by UCS is the automatic import of the user data from the state of Hesse’s LUSD directory already mentioned above. In this step, UCS imports the name, class and school of the users and generates a password for each user, which can be changed by the individual at a later point in time. This one password allows the user access to all the services and data as well as the school’s wireless Internet. For the resetting of passwords, UCS offers the option, for example, of saving a user’s private e-mail address, which the self-service function in UCS then uses to send each pupil a token for resetting his or her password without the need for a teacher to be involved. This process reduces administrative efforts significantly. Just imagine how often passwords need resetting in a network with 14,000 users!

In addition, the automatic life cycle management offered by UCS is also very important to us. If a pupil or member of the teaching staff leaves the school system or changes schools, this information can be input into LUSD with the corresponding effects on all the resources he uses and his rights. That is a point that I would like to address in more detail when we move on to the use of Office 365.

Framework agreement offers cost-effective use of Office 365 for pupils and teaching staff

In the scope of the further development of our service offering for the schools, we investigated the possibility of allowing pupils and staff to use Office applications, as this request was voiced time and time again by schools, and we wanted to offer them appropriate support.

It turned out that the additional fees for providing a sufficient number of licenses for the use of Office 365 in the schools would be low thanks to the administration’s existing framework agreement with Microsoft. Thanks to an expansion clause in the framework agreement, the Office programs can be directly installed and used on up to five devices and an additional five mobile devices for each license owned at no extra charge. The agreement allows the pupils to use Office 365 Pro Plus and the teaching staff to use OneDrive and Office Online too. On top of the existing contractual fees for the FWU framework agreement, it was only an extra 0.05 € per pupil or teacher for the use of Office 365 Pro Plus each year. It was going to be hard to find a better deal than that! The hierarchical roles system in UCS@school proved particularly useful when it came to implementing the different teacher and pupil access privileges. More about that in a minute.

The challenge: Office 365 access complying with data privacy regulations

Once we’d discovered this cost-efficient solution, the next step was to achieve Office 365 access which complied with the pertinent data privacy regulations. After all, our data privacy officer signaled early on that Office 365 as a web service saves content and user data on its own Microsoft Azure cloud – a scenario which fundamentally contradicts German data privacy regulations concerning the treatment of pupils’ data. As the situation stands at present, it will not be possible to employ Office 365 in its standard configuration in Fulda until there is an option which complies with data privacy regulations, for example use via the “Deutschland Cloud”, which still appears to be at a very early stage of planning. Consequently, we needed to consider another option via which we could still make the financially attractive offer available as a service.

At this point, the Microsoft Office 365 Connector made available in the App Center by Univention came into play. Thanks to authentication via the SAML technology integrated in UCS, all users can log on to UCS with their password as usual. The authentication to the web service is processed via UCS – the password and username remain in the internal system and are not communicated to Office 365 and saved there. Nevertheless, the problem remained that content created in the Office 365 applications would be saved on the Azure cloud, which is likewise not in line with data privacy regulations. Our solution to this problem: after registering with the web service, our teaching staff and pupils download the on-premise version of the Office programs, which they then install on their own computers and use locally. This keeps both the user data and the content within our own system, ensuring that they are not saved on Azure.

We installed the Office 365 Connector directly from the Univention App Center and connected it to the Azure Active Directory via an interface. This then allowed us to connect our UCS environment with Azure, with the result that the user authentication required for Office 365 could be effected via UCS’ password service.
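
The same installation can also be done from the command line with the univention-app tool (a sketch; the app ID “office365” is an assumption based on the App Center listing):

univention-app install office365   # app ID assumed; ‘univention-app list’ shows the exact name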

Screenshot Office 365 Integration in UCS@school

Central control of Office 365 profiles via the LDAP server integrated in UCS

In the initial setup, we performed the configuration of the Office 365 profiles centrally via the UMC (Univention Management Console), UCS’ web-based management tool. Once all the important parameters had been entered and settings made, we were able to assign the profiles to groups (e.g., pupils / staff) in UCS. As such, it was simple to provide the staff group at school A with the extended Office functions as described above while only permitting users with a “pupil identity” access to Office 365.

As a parameter for the unambiguous identification of users, we decided to use a dummy e-mail address for the respective user in Fulda. Even though it is not currently in use, it could be included in additional scenarios in the future, for example the introduction of a school e-mail solution.

Central administration of the license data allows efficient control

As already mentioned above, personnel changes are becoming more and more common in larger school environments in particular, which is why keeping the number of actively used licenses under control was an important matter for us. UCS@school also offers us convenient administrative solutions in this respect. For example, the centralized identity management system in UCS can be used to assign each user his own Office license. If a pupil or member of staff leaves the school, the information only needs to be updated in the centralized system once. The replication mechanism then automatically relates the information to all necessary points and adapts the user’s license usage accordingly too. The pupil or teacher’s Microsoft license is automatically disabled and deleted within a couple of weeks. This allows us in the school administration to stay on the safe side with respect to the number of active licenses and not worry about running out of licenses, all at no extra administrative cost.

Screenshot License activation for Office 365 for a test user

And that’s not all…

As outlined above, the introduction of the centralized identity management and access management system with UCS@school has not only reduced the necessary administrative efforts significantly for us as an education authority and within the schools themselves – it also opened up an opportunity for us to introduce further applications such as Office 365, in a manner compliant with data privacy regulations no less.

It goes without saying that we will be implementing even more steps in the years to come. For example, there are also plans to establish a private cloud, as it is anything but certain whether the planned German education cloud will actually be implemented in the foreseeable future. And why should we wait for the implementation when we have the opportunity via the Univention App Center, for example, to integrate a private cloud service in our IT infrastructure now? We already have a number of great ideas and we are delighted to be in a position to offer the schools under our care modern, tailored and efficient IT.

Der Beitrag Data Privacy-Compliant Integration of Office 365 in Fulda schools erschien zuerst auf Univention.

15 June, 2017 09:38AM by Maren Abatielos

Kaspersky Security for Linux Mail Server 8.1 in the Univention App Center

The Kaspersky Security for Linux Mail Server app was developed by bitbone AG in cooperation with Univention and with support from Kaspersky Lab.

The proven security product from Kaspersky is thus also available for the widely used Univention Corporate Server. Thanks to the adaptations, it can be easily installed and deployed via the App Center.

Screenshot Kaspersky Security for Linux Mail Server in UCS 4.2

The functions at a glance

  • Scan of incoming, outgoing and archived mails
  • Intelligent spam filtering reduces network workload
  • Reporting, statistics and logs

The app by bitbone protects email on UCS-based mail servers. Developed specifically for use on UCS, the Kaspersky Security for Linux Mail Server app can be easily installed via the Univention App Center. A separate interface for managing the anti-spam and anti-virus engines, backup options, rules, and reports supports administrators in their work.

The robust anti-virus engine is optimized for operation and integration in Linux environments. Its minimal use of server resources helps avoid bottlenecks, and the configuration offers various options for fine-tuning how system resources are used.

Multi-layered spam filtering, which draws on patterns from Kaspersky’s KSN service as well as those used by the anti-virus engine, achieves a very good spam detection rate. Filtering out unwanted messages in this way significantly reduces data traffic.

Screenshot Kaspersky Security for Linux Mail Server Traffic Chart

What makes the app different from other solutions?

Years of experience

As a partner from the very start in Germany, bitbone has accumulated experience with Kaspersky products since 2001. At the same time, the IT service provider’s focus has been on Linux and open source – particularly on the Debian distribution, which UCS is based on. Contact between Univention and bitbone also dates back to when both companies were founded.

Stable, high-performance engine

Thanks to the latest anti-virus engine, Kaspersky Security for Linux Mail Server achieves high detection rates and fast scanning speeds. Malicious email attachments are detected quickly and accurately.

Very good heuristics

The Kaspersky heuristics offer real-time protection – even against newly emerging threats. Via the cloud-based KSN database, the solution receives information on infections and malware attacks worldwide. This information is used to improve the real-time protection of all customers.

The app has been developed exclusively for UCS and is only available through the Univention App Center. In this context, bitbone also offers corresponding support packages for UCS customers. With a special test key, the app can be used free of charge during a 30-day trial period. After the test key has expired, a runtime key for the required number of users must be acquired from bitbone.

Test Kaspersky with UCS

Der Beitrag Kaspersky Security for Linux Mail Server 8.1 in the Univention App Center erschien zuerst auf Univention.

15 June, 2017 07:40AM by Maren Abatielos

hackergotchi for Ubuntu developers

Ubuntu developers

Ted Gould: Replacing Docker Hub and Github with Gitlab

I've been working on making the Inkscape CI performant on Gitlab, because if you aren't paying developers you want to make developing fun. I started by implementing ccache, which got us a 4x build-time improvement. The next piece of low-hanging fruit seemed to be the installation of dependencies, which rarely change but were getting installed on each build and test run. The Gitlab CI runners use Docker, so I set out to turn those dependencies into a Docker layer.

The well-worn path for creating a Docker layer is to make a branch on Github and then add an automated build on Docker Hub. That leaves you with a Docker repository containing your Docker layer. I did this for the Inkscape dependencies with this fairly simple Dockerfile:

FROM ubuntu:16.04
RUN apt-get update -yqq 
RUN apt-get install -y -qq <long package list>
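
As an aside (a refinement not in the original post), the usual Docker idiom is to chain the update and install steps into a single RUN layer, so a cached index layer can never go stale relative to the install step; <long package list> remains the same placeholder as above:

FROM ubuntu:16.04
# One layer: the package index is always refreshed right before installing
RUN apt-get update -yqq && \
    apt-get install -y -qq <long package list>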

For Inkscape, though, we'd really rather not set up another service with its own accounts and permissions. That led me to Gitlab's Container Registry feature. I took the same Git branch and added a fairly generic .gitlab-ci.yml file that looks like this:

variables:
  # Name the image after the project and the Git branch (slug) it was built from
  IMAGE_TAG: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_SLUG}:latest

build:
  image: docker:latest
  services:
    # Docker-in-Docker service so the job itself can run docker commands
    - docker:dind
  stage: build
  script:
    # Log in to the project's own registry with the CI-provided credentials,
    # then build and push the dependency layer
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
    - docker build --pull -t ${IMAGE_TAG} .
    - docker push ${IMAGE_TAG}

That tells the Gitlab CI system to build a Docker layer with the same name as the Git branch and put it in the project's container registry. For Inkscape you can see the results here:

We then just need to change our CI configuration for the Inkscape CI builds so that it uses our new image:

image: registry.gitlab.com/inkscape/inkscape-ci-docker/master
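
For context, a complete job riding on top of that prebuilt layer might look like the following sketch; the image line is the real one from above, while the build steps are purely illustrative assumptions and not Inkscape's actual CI script:

image: registry.gitlab.com/inkscape/inkscape-ci-docker/master

build:
  stage: build
  script:
    # Hypothetical build steps; the dependency layer means none of them
    # need to install packages first
    - mkdir -p build && cd build
    - cmake ..
    - make -j$(nproc)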

Overall the results were a saving of approximately one to two minutes per build. Not the drastic improvement I was hoping for, but this is likely because the builders are more IO-constrained than CPU-constrained, so uncompressing the layer costs roughly the same as installing the packages. It still amounts to a 10% saving in total pipeline time. The bigger, unexpected benefit is that it has cleaned up the CI build logs: the first page now starts the actual Inkscape build instead of requiring a scroll through pages of dependency installation (old vs. new).

15 June, 2017 05:00AM

Jono Bacon: Work Smarter: The Cocktail of Simplicity, Manageable Adversity, and Muntzing

Back in the 40s, TVs were giant, ugly behemoths. They were jammed with vacuum tubes, big bulky components, and were prone to overheating and failure.

Earl “Madman” Muntz was an engineer and businessman who started repairing radios when he was 8 and built his first car radio when he was 14. As only someone with the nickname ‘Madman’ could, when he worked in the TV business he would walk around the factory floor, step in front of an unsuspecting engineer, and yank components from TV sets until they stopped working. Then he would put the last removed component back in, and the set would often work, but now with fewer components (thus cheaper) and often with other benefits such as reduced heat.

Netflix not included.

This practice became known as muntzing, which, while it sounds like some awful way of hazing people with a hosepipe, was actually a deliciously simple exercise in efficiency and cost reduction. Rather unsurprisingly, this provides an interesting lesson we can apply beyond knackered old TVs from the forties.

Simple but not Simpler

Muntz was fundamentally zeroing in on simplicity as a way to achieve efficiency.

He wasn’t the first. Around the same time period, Einstein, a man not especially unfamiliar with genius, said:

“Everything should be made as simple as possible, but not simpler.”

Einstein touches on the elegance of simplicity and warns us not to be fooled into thinking that simple things come from simple minds. We see this every day with seemingly simple devices (e.g. the iPhone) that carefully conceal enormous amounts of complexity behind the scenes, both in terms of technology and workflow.

Somewhat smart fella.

For the work I do in building productive and engaging communities and organizations, this is nirvana. My ultimate goal is to build human systems that deliver solid, productive, and predictable results but are simple in their instrumentation and use. As you can imagine, there is often a lot of complexity that goes into doing this.

So, Muntz and Einstein give us a good recipe: focus on simplicity as a means to accomplish efficiency, and reduce the complexity as a means to become lean. Sounds great in theory, but how do we do this?

Harnessing Adversity to Build Efficiency

There are various reasons why things become inefficient: people get lazy and take shortcuts, complexity slows things down, too many layers of abstraction contribute to this complexity, people accept the new reality of inefficiency and don’t challenge it…the list goes on.

We see this everywhere in the products we build, the organizational methodologies we have in companies and communities, the systems we have to use to file our taxes or invoices, and elsewhere. Not seen this? Go to the DMV in America. You will get it in droves.

An effective way to create efficiency and optimization is when we have manageable adversity. That is, we face tough situations that are within our control, capability, and power to resolve and learn from.

Muhammad Ali said it best:

“I don’t count my situps, I only start counting when it starts hurting, when I feel pain, that’s when I start counting, cause that’s when it really counts.”

Our most difficult moments in life, when we can feel beaten down, tired, and lost, can be the most formidable times of personal growth, evolution, and development. If we therefore instill the right level of adversity into our work, complete with having the ability to resolve it (which is the key difference between adversity being a helpful thing or a discriminatory force), we develop efficiencies.

Putting This Into Practice

As such, there can be enormous value in deliberately injecting adversity into our work as a forcing function to get better results. In other words, sometimes the easiest path forward is not the best path if we want to increase our capability and creativity. Sometimes throwing a few obstacles people need to navigate can be a useful thing.

Here are five recommendations to consider.

1. Add intentional burdens

Baseball players would swing two bats, drummers would put additional weights on their ankles, and powerlifters would lift additional weight beyond competition requirements, all for the same reason: when you remove the additional burden, your performance improves.

Think about how you can introduce an intentional restriction that forces you and your teammates to think creatively about how to solve the problem.

For example, an ex-colleague of mine at Canonical was once facing a very low-level bug in the Linux kernel that occurred before the screen was powered up, so he saw no error messages to indicate the issue. His solution? He wrote a kernel driver that flashed the caps lock light in morse code and used a sensor sitting on the light to read the morse back and reveal the issue.

He faced a burden, and that burden generated a creative solution. In a similar way, Steve Jobs famously demanded Burrell Smith accomplish his vision of the first Macintosh with fewer hardware components than he had available. These burdens generated remarkable outcomes.

2. Require an ambitious metric

Create an ambitious metric that a solution is required to meet. It is incredible what people will do to hit a given metric, be it a score in a video game, a measurement on a device, or a target weight to get into a suit at a wedding (ahem!).

A good example here was the first XPRIZE (I used to work at XPRIZE a few years back). A $10 million prize would be awarded to the team that built a reusable spacecraft that could go up into space and back twice in two weeks.

Bags fly free.

This ambitious requirement to win the competition made teams think creatively across a wide range of engineering challenges to accomplish the goal. The result: the birth of commercial space travel development.

Think about how you can place an ambitious requirement on the outcome of a particular project, one that really helps the team focus on accomplishing that goal in a creative, lean, and ambitious way.

3. Iterate and optimize

An approach I use throughout my work is to break work down into smaller pieces to (a) generate data we can use to assess the success or failure of a project, and (b) use that data to iterate, improve, and test again.

This is one of the most fundamentally important approaches to evolving any kind of product or process: we can’t improve without information and iteration. As one consideration when iterating, always ask the question “how can we do more with less?”. Tiny improvements and efficiencies on a regular cadence will stack up and deliver incredible overall results.

4. Focus on creative solutions

This may seem a little generic, but all too often we constrain our thinking with existing ways of working.

As an example, one company I have worked with wanted to get a complex developer platform online quickly. The engineering team drafted a plan to build a complex infrastructure, complete with APIs, and a difficult to understand process for using it.

The founder responded with “just put a damn web server online and let people upload files”. He was right: as a minimum viable product (for a young, and potentially experimental project), he wanted to focus on shipping something that worked.

As Reid Hoffman, founder of LinkedIn, once famously said:

“If You’re Not Embarrassed By The First Version Of Your Product, You’ve Launched Too Late”.

Reid is right. Be like Reid.

5. Build a hackable culture

If there is one thing I have learned about innovation over the years, it is that you can't predict where it will come from. One such example is Jack Andraka, who invented a cheaper and more effective pancreatic cancer test while he was in high school.

You can’t instruct someone to be innovative, but you can build a culture that encourages and allows people to innovate. Innovation requires permission to flourish, and to accomplish this, you need to encourage and allow people to produce interesting hacks that do interesting things.

Encourage your teams to explore new ideas, build them, and demo them. Encourage people to hack on and improve products and services as proof of concepts. If you have a permissive environment that encourages people to hack, explore, and be creative, you will get that same kind of ethos when you create production products and services.

This can be nerve-wracking for some companies because it can feel like it encourages people to challenge the norms of the company. It does, and that is a good thing. Part of being a hackable culture is to actively encourage people to call you on your bullshit and propose better, more efficient, and more interesting solutions.

I would LOVE to hear your thoughts here. What do you think of these recommendations? Do you agree or disagree with them? Can they be improved? How else can we build better things? Share your ideas and feedback in the comments below…

The post Work Smarter: The Cocktail of Simplicity, Manageable Adversity, and Muntzing appeared first on Jono Bacon.

15 June, 2017 04:45AM

June 14, 2017

hackergotchi for Grml developers

Grml developers

Michael Prokop: Grml 2017.05 – Codename Freedatensuppe

The Debian stretch release is going to happen soon (on 2017-06-17), and since our latest Grml release is based on a very recent version of Debian stretch, I’m taking this as an opportunity to announce it here as well. So at the end of May we released a new stable version of Grml (the Debian-based live system focusing on system administrators’ needs), version 2017.05, codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via grml.org/download.

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

14 June, 2017 08:46PM

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Croissants, Qatar and a Food Computer Meetup in Zurich

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter, and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other measures prompted by the embargo led by Saudi Arabia.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

14 June, 2017 07:53PM

hackergotchi for Tanglu developers

Tanglu developers

CMlyst 0.3.0 released

CMlyst is a Web Content Management System built using Cutelyst; it was initially inspired by WordPress and later by Ghost, so it’s a mixture of both.

Two years ago I made its first release, and since then I’ve been slowly improving it; it has been in production for that long, powering the www.cutelyst.org website/blog. The 0.2.0 release was a silent one which marked the transition from QSettings storage to SQLite.

Storing content in QSettings seemed interesting at first since it’s easy to use, but it quickly proved unsuitable: it kept leaving .lock files behind, and since it’s not very fast to access I used a cache holding all the data, with a notifier updating it when something changed in the directory. This, however, didn’t reliably trigger QFileSystemWatcher, so once a new page was published the cache wasn’t properly updated.

Once it was ported to SQLite, I decided to study how Ghost worked, mainly because many Qt/KDE developers were switching to it. Ghost is quite simplistic, so it was very easy to provide something fairly compatible with it; porting a Ghost theme to CMlyst requires very few changes because its syntax is close to Grantlee/Django.

Porting to SQLite also made it clear that an export/import tool was needed, so you can now import and export in a JSON format pretty close to Ghost’s. You can even import all your Ghost pages with it, but the opposite won’t work, because we store pages as HTML, not Markdown. My feeling about Markdown is that it is simple to use and convenient for geeks, but it’s yet another thing to teach users who could simply use a WYSIWYG editor.

Security-wise, you need to be sure that both Markdown and HTML are safe, and CMlyst doesn’t do this yet, so if you put it in production make sure that only users who know what they are doing use it; you can even break the layout with an unclosed tag.

But don’t worry, I’m working on a fix for this: html-qt is a WHATWG HTML5 specification parser, mostly complete, though the part that builds a DOM is not done yet. With it, I can make sure the HTML won’t break the layout and remove unsafe tags.

Feature-wise, CMlyst has 80% of Ghost’s features; if you like it, please help add the missing features to the admin page.

Some cool numbers

Comparing CMlyst to Ghost can be tricky, but it’s interesting to see the numbers.

Memory usage:

  • CMlyst uses ~5MB
  • Ghost uses ~120MB

Requests per second (using the same page content)

  • CMlyst 3500/rps (production mode), 1108/rps (developer mode)
  • Ghost 100/rps (production mode)

While the RPS numbers are very different, in production you can use an NGINX cache, which would make Ghost’s slow RPS a non-issue, but that comes at the price of more storage and RAM usage. If you run on an AWS micro instance with 1 GB of RAM, this means you can have far fewer instances running at the same time; some simple math shows you could have roughly 200 CMlyst instances vs 8 of Ghost.
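
Spelled out, the back-of-the-envelope math behind that claim (a rough sketch using the memory figures above and ignoring OS overhead):

1024 MB / ~5 MB per CMlyst instance  ≈ 200 instances
1024 MB / ~120 MB per Ghost instance ≈   8 instances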

Try it!

https://github.com/cutelyst/CMlyst/archive/v0.3.0.tar.gz

Sadly, it’s also likely that soon I’ll be forking Grantlee; the lack of maintenance just hit me yesterday (when I was about to release this). Qt 5.7+ changed QDateTime::toString() to include TZ data, which broke Grantlee’s date filter, which wasn’t expecting that, so I had to apply a weird workaround marking the date as local to avoid the extra information.


14 June, 2017 02:14PM by dantti

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Project Sputnik: crazy idea to community driven developer systems

This is a guest post by Barton George from Dell. If you would like to contribute a guest post, please contact ubuntu-iot@canonical.com

Five years ago, I pitched a crazy idea to an internal innovation team at Dell: what if Dell took its highest-end laptop, pre-loaded Ubuntu on it, included all the needed drivers and targeted it at developers?  Rather than a no-brainer, this proposal struck the innovation team as counter-intuitive. Dell had already been successfully selling systems preloaded with Ubuntu, but these had been lower-end offerings.  Would customers really be willing to pay for high-end Linux-based systems? And what did developers really need or want?

The innovation team mulled the proposal over and after a month gave me the green light. I was given a small pot of money and six months to see if the idea, christened “Project Sputnik”, would fly.  From the start, one of the key tenets of the project was that the effort would be conducted transparently and publicly. Developers would be specifically asked what they wanted in a Linux laptop targeted at them. The idea was presented as an exploratory project, but if things went really well, it might just become a real product.

It didn’t take long to learn that the idea of an open-source-based laptop that ‘just worked’ appealed to a large audience of developers. Specs and capabilities that would make up the ‘perfect’ developer laptop came pouring in. A few months into the project, the tipping point came in the form of a beta program.  When the beta program was announced, rather than the couple hundred responses our team expected, 6,000 people from around the world raised their hands to participate. With this, the team knew the project deserved to become a real product. Seven months after the idea had initially been presented, the XPS 13 developer edition debuted.

The initial offering was one system with one config. Today, four and a half years later, the effort has expanded into an entire line of systems. The XPS 13 developer edition is now in its sixth generation, and two years ago an Ubuntu-based Precision mobile workstation was added. That one Precision workstation is now a series of four, and as of April 2017 a 27” All-in-One has been added:

  • Dell Precision 5520, mobile workstation, World’s thinnest and lightest 15” mobile workstation
  • Dell Precision 3520, mobile workstation, Affordable, fully customizable 15” mobile workstation
  • Dell Precision 7520, mobile workstation, World’s most powerful 15” mobile workstation
  • Dell Precision 7720, mobile workstation, World’s most powerful mobile workstation
  • Dell Precision 5720, All-in-One, 27” All-in-One workstation class machine

In the last two years the project has kicked into high gear, with 100% year-over-year growth. As the Sputnik line of developer systems goes forward it will continue to evolve, and, as it has been since the effort began, this evolution will be guided by community input. The entire Sputnik team would like to thank the community whose input turned a speculative project into a line of products. Your support and input have guided the products from day one and are what keep the effort moving forward.

14 June, 2017 10:48AM